Telling ChatGPT to “think step by step” doesn’t (actually) help much

Chain-of-thought prompting (telling an LLM to “think step by step”) has been hailed as a powerful technique for eliciting complex reasoning from ChatGPT.

The idea is simple: show the model worked, step-by-step examples of how to solve a problem (or simply append a phrase like “Let’s think step by step”), and it will apply the same reasoning to new problems.
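To make that concrete, here is a minimal sketch of what a few-shot chain-of-thought prompt looks like next to a plain prompt. The worked example, the helper names (`build_plain_prompt`, `build_cot_prompt`), and the test question are illustrative and not taken from the study.

```python
# Minimal sketch of few-shot chain-of-thought prompting.
# The worked example and helper names are illustrative, not from the paper.

COT_EXAMPLE = """Q: A cafeteria had 23 apples. It used 20 for lunch and bought 6 more.
How many apples does it have now?
A: The cafeteria started with 23 apples. After using 20, it had 23 - 20 = 3.
After buying 6 more, it had 3 + 6 = 9. The answer is 9."""


def build_plain_prompt(question: str) -> str:
    # Direct prompt: just ask the question.
    return f"Q: {question}\nA:"


def build_cot_prompt(question: str) -> str:
    # Chain-of-thought prompt: prepend a worked, step-by-step example so the
    # model imitates that reasoning format on the new question.
    return f"{COT_EXAMPLE}\n\nQ: {question}\nA: Let's think step by step."


if __name__ == "__main__":
    question = "A farmer has 15 sheep and sells 8. How many are left?"
    print(build_plain_prompt(question))
    print("---")
    print(build_cot_prompt(question))
```

The chain-of-thought version spends far more tokens per query; whether those extra reasoning tokens actually buy you accuracy is exactly what the study questions.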

But a new study says otherwise, finding that chain-of-thought’s successes are far more limited and brittle than widely believed.

This study is making waves on AImodels.fyi, and especially on Twitter. Remember, people release thousands of AI papers, models, and tools daily. Only a few will be revolutionary. We scan repos, journals, and social media to bring them to you in bite-sized recaps, and this is one paper that has broken through.

If you want someone to monitor and summarize these breakthroughs for you, become a paid subscriber. And for our pro members, read on to learn why CoT might be a waste of tokens!


