Many ML papers have popularized a technique called chain-of-thought (CoT) prompting, where AI models are asked to break down their reasoning step by step. While this approach often improves performance, researchers from Princeton University and NYU have discovered that in certain scenarios it can significantly harm LLM performance – just as overthinking can sometimes impair human performance!
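To make the distinction concrete, here is a minimal sketch of a chain-of-thought prompt next to a direct prompt. The helper names and the trigger phrasing are illustrative, not taken from the paper; in practice the returned strings would be sent to whatever LLM API you use.

```python
def direct_prompt(question: str) -> str:
    """Ask for the answer only -- the baseline style."""
    return f"{question}\nAnswer with the final result only."

def cot_prompt(question: str) -> str:
    """Chain-of-thought style: the classic 'step by step' trigger
    nudges the model to write out its reasoning before answering."""
    return f"{question}\nLet's think step by step."

question = ("A bat and a ball cost $1.10 in total. The bat costs "
            "$1.00 more than the ball. How much does the ball cost?")

print(direct_prompt(question))
print(cot_prompt(question))
```

The researchers' point is that the second style is not a free win: the extra verbalized reasoning can hurt on some task types, so it is worth benchmarking both styles on your own task.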
Let’s see what the researchers found and how to avoid this in our own work.