Is AI Progress Hitting a Wall?



A wave of recent articles proclaims the death of deep learning. Leaked reports suggest OpenAI’s new model Orion finished training without showing nearly the improvement that GPT-4 achieved over GPT-3. Critics like Gary Marcus are already writing gloating eulogies.

So, is AI progress slowing down? No. Let me tell you why.

My nephew is 2 years old. Over the last year or so he’s rapidly become much more mobile. First, he learned to crawl, sticking his butt into the air and pushing with his knees to zoom across the room. Soon after that—and much to his parents’ chagrin—he began pulling himself up the sides of chairs and coffee tables. Then in the blink of an eye he was walking, unsteadily at first, but with increasing confidence. Then he was running! And waving his arms in the air as he ran, like a clumsy ballerina.

In the past month, though, this rapid progress has slowed. He's no longer getting nightly mobility upgrades. You can still see progress, but it's measured in weeks rather than days. It's more subtle, too, visible in a slightly more fluid and confident gait, or in the dexterity to ride a scooter.

So yes, as far as his movement goes, he's mostly done making exponential progress. However, this is no cause for alarm. No one is sending concerned messages to the family group chat wondering if his growth is somehow stunted.

Why? He’s now growing quickly along an entirely new dimension: learning to say no. He doesn’t want to listen to the music his parents have on; he wants to hear “Baby Shark.” He doesn’t want to eat what his parents have made for him; he wants noodles. He doesn’t want me to read him his favorite book; he wants Mama.

For my nephew, it’s a completely new paradigm of exponential improvement—one he’s transitioned to smoothly from the previous one.

AI is quite similar. 

It is factually accurate to say that AI progress may be slowing along one dimension: pre-training. (This is the practice of wresting exponential performance improvements from language models by training them with more data and more compute.) But if that’s all you say, you’re fundamentally missing the point. It would be like saying my nephew’s mobility growth is slowing down without also mentioning the rapid progress in his language skills and sense of self.

It does appear that pre-training is hitting some kind of diminishing marginal returns (though even it hasn't been totally maxed out, according to sources I've talked to). But regardless of which perspective you believe, these headlines miss the bigger picture.

What is not under debate is that we've just discovered an entirely new paradigm that creates an untapped opportunity for exponential progress: inference-time compute. In this paradigm, first introduced by OpenAI's o1 model, we can get better results from models by letting them "think" for longer before they respond to a prompt. This gives us the ability to spend computing resources not just on pre-training but on inference—which creates new territory for exponential scaling.
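One simple way to picture trading inference compute for quality is best-of-n sampling: generate several candidate answers, score each, and keep the best. This is a toy sketch, not how o1 actually works—the fixed candidate scores, the scoring, and the function names below are all illustrative assumptions:

```python
# Toy illustration of inference-time compute: spending more compute at
# answer time (scoring more candidates) tends to yield better results.
# In a real system, candidates would be sampled from a model and scored
# by a verifier; here a fixed pool of hypothetical scores stands in.
CANDIDATE_SCORES = [0.41, 0.62, 0.35, 0.88, 0.57, 0.73, 0.49, 0.91]

def best_of_n(n: int) -> float:
    """Score n candidate answers and keep the best one (best-of-n)."""
    pool = CANDIDATE_SCORES[:n]
    return max(pool)

# More inference-time compute (larger n) never makes the best answer worse:
for n in (1, 2, 4, 8):
    print(f"n={n}: best score {best_of_n(n)}")
```

The point of the sketch is the monotonic trade-off: each doubling of answer-time compute buys another chance at a better response, a scaling axis that is independent of how the model was pre-trained.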




My understanding from people inside the big AI labs is that progress is continuing at a breakneck pace. Yes, depending on who you talk to, we may have squeezed most of the juice we can out of pre-training. But if you zoom out from that single question and look at things holistically, rapid advances are continuing to happen—and will continue for the foreseeable future.

Just like my nephew didn’t stop growing when he mastered walking—he simply shifted to mastering language—AI progress isn’t slowing down. It’s just switching to newly discovered paradigms of development.

Plan accordingly.


Dan Shipper is the cofounder and CEO of Every, where he writes the Chain of Thought column and hosts the podcast AI & I. You can follow him on X at @danshipper and on LinkedIn, and Every on X at @every and on LinkedIn.

We also build AI tools for readers like you. Automate repeat writing with Spiral. Organize files automatically with Sparkle. Write something great with Lex.




