Did AI Already Peak and Now It’s Getting Dumber?

“After all the hype for me, it was kind of a big disappointment.”

Dumb and Dumber

If you think AI platforms like OpenAI’s ChatGPT seem dumber than before, you aren’t alone.

In a blistering opinion column for Computerworld, writer Steven Vaughan-Nichols says he’s noticed that all the major publicly accessible AI models — think brand-name flagships like ChatGPT and Claude — don’t work as well as previous versions.

“Indeed, all too often, the end result is annoying and obnoxiously wrong,” he writes. “Worse still, it’s erratically wrong. If I could count on its answers being mediocre, but reasonably accurate, I could work around it. I can’t.”

In a Business Insider article he flagged, users posting to the OpenAI developer forum also reported a significant decline in accuracy after the latest version of GPT was released last year.

“After all the hype for me, it was kind of a big disappointment,” one user wrote in June this year.

Model Citizen

Needless to say, this isn’t how things are supposed to work. Newer versions of software are generally supposed to be better than the ones they replace. So what’s behind the drop in quality?

One possibility is that these AIs were never quite as strong as they seemed — remember that their training data was culled from places like Reddit and Twitter — but we were wowed by the fact that they could function at all.

But another likely cause, in Vaughan-Nichols’ reckoning, is that AIs are now scraping AI-generated information in addition to that social media dreck, and it’s subtly eating away at their capabilities.

Vaughan-Nichols is referring to the idea of “model collapse,” a phenomenon in which AI models deteriorate when they’re fed AI-generated data, which is becoming increasingly prevalent as the internet grows more awash in AI-generated slop, from images to text.

“We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear,” reads one Nature paper published last month.
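To get a feel for what that means, here’s a minimal, hypothetical sketch in Python (not taken from the paper or the article) of the dynamic the researchers describe: a toy “model” is repeatedly refit to data generated by its previous version, and the spread of what it produces shrinks until the original distribution’s tails are effectively gone.

```python
# A minimal, hypothetical sketch of "model collapse": each generation of a toy
# model is fit only to samples produced by the previous generation, and the
# tails of the original data distribution gradually disappear.
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: stand-in for real, human-made data
data = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(501):
    # "Train" the toy model: estimate a mean and standard deviation from the data
    mu, sigma = data.mean(), data.std()

    if generation % 100 == 0:
        # Estimate how much of the model's output still lands in the original
        # distribution's tails (|x| > 2, roughly 4.6% of the generation-0 data)
        tail = np.mean(np.abs(rng.normal(mu, sigma, 100_000)) > 2.0)
        print(f"gen {generation:3d}  sigma={sigma:.4f}  tail mass={tail:.4f}")

    # The next generation is trained only on synthetic data from this model
    data = rng.normal(loc=mu, scale=sigma, size=20)
```

The exact numbers depend on the random seed, but the fitted spread reliably drifts toward zero over the generations — a crude statistical stand-in for a language model gradually forgetting rare phrasing, niche facts, and edge cases.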

This is going to become an increasing problem as the supply of high-quality, human-made content runs out, with some experts estimating that could happen by 2026.

Of course, it’s also possible we’ll rediscover the value of priceless and irreplaceable work done by humans — but we’re not holding our breath.

More on AI: ChatGPT is Absolutely Atrocious at Being a Doctor


