OpenAI Appears to Have Accidentally Leaked Its Upcoming o1 Model to Anyone With a Certain Web Address

Could this be the real thing?

Mortal Domains

The full version of OpenAI’s latest AI model, called o1, appears to have leaked on Friday — only for the company to shut it down a mere two hours later.

As Tom’s Guide reports, a number of users on X-formerly-Twitter discovered that a simple tweak to the URL allowed them to access the AI model.

The model was first announced in September and has only been available in “preview” form to paying users since then.

But by changing the URL, users claimed to have found a workaround to access the full thing. OpenAI, however, has yet to comment on the matter, and it’s still unclear if they were indeed chatting with the company’s long-awaited AI model (Futurism has reached out to OpenAI for clarification).

The glimpses users claim to have gotten, though, suggest that the full release could mark a serious improvement over anything we’ve seen from the company previously.

Let ‘Er Rip

Users were initially impressed by the purported model’s capabilities, which ranged from solving a complex math problem to cracking an image puzzle.

One user found that the AI model could spit out a “full o1 chain of thought” after being asked to analyze a picture of a recent SpaceX launch.

Another Reddit user found that it “managed to process a massive JSON dump that wasn’t feasible with o1-preview due to its token limitations,” referring to a common file format coders use to store structured, human-readable data.

We still don’t know when OpenAI will make the full version of its o1 model available to users. As Tom’s Guide points out, the Sam Altman-led firm may be waiting out the current US presidential election this week.

But even in its “preview” form, the o1 model has already impressed experts with its improved performance on standardized tests and its new chain-of-thought reasoning.

Despite some impressive benchmarks, though, OpenAI recently found that its ability to provide correct — and not “hallucinated” — answers still leaves plenty to be desired.

Whether that will change at all with the release of the full version of the o1 model remains to be seen.

More on OpenAI: OpenAI Research Finds That Even Its Best Models Give Wrong Answers a Wild Proportion of the Time
