OpenAI is reportedly preparing to launch a new artificial intelligence product with advanced capabilities that can solve problems and perform tasks beyond the reach of current AI models.
First reported by The Information, the new model is internally referred to as “Strawberry” and is said, among other things, to be able to solve math problems it has never encountered, perform high-level tasks such as developing marketing strategies, and even solve complex word puzzles. In one example, the new model was able to solve the New York Times word puzzle “Connections.”
Previous claims about the model include that it scores over 90% on the MATH benchmark, a collection of championship-level math problems. By comparison, GPT-4 scored only 53% on the test and GPT-4o achieved 76.6%. As of July, GPT-4o had the highest MATH benchmark score of any currently available AI model, meaning that if Strawberry delivers as promised, it will put OpenAI well ahead of its competitors.
Where the news perhaps becomes more interesting is in the history of the model’s development, because as nice and noncontroversial as the name “Strawberry” sounds, it wasn’t always known by that name. The model was previously known internally as Q* (pronounced Q-Star), and it was central to the brief period of chaos that descended on OpenAI last year, including the ouster of Chief Executive Officer Sam Altman before he returned to the company days later.
As reported in November, Altman’s ouster was said to have been influenced by concerns about a major AI breakthrough achieved by the company. A group of OpenAI researchers wrote a letter to the board before Altman’s ouster highlighting the potential risks that could be posed by advanced AI and, specifically, Q*, the model now known as Strawberry.
Among the claims made at the time was that Q* could represent a major breakthrough in the journey toward building artificial general intelligence. AGI is an advanced form of AI that can understand, learn and apply knowledge across a wide range of tasks, similar to human cognitive abilities.
The risk presented by AGI comes down to the potential loss of control over AI systems, leading to unintended consequences and the possibility of AGI developing goals misaligned with human values, which could result in significant harm. Put less technically, there’s a worry that an AGI model could turn into Skynet from the “Terminator” movies.
According to The Information, Strawberry could be released sometime in the fall.
Image: SiliconANGLE/Ideogram