OpenAI and Anthropic agree to share their models with the US AI Safety Institute

OpenAI and Anthropic have agreed to share AI models — before and after release — with the US AI Safety Institute. The agency, established through an executive order by President Biden in 2023, will offer safety feedback to the companies to improve their models. OpenAI CEO Sam Altman hinted at the agreement earlier this month.

The US AI Safety Institute didn’t mention other companies tackling AI. But in a statement to Engadget, a Google spokesperson said the company is in discussions with the agency and will share more information when it’s available. This week, Google began rolling out updated chatbot and image generator models for Gemini.

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” Elizabeth Kelly, director of the US AI Safety Institute, wrote in a statement. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

The US AI Safety Institute is part of the National Institute of Standards and Technology (NIST). It creates and publishes guidelines, benchmark tests and best practices for testing and evaluating potentially dangerous AI systems. “Just as AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions,” Vice President Kamala Harris said in late 2023 after the agency was established.

The first-of-its-kind agreement takes the form of a formal but non-binding Memorandum of Understanding. The agency will receive access to each company’s “major new models” ahead of and following their public release. The agency describes the agreements as collaborative, risk-mitigating research that will evaluate capabilities and safety. The US AI Safety Institute will also collaborate with the UK AI Safety Institute.

It comes as federal and state regulators try to establish AI guardrails while the rapidly advancing technology is still nascent. On Wednesday, the California state assembly approved an AI safety bill (SB 1047) that mandates safety testing for AI models that cost more than $100 million to develop or that require a set amount of computing power. The bill requires AI companies to have kill switches that can shut down the models if they become “unwieldy or uncontrollable.”

Unlike the non-binding agreement with the federal government, the California bill would have some teeth for enforcement. It gives the state’s attorney general license to sue if AI developers don’t comply, especially during threat-level events. However, it still requires one more process vote — and the signature of Governor Gavin Newsom, who will have until September 30 to decide whether to give it the green light.

Update, August 29, 2024, 4:53 PM ET: This story has been updated to add a response from a Google spokesperson.


