NIST

How a Trump Win Could Unleash Dangerous AI

The reporting requirements are essential for alerting the government to potentially dangerous new capabilities in increasingly powerful AI models, says a US government official who works on AI issues. The official, who requested anonymity to speak freely, points to OpenAI’s admission about its latest model’s “inconsistent refusal of requests to synthesize nerve agents.” The official says the reporting requirement isn’t overly burdensome. They argue that, unlike AI regulations in the European Union and China, Biden’s EO reflects “a very broad, light-touch approach that continues to foster innovation.” Nick Reese, who served as the Department of Homeland Security’s first director of emerging technology…

OpenAI and Anthropic agree to share their models with the US AI Safety Institute

OpenAI and Anthropic have agreed to share AI models, both before and after release, with the US AI Safety Institute. The agency, established through an executive order by President Biden in 2023, will offer safety feedback to the companies to improve their models. OpenAI CEO Sam Altman hinted at the agreement earlier this month. The US AI Safety Institute didn’t mention other companies working on AI, but a Google spokesperson told Engadget the company is in discussions with the agency and will share more information when it’s available. This week, Google began rolling out updated chatbot…