We are defaulting toward a world in which artificial intelligences are built to serve under national flags — as pawns (or perhaps chess engines) in support of rival national goals.
Should we be outraged about this trend, or grimly realistic? Resigned or rebellious? Either way, this conscription of AI should be more widely known, whether or not it can or should be opposed.
Few, though, will want to discuss governance or policy unless something truly urgent is happening (that’s just how the human brain works), so first, let’s talk about timelines for an AI worthy of the word “intelligence.”
Those leading today’s top AI labs generally believe that a model “smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering…” is likely due by “2026 or 2027,” according to Anthropic CEO Dario Amodei. That’s 12 to 36 months from the time of writing.
A broader 2024 survey of 2,000 AI researchers put the chance of a 2027 date for human-level machine intelligence at merely 10% (!), which was still a huge update toward such early dates compared to previous iterations of the same survey.
Note that these more general artificial intelligences (or Nobel-worthy constructs) are distinct from so-called “superintelligence” — an action-taking agent more capable and intelligent than entire organizations or, in some definitions, the collective of humanity.
The date for such a superintelligence, according to OpenAI’s Sam Altman, is four to 15 years from now (2028 to 2039). Nobel winner Demis Hassabis, CEO of Google DeepMind, whose stated purpose is “to solve intelligence first, then use it to solve everything else,” believes their mission will be complete “within a decade.”
All of this might be an elaborate sales pitch or groupthink. Or the dates might be off by a few years. But the prospect of automating evolved human intelligence — a genius IQ in a thumb drive (or missile) — is not something that can be completely dismissed anymore.
In a different media landscape, the news that a model like o1 from OpenAI “exceeds human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems” would be dominating headlines.
OK, so LLMs and AI more broadly are getting pretty impressive, and improving fast (e.g. by verifying their own answers and thinking them over). But why should we think these advances are being nationalized?
After all, AlphaFold 3, which can “[predict] the structure and interactions of all of life’s molecules,” was just made freely available, work overseen by DeepMind’s Demis Hassabis. Almost anyone can access versions of ChatGPT for free, and Meta’s models are released as open weights. So shouldn’t we expect AI to be surrounded by fewer borders?
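To make those low borders concrete, here is a minimal sketch of how anyone with a capable machine can download and run one of Meta’s open-weights models. It assumes the Hugging Face transformers library; the model ID is illustrative, and Meta’s gated models require accepting a license and authenticating before download.

```python
# Illustrative sketch: running an open-weights model locally.
# Assumes `pip install transformers torch`; the model ID is an example,
# and gated Meta models require license acceptance on Hugging Face first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # example open-weights model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "In one sentence, what does AlphaFold 3 predict?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A few lines like these, plus a download, put a capable model on private hardware anywhere in the world, with no national gatekeeper in the loop.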
Going forward, not so much. Anthropic’s Amodei, whose company is behind one of the most advanced language models, Claude 3.5 Sonnet, argued in October 2024 that “if we want AI to favor democracy and individual rights, we are going to have to fight for that outcome” and that “it seems very important that democracies have the upper hand on the world stage when powerful AI is created. AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries.”
Not excited by that prospect? We have no choice, according to Sam Altman. In July, he said that “the urgent question of our time” is, “Who will control the future of AI?” He argued that it had to be America: “The United States currently has a lead in AI development, but continued leadership is far from guaranteed. Authoritarian governments the world over are willing to spend enormous amounts of money to catch up and ultimately overtake us.”
Indeed, “If we want a more democratic world, history tells us our only choice is to develop an AI strategy that will help create it, and that the nations and technologists who have a lead have a responsibility to make that choice — now.”
Altman proposed some prosaic policies: investment in cybersecurity and infrastructure, as well as clear rules for international investment and exports. But the underlying argument is that “democratic AI” must be protected, subsidized, and regulated in order to stay ahead of “authoritarian AI.”
This vision of conflict may be having practical effects. The New York Times recently reported that Meta has now allowed its models to be used by the US military “in a shift from its policy that prohibited the use of its technology for such efforts.” OpenAI quietly adjusted its policy in a similar direction in January. Microsoft, Amazon, and even Anthropic are now working with US defense and intelligence agencies.
Demis Hassabis, a Brit, has been more reluctant to frame AI progress in national security terms. But the White House has not.
A memo recently issued by the Biden administration stated boldly in its title the goal of “Harnessing Artificial Intelligence to Fulfill National Security Objectives.” (Who expects the forthcoming Trump administration to be less America-First?)
According to the White House, the race with China, among others, is on: “Although the United States has benefited from a head start in AI, competitors are working hard to catch up … and may soon devote resources to research and development that United States AI developers cannot match without appropriately supportive Government policies and action. It is therefore the policy of the United States Government to enhance innovation … by bolstering key drivers of AI progress, such as technical talent and computational power.”
Needless to say, more primitive “AI” has already been used for years on battlefields, from semi-autonomous drone swarms in the Russia-Ukraine war to the IDF’s use of “an AI targeting system with little human oversight and a permissive policy for casualties,” according to +972, a magazine founded in Tel Aviv.
Technology has always been co-opted for war, but truly intelligent AI, let alone a superintelligence, is a different beast entirely — one we would be wise not to unleash on the battlefield.
Is it naïve to take a moment to picture a world in which cooperating nations gather their best talent and proportionally pool their resources to incrementally develop and understand powerful AI? Where AI is not used to advance a worldview, or win at war? Do we not owe it to future generations, and ourselves, to at least attempt a CERN for AI?
Even if you disagree, the steady “nationalization” of AI is a fast-developing story that deserves more attention.