“We’ll always have humans involved in the decision to employ force.”
Kill Chain Engage
The Pentagon is bullish on recent AI advances, saying they’re helping kill people faster than ever before.
In an interview with TechCrunch, the Pentagon’s chief digital and AI officer, Radha Plumb, admitted in a mask-off moment that the technology is helping to expedite the way the military kills.
“We obviously are increasing the ways in which we can speed up the execution of kill chain,” Plumb told the website, “so that our commanders can respond in the right time to protect our forces.”
According to a Mitchell Institute white paper from 2023, “kill chain” is military-speak for “the process militaries use to attack targets in the battlespace.”
“The kill chain can be broken down into specific steps — find, fix, track, target, engage, and assess — that enable planners to build and task forces for combat operations,” the paper explains.
Military Precision
In short, the Pentagon’s top AI officer bragged off-handedly about using AI to expedite the process of ending lives. Though she didn’t get into the specifics, Plumb did admit that AI is primarily used in the planning and strategizing phases of military kill chains.
“Playing through different scenarios is something that generative AI can be helpful with,” Plumb, who previously worked at Facebook and Google, told TechCrunch. “It allows you to take advantage of the full range of tools our commanders have available, but also think creatively about different response options and potential trade-offs in an environment where there’s a potential threat, or series of threats, that need to be prosecuted.”
That’s a wild admission: the Pentagon is not only using AI to dream up scenarios that would require lethal force, but also boasting about it on the record. There are few more dystopian uses of the technology, and here one of the military’s top officials is laying it out for the average reader.
Plumb said the Pentagon doesn’t buy or operate fully autonomous weaponry, meaning that for now there are still humans involved in any lethal decision-making.
“As a matter of both reliability and ethics, we’ll always have humans involved in the decision to employ force,” she insisted, “and that includes for our weapon systems.”
We don’t know about you, but that’s not very reassuring — and with AI lover Donald Trump back in the Oval Office, even that modicum of oversight could soon be under attack.
More on military AI: OpenAI Strikes Deal With Military Contractor to Provide AI for Attack Drones