The AI Startup Anthropic, Which Is Always Talking About How Ethical It Is, Just Partnered With Palantir

So much for putting safety first.

Safety Third

Anthropic, the AI company that touts itself as the safety-prioritizing alternative to other AI firms like OpenAI — from which it’s poached many executives — has partnered with shadowy defense contractor Palantir.

The AI company is also teaming up with Amazon Web Services to bring its AI chatbot Claude to US intelligence and defense agencies, an alliance that feels at odds with Anthropic’s claim of putting “safety at the frontier.”

According to a press release, the partnership supports the US military-industrial complex by “processing vast amounts of complex data rapidly, elevating data-driven insights, identifying patterns and trends more effectively, streamlining document review and preparation, and helping US officials to make more informed decisions in time-sensitive situations.”

The situation is especially peculiar considering that AI chatbots have long had a reputation for leaking sensitive information and “hallucinating” facts.

“Palantir is proud to be the first industry partner to bring Claude models to classified environments,” said Palantir CTO Shyam Sankar in a statement.

“This will dramatically improve intelligence analysis and enable officials in their decision-making processes, streamline resource-intensive tasks and boost operational efficiency across departments,” Anthropic head of sales Kate Earle Jensen added.

All Access

Anthropic does technically allow its AI tools to be used for “identifying covert influence or sabotage campaigns” or “providing warning in advance of potential military activities,” according to its recently expanded terms of service.

Since June, the terms of service conveniently carve out contractual exceptions for military and intelligence use, as TechCrunch points out.

The latest partnership gives Claude access to Palantir’s Impact Level 6 (IL6) environment, a Defense Department accreditation for “secret” data that sits one step below “top secret,” per TechCrunch. Systems accredited at IL6 can hold data critical to national security.

In other words, Anthropic and Palantir may not have handed the AI chatbot the nuclear codes — but it will now have access to some spicy intel.

It also lands Anthropic in ethically murky company. Case in point: earlier this year, Palantir scored a $480 million contract from the US Army to build out an AI-powered target identification system called the Maven Smart System. The overarching Project Maven has long proven controversial in the tech sector.

Exactly how a hallucination-prone AI chatbot fits into all of this remains to be seen. Is Anthropic simply following the money as it reportedly seeks to raise funds at a $40 billion valuation?

It’s a disconcerting partnership that underscores the AI industry’s growing ties to the US military-industrial complex, a worrying trend that should set off all kinds of alarm bells given the tech’s many inherent flaws, and all the more so when lives could be at stake.

More on Anthropic: Anthropic Now Lets Claude Take Control of Your Entire Computer


