Cool but dangerous: New Claude AI model can control your computer

Anthropic has rolled out an upgraded Claude 3.5 Sonnet AI model with a public-beta feature that can operate a computer simply by looking at what’s on the screen.

The feature, called “computer use,” is available through Anthropic’s API and lets developers direct Claude to work on a computer the way a human would.

It is the first major AI model to take control of a computer to perform meaningful tasks. As interesting as that is, it raises a big question: should we trust AI to take over our computers?

Anthropic’s new AI tool can control your computer on its own 

Anthropic has developed an advanced version of its AI model, Claude 3.5 Sonnet, designed to automate desktop tasks through a new “computer use” API.

This API allows the AI to simulate human actions like keystrokes, mouse movements, and clicks, enabling it to interact with any desktop software. 
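
For developers, computer use shows up as a tool in Anthropic’s Messages API. Below is a minimal sketch of what a request looked like in the public beta; the model and tool identifiers (claude-3-5-sonnet-20241022, computer_20241022, and the computer-use-2024-10-22 beta flag) come from Anthropic’s launch documentation, while the prompt and screen size are illustrative, and beta details may change.

```python
# Minimal sketch: requesting computer use from Claude 3.5 Sonnet.
# Requires the anthropic Python SDK and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    # Tell Claude the dimensions of the screen it will "see" (illustrative values).
    tools=[{
        "type": "computer_20241022",
        "name": "computer",
        "display_width_px": 1280,
        "display_height_px": 800,
    }],
    messages=[{"role": "user", "content": "Open a browser and search for cat pictures."}],
    betas=["computer-use-2024-10-22"],  # opt in to the computer-use beta
)

# Claude does not act directly; it replies with tool calls (clicks, keystrokes,
# screenshot requests) that your own code must carry out.
print(response.content)
```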

Anthropic trained Claude to interpret screenshots, track cursor movements, and perform tasks based on your prompts. The company views its approach as an “action-execution layer” that enables AI to carry out specific desktop-level commands under human supervision.
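
In practice, that layer is a loop the developer writes themselves: Claude replies with a structured action, your code performs the click or keystroke, captures a fresh screenshot, and sends it back so the model can see the result. Here is a heavily simplified sketch of such a loop. The action names and tool_result format follow the documented beta, but the pyautogui calls, the tiny action set, and the lack of error handling are our own illustrative shortcuts, not Anthropic’s reference implementation.

```python
# Simplified action-execution loop: Claude decides, this code acts.
# pyautogui is one possible way to simulate input; any automation library works.
import base64
import io

import anthropic
import pyautogui

client = anthropic.Anthropic()
tools = [{
    "type": "computer_20241022",
    "name": "computer",
    "display_width_px": 1280,   # illustrative resolution
    "display_height_px": 800,
}]

def run_action(action: dict) -> list:
    """Execute one action Claude requested and return tool_result content."""
    kind = action["action"]
    if kind == "screenshot":
        buf = io.BytesIO()
        pyautogui.screenshot().save(buf, format="PNG")
        return [{"type": "image", "source": {
            "type": "base64",
            "media_type": "image/png",
            "data": base64.b64encode(buf.getvalue()).decode(),
        }}]
    if kind == "left_click":
        pyautogui.click(*action["coordinate"])   # [x, y] pixel coordinates
    elif kind == "type":
        pyautogui.write(action["text"])          # literal keystrokes
    return [{"type": "text", "text": "done"}]

messages = [{"role": "user", "content": "Open a browser and search for cat pictures."}]
while True:
    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        tools=tools,
        betas=["computer-use-2024-10-22"],
        messages=messages,
    )
    messages.append({"role": "assistant", "content": response.content})
    if response.stop_reason != "tool_use":
        break  # Claude believes the task is finished
    for block in response.content:
        if block.type == "tool_use":
            messages.append({"role": "user", "content": [{
                "type": "tool_result",
                "tool_use_id": block.id,
                "content": run_action(block.input),
            }]})
```

The detail that matters for the safety discussion below is that every real-world action passes through code the developer controls, which is where guardrails can be enforced.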

While similar automation tools exist, Anthropic claims that Claude 3.5 Sonnet is more robust, excelling in coding tasks and self-correcting when encountering issues. 

However, it still has limitations, particularly with actions like scrolling, zooming, and completing multi-step processes. Despite these challenges, companies like Replit and Canva are already exploring its potential applications.

Because the technology is not yet flawless, developers are advised to begin testing with simpler, low-risk tasks.

But is it safe? 

Anthropic acknowledges the potential risks of its new 3.5 Sonnet model but argues that the benefits of real-world use outweigh those concerns. The company has implemented safety measures, such as not training the model on user data and not giving it access to the web during training.

Additionally, Anthropic developed classifiers to steer the model away from high-risk activities like posting on social media, creating accounts, or interacting with government websites, aiming to prevent harmful misuse.

What are your thoughts on this new Claude AI model? Would you trust AI to perform actions on your computer? Tell us what you think in the comments below, and follow us on Twitter and Facebook for more.




