Claude AI can now control your computer


Big day for Anthropic, which today unveiled Claude 3.5 Haiku, a direct response to GPT-4o mini and Gemini 1.5 Flash. According to Anthropic's benchmarks it outperforms both models, even though, for the moment, it does not support images and remains a primarily text-based model. Anthropic has also released a major update to Claude 3.5 Sonnet, with benchmark results that rank it as the best model available on the market today.


The world of the future, today

Alongside these releases, Anthropic unveiled a new API capability in public beta called "Computer Use". With it, a user can authorize Claude to take direct control of their computer and perform various actions on their behalf: Claude can look at the screen, move the cursor, click, and type text. The API is being released in beta today so that Anthropic can gather as much feedback as possible and improve the feature over time.

Claude AI can control your computer

© Anthropic
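To make this more concrete, here is a minimal sketch of what a Computer Use request looks like with Anthropic's Python SDK. The model name, beta flag, and tool type below follow the public beta documentation at launch and may have changed since, so treat them as assumptions rather than a definitive integration.

```python
# Minimal sketch of a Computer Use request with the Anthropic Python SDK.
# Assumption: the beta flag and tool type below match the October 2024 public beta.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    betas=["computer-use-2024-10-22"],
    tools=[
        {
            "type": "computer_20241022",  # the virtual "computer" tool
            "name": "computer",
            "display_width_px": 1280,
            "display_height_px": 800,
        }
    ],
    messages=[
        {
            "role": "user",
            "content": "Open a web browser and look up tomorrow's weather.",
        }
    ],
)

# Claude answers with tool_use blocks describing screen actions
# (screenshot, mouse_move, left_click, type, ...) for the caller to execute.
for block in response.content:
    print(block.type, getattr(block, "input", None))
```

Note that the model never touches the machine itself: it only returns descriptions of actions, and the integrating application decides whether and how to carry them out.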

To present this new function, Anthropic writes this: “with Computer Use we are trying something fundamentally new. Instead of creating specific tools to help Claude complete individual tasks, we teach him general IT skills, allowing him to use a wide range of standard tools and software originally designed for people“. Note that this API can be used by professionals as well: “developers can integrate this API to allow Claude to translate instructions (for example, “use data from my computer and available online to fill out this form”) into computer commands (for example, “check a spreadsheet”); “moves the cursor to open a web browser”; “accesses relevant web pages”; “fills out a form with data from these pages”).“.
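In practice, the developer is responsible for the local half of that loop: Claude only describes actions (take a screenshot, click here, type this), and the integrating code executes them and sends the results back. The sketch below illustrates that caller-side step; the execute_action() helper is hypothetical, and pyautogui is just one assumed backend for mouse, keyboard, and screenshot control.

```python
# Hedged sketch of the caller-side action loop: Claude's tool_use actions are
# executed locally and their results (screenshots, confirmations) are returned.
# execute_action() is a hypothetical helper; pyautogui is an assumed backend.
import base64
import io

import pyautogui  # pip install pyautogui


def execute_action(action: dict) -> dict:
    """Run one Claude-issued screen action locally and return a tool result block."""
    kind = action.get("action")

    if kind == "screenshot":
        # Capture the screen and return it as a base64-encoded PNG image block.
        buf = io.BytesIO()
        pyautogui.screenshot().save(buf, format="PNG")
        return {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/png",
                "data": base64.b64encode(buf.getvalue()).decode(),
            },
        }

    if kind == "mouse_move":
        pyautogui.moveTo(*action["coordinate"])
    elif kind == "left_click":
        coord = action.get("coordinate")
        if coord:
            pyautogui.moveTo(*coord)
        pyautogui.click()
    elif kind == "type":
        pyautogui.write(action["text"])

    return {"type": "text", "text": "ok"}
```

This separation is also where the security discussion below comes in: whatever Claude asks for, it is the local code that ultimately moves the mouse and presses the keys.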

What about security?

Such a feature naturally raises confidentiality and security questions, since it gives AI models unprecedented freedom: direct control of a user's computer. Its greatest vulnerability is "prompt injection", a cyberattack that slips malicious instructions into an AI's input. With this method, an attacker could remotely control a user's machine and exfiltrate all sorts of compromising information. Anthropic says it is aware of the risk and is working on various countermeasures against this type of attack. For now, however, Computer Use remains too rudimentary to be considered a real threat.
