The Copilot AI Microsoft Built Into Windows Makes It Incredibly Hackable, Research Shows

Machine Learning in Auto Claims Processing


“When you give AI access to data, that data is now an attack surface for prompt injection.”

Total Snitch

A security researcher has demonstrated that Microsoft’s Copilot AI can easily be manipulated into revealing an organization’s sensitive data, including emails and bank transactions. On top of that, Wired reports, it can also be weaponized into a powerful phishing machine that requires little of the effort usually needed to carry out these kinds of attacks.

“I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf,” Michael Bargury, the cofounder and CTO of security company Zenity, told Wired. “A hacker would spend days crafting the right email to get you to click on it, but they can generate hundreds of these emails in a few minutes.”

Bargury presented these findings at the Black Hat security conference in Las Vegas, joining other accounts of the liabilities posed by AI chatbots, including ChatGPT, that are tapped into datasets containing sensitive information that can be leaked.

Impersonation Machine

Without having access to an organization account, one video shows, Bargury was able to bait the chatbot into changing the recipient of a bank transfer simply by sending a malicious email that the targeted employee doesn’t even have to open.

Another video shows the damage a hacker could do with Copilot if they did have a hacked employee account. Simply by asking the chatbot straightforward questions, Bargury was able to get it to divulge sensitive data that he could use to build a compelling phishing attack that impersonates the employee.

First, Bargury gets the email of a colleague named Jane, learns what the last conversation with Jane was, and gets the chatbot to spill the emails of people CC’d in that conversation.

Bargury then instructs the bot to compose an email written in the style of the hacked employee to send to Jane, and gets the bot to pull the exact subject line of their last email with her.

And in just a matter of minutes, he’s created a convincing email that could deliver a malicious attachment to anyone in the network — all done with Copilot’s eager compliance.

Data Dilemma

Microsoft’s Copilot AI, and specifically its Copilot Studio, allows business organizations to tailor chatbots to their specific needs. To do that, the AI needs access to company data — which is where the vulnerabilities emerge.

For one, many of these chatbots are discoverable online by default, which makes them sitting ducks to hackers who can target them with malicious prompts. “We scanned the internet and found tens of thousands of these bots,” Bargury told The Register.

A particularly clever way of a bad actor can skirt Copilot’s guardrails is through an indirect prompt injection: in a nutshell, you can get a chatbot to do prohibited things by poisoning it with malicious data from an external source, like by asking it to visit a website that contains a prompt.

“There’s a fundamental issue here. When you give AI access to data, that data is now an attack surface for prompt injection,” Bargury told The Register. “It’s kind of funny in a way — if you have a bot that’s useful, then it’s vulnerable. If it’s not vulnerable, it’s not useful.”

More on AI: Google Warns Employees About Using AI, While Promoting Its Own AI



Source link
lol

By stp2y

Leave a Reply

Your email address will not be published. Required fields are marked *

No widgets found. Go to Widget page and add the widget in Offcanvas Sidebar Widget Area.