Single Click Copilot Attack Stealing Data
In early January 2026, cybersecurity researchers revealed a serious vulnerability in Microsoft Copilot Personal. This consumer-focused AI assistant is integrated into Windows, the Edge browser, and other applications. The vulnerability, called Reprompt, let attackers bypass built-in safety protections.
Attackers could quietly hijack a user's active Copilot session, enabling ongoing theft of sensitive personal data with only a single click. This exploit raised new concerns about the safety of widely deployed AI assistants and highlighted the evolving threat landscape around prompt-based systems.

Varonis Threat Labs, Varonis's security research arm, first uncovered Reprompt in mid-2025. This occurred during an investigation into prompt handling in AI assistants. The attack targets how Copilot processes URL-encoded queries through the "q parameter." This is a standard mechanism that automatically populates the Copilot conversation input field when users click links. By crafting URLs with malicious instructions in this parameter, attackers could make Copilot execute arbitrary commands on behalf of the user.
In its basic form, the vulnerability does not rely on malware or browser plugins. It also does not use social engineering, apart from convincing a user to click an innocuous link. After the link was clicked, Copilot would process the instructions embedded in the URL and begin responding. This would happen even if the Copilot chat window was later closed.
Reprompt is especially dangerous because of its stealth. Unlike typical phishing techniques that require many user actions or show prompts, this exploit worked quietly and invisibly after just one click. It could access sensitive personal information, such as recent document activity, chat histories, and location. Copilot could reason about or had learned this data in previous interactions.
Microsoft confirmed the vulnerability and released a patch as part of its January 2026 Patch Tuesday updates. This patch closed Reprompt against current attack techniques. Still, the incident underscores risks in AI assistants tied to personal accounts and data access.
The Reprompt attack flow used a three-stage methodology to subvert Copilot's internal protections while maintaining continuous control over the assistant's responses.
The stages are briefly examined below:
- Parameter-to-Prompt (P2P) Injection. The attack begins by exploiting the Copilot URL structure, specifically the q parameter, which is intended to pass user queries via a link. Researchers found that if an attacker controlled that parameter, Copilot would treat its contents as input and execute the embedded instructions automatically when the link was opened.
- Double-Request Technique. Copilot's internal safeguards against data exfiltration seem to apply mainly to the first request in a sequence. Reprompt exploited this by instructing Copilot to perform the same action twice within the same flow. While Copilot's protections might block a direct attempt to fetch sensitive data the first time, the second attempt could work. Protections did not persist across multiple requests. In demonstrations, researchers extracted a secret code string hidden in a URL by having Copilot repeat the command. This shows how Reprompt circumvented its guardrails.
- Chain-Request Technique. After the initial compromise, attackers maintained an ongoing exchange between Copilot and an attacker-controlled server. Every Copilot response could inform the next instruction from that server, enabling a persistent data-exfiltration chain. The crucial point: the real attack logic was not in the initial malicious URL. It was hidden in follow-up requests. In a proof-of-concept, researchers demonstrated that this method could steal personal details, such as user file summaries and location data. Since only the first URL carried any visible malicious content, standard security tools are unlikely to catch the entire data exfiltration chain by inspecting the initial phishing link alone.
Why Copilot Personal Was at Risk
Varonis clarified that Reprompt affected Copilot Personal, Microsoft's consumer edition AI assistant. Copilot Personal was the only version at risk. It did not apply to Microsoft 365 Copilot, which targets enterprise customers. Microsoft 365 Copilot uses advanced security features, like Purview auditing, tenant-level DLP, and admin-enforced restrictions. Enterprise versions offer more protection than Copilot Personal, which explains why the vulnerability affected only individual user accounts.
The core issue comes from how Copilot's interface accepts pre-filled prompts via URL parameters. It treats external input as trusted by default. This trust model works under normal circumstances. However, it becomes a liability when attackers embed malicious intent in structured data that Copilot cannot distinguish from user queries. Embedded prompts are processed just like typed inputs, allowing them to execute fully within the victim's authenticated session.
The discovery of Reprompt shows deeper systemic challenges in securing AI assistants. Unlike traditional applications, AI tools interpret natural language and execute complex sequences of reasoning and data access. They do this based on that interpretation. This flexibility allows attackers to craft deceptive instructions that blend technical commands with conversational prompts, blurring the line between legitimate use and exploitation.
Prompt injection attacks like Reprompt and previously documented exploits, such as EchoLeak, demonstrate that prompt handling itself can be a vector for serious security breaches. In such cases, the vulnerability does not necessarily require software bugs in the traditional sense; rather, it exploits the logical processing of untrusted input.
Defenses must go beyond simple heuristics and use contextual understanding of prompt origins, intent validation, and strong execution sandboxing. Security researchers and vendors agree: prompt injection is a growing risk as AI spreads. Effective mitigation may need new architectures for AI assistants, with strict input validation, provenance tracking, and persistent safety controls. Controls should apply across the entire interaction chain, not just at the beginning.
Microsoft's patch fixed Reprompt as of January 2026. However, the mechanisms could resurface in new forms across other AI platforms if ignored. This case warns organizations and users: while AI assistants increase productivity, they also create attack surfaces that current security models cannot yet handle.
Share:
Karolis Liucveikis
Experienced software engineer, passionate about behavioral analysis of malicious apps
Author and general operator of PCrisk's News and Removal Guides section. Co-researcher working alongside Tomas to discover the latest threats and global trends in the cyber security world. Karolis has experience of over 8 years working in this branch. He attended Kaunas University of Technology and graduated with a degree in Software Development in 2017. Extremely passionate about technical aspects and behavior of various malicious applications.
PCrisk security portal is brought by a company RCS LT.
Joined forces of security researchers help educate computer users about the latest online security threats. More information about the company RCS LT.
Our malware removal guides are free. However, if you want to support us you can send us a donation.
DonatePCrisk security portal is brought by a company RCS LT.
Joined forces of security researchers help educate computer users about the latest online security threats. More information about the company RCS LT.
Our malware removal guides are free. However, if you want to support us you can send us a donation.
Donate
▼ Show Discussion