Security researchers at LayerX have exposed a large-scale campaign that weaponises the AI boom. The operation, named AiFrame, spread dozens of fraudulent browser extensions disguised as helpful assistants.
More than 260,000 Google Chrome users were affected. Once installed, the tools quietly converted browsers into remote surveillance points controlled by attacker infrastructure.
What makes AiFrame different is its architecture. Instead of embedding obvious malware in the package itself, the extensions pulled their interface from remote servers. To victims, the window looked like a normal AI sidebar.
Because the logic lived outside the store, criminals could switch behaviour instantly. A benign translator could become a credential thief overnight without changing the listed software.
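The remote-UI pattern is simple to illustrate. The excerpt below is a hypothetical, minimal manifest sketch (the extension name is invented, not recovered from the campaign): the store listing ships almost no logic of its own, just broad permissions and a shell page.

```json
{
  "manifest_version": 3,
  "name": "Handy AI Translator",
  "version": "1.0",
  "permissions": ["tabs", "storage", "scripting"],
  "host_permissions": ["<all_urls>"],
  "action": { "default_popup": "shell.html" }
}
```

If `shell.html` contains little more than an iframe pointing at the operator's domain, everything the sidebar actually does is served remotely and can change at any moment, while the code Google reviewed in the store never changes.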
The plug-ins effectively acted as privileged bridges, relaying information between sensitive browser functions and external command servers and sidestepping many built-in safeguards.
Email became a prime target. Several variants watched sessions on Gmail, scraping threads, drafts and replies directly from page content.
Developers faced another risk. When API keys for services such as OpenAI or Anthropic were pasted into the fake interface, the data could be intercepted before reaching legitimate platforms.
Some editions even tapped speech features, opening the possibility of capturing voice input or nearby conversations, widening exposure beyond typed data.
Alarmingly, a few extensions carried “Featured” labels in the marketplace. Investigators believe the operators behaved benignly during the review process, then activated hidden remote capabilities afterwards.
Defence now requires stricter discipline. Enterprises should treat extensions as non-human identities, restrict permissions, move toward allow-listing, monitor unusual page manipulation and block known malicious domains.
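Allow-listing, in particular, can be enforced centrally. The fragment below is a hedged sketch of Chrome enterprise policy using the real policy names `ExtensionInstallBlocklist` and `ExtensionInstallAllowlist`; the 32-character extension ID is a placeholder, and how the policy is deployed (GPO, MDM, or a managed JSON file) depends on your environment.

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "abcdefghijklmnopabcdefghijklmnop"
  ]
}
```

With the wildcard blocklist in place, only extensions explicitly added to the allow-list can be installed, which closes the door on look-alike AI assistants regardless of how polished their store listings appear.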
AiFrame signals a pivot from password theft to context theft. Remove suspicious AI add-ons, clear browsing data and rotate credentials quickly. In the AI era, the browser itself has become the frontline.