A developer finds a new AI-powered browser extension that promises to streamline their workflow. They click install, sign in with their corporate Google Workspace account, and instinctively hit the Allow button on a broad set of OAuth permission requests. In the moment, it feels like a minor trade-off for a productivity boost. In reality, that single click creates a high-speed corridor directly into the heart of the company's production environment.
The Anatomy of a Multi-Stage Breach
Vercel recently confirmed that unauthorized actors gained access to its internal systems, triggering a comprehensive investigation involving Mandiant and various law enforcement agencies. To ensure the integrity of its ecosystem, Vercel coordinated with GitHub, Microsoft, npm, and the supply-chain security platform Socket. This joint effort confirmed that the company's primary published packages, including the Next.js framework, the Turbopack build tool, and the AI SDK, had not been tampered with. However, the path the attackers took to reach the internal environment reveals a cascading failure of trust across the AI toolchain.
The breach originated not at Vercel, but at Context.ai, an AI analytics provider. According to analysis from OX Security, a Vercel employee had installed a Context.ai browser extension and granted it extensive OAuth permissions via their Google Workspace account, establishing a persistent link between the two organizations. The chain was set in motion in February 2026, when an employee at Context.ai downloaded a Roblox auto-farming script and a game exploit executor. This download delivered Lumma Stealer, a potent information-stealing malware, onto the Context.ai employee's machine.
The attackers used Lumma Stealer to harvest a treasure trove of credentials, including Google Workspace login details, Supabase keys, Datadog tokens, and Authkit credentials. With these keys, the attackers pivoted into Context.ai's AWS environment. Once inside, they targeted the OAuth tokens for the AI Office Suite, a consumer-facing product. One of these stolen tokens provided the exact key needed to enter the Google Workspace of the Vercel employee. By masquerading as a legitimate user, the attackers accessed Vercel's operational dashboards and APIs. They specifically targeted environment variables that had been marked as non-sensitive, reading them in plain text to steal customer credentials and escalate their privileges within the system.
The Trust Gap in the AI Productivity Stack
This incident exposes a critical blind spot in the modern security stack. For years, enterprise defense has focused on the perimeter and the endpoint. Endpoint Detection and Response (EDR) tools are designed to kill malware on a laptop, while Cloud Access Security Brokers (CASB) monitor for anomalous cloud behavior. Yet, neither of these layers is equipped to stop an attacker who is using a perfectly valid, user-authorized OAuth token. The attackers did not need to exploit a software vulnerability or brute-force a password; they simply walked through a door that a Vercel employee had voluntarily unlocked for a third-party AI tool.
Most organizations treat OAuth approvals as a one-time administrative hurdle rather than a dynamic security risk. Because these tokens are designed to bypass the need for repeated logins, they often operate outside the visibility of standard approval workflows. When a third-party vendor like Context.ai is compromised, every company that granted that vendor broad permissions becomes a secondary target. The attack surface has shifted from the server to the permission set, turning productivity tools into dormant liabilities.
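The shift from server to permission set lends itself to a concrete check. The sketch below flags third-party OAuth grants whose scopes exceed what an organization has decided to tolerate. The allowlist, the grant record, and the policy itself are illustrative assumptions; only the scope URI naming convention follows Google's real OAuth scopes.

```python
# Illustrative sketch: flag third-party OAuth grants whose scopes exceed
# a hypothetical organizational allowlist. The policy is an assumption;
# the scope URIs follow Google's real naming convention.

ALLOWED_SCOPES = {
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/userinfo.profile",
}

def overbroad_scopes(granted: set[str]) -> set[str]:
    """Return the granted scopes that fall outside the allowlist."""
    return granted - ALLOWED_SCOPES

# Example: an extension that also asked for full Drive and Gmail access.
grant = {
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/drive",
    "https://mail.google.com/",
}

for scope in sorted(overbroad_scopes(grant)):
    print("over-broad grant:", scope)
```

In practice, a Google Workspace admin can enumerate real grants per user with the Admin SDK's token-listing endpoint and feed them through a check like this; the point is that the grant inventory, not the endpoint, is the artifact worth monitoring continuously.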
The severity of this shift is highlighted by the staggering dwell time of the intrusion. While Context.ai detected the AWS breach in March, Vercel only disclosed the incident a month later. More alarmingly, Trend Micro suggests the initial breach may have occurred as early as June 2024. If this timeline holds, the attackers operated undetected within Vercel's internal network for approximately 22 months. This gap exists because the attackers were not triggering alarms; they were using legitimate credentials to perform actions that looked like standard administrative work.
In response to the breach, Vercel has implemented a strategic shift in how it handles configuration. The company now sets the default status of all new environment variables to sensitive. When a variable is marked as sensitive, it is stored in a way that makes it unreadable to those with basic API or dashboard access, effectively neutralizing the primary method the attackers used to escalate their privileges. This move acknowledges that technical patches are insufficient if the underlying governance of trust is broken.
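Vercel's actual storage mechanism is not public here, but the policy change can be modeled conceptually: new variables default to sensitive, and a read through ordinary dashboard or API access returns a redacted placeholder rather than plaintext. All names and behavior in this sketch are illustrative, not Vercel's implementation.

```python
# Conceptual model of sensitive-by-default environment variables.
# Not Vercel's implementation; names and behavior are illustrative.

from dataclasses import dataclass, field

@dataclass
class EnvVar:
    name: str
    value: str
    sensitive: bool = True  # new variables default to sensitive

@dataclass
class EnvStore:
    _vars: dict = field(default_factory=dict)

    def set(self, name: str, value: str, sensitive: bool = True) -> None:
        self._vars[name] = EnvVar(name, value, sensitive)

    def read(self, name: str) -> str:
        """What basic dashboard/API access sees."""
        var = self._vars[name]
        return "[redacted]" if var.sensitive else var.value

store = EnvStore()
store.set("DATABASE_URL", "postgres://user:pass@host/db")  # sensitive by default
store.set("PUBLIC_FLAG", "true", sensitive=False)          # explicit opt-out

print(store.read("DATABASE_URL"))  # [redacted]
print(store.read("PUBLIC_FLAG"))   # true
```

The design point is the inversion of the default: under the old model an attacker with dashboard access could read anything not explicitly protected, whereas here exposure requires a deliberate opt-out per variable.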
The new corporate perimeter is no longer a firewall, but the permissions list of every AI extension installed by an employee.