The current era of generative AI is shifting from a period of passive consultation to one of active execution. For the past year, developers have focused on the prompt, treating the large language model as a sophisticated oracle that provides answers. But the conversation in the developer community has changed this quarter. The focus is no longer on who has the smartest model, but on who provides the most reliable environment for that model to actually operate tools, manage files, and execute code without crashing the system or leaking sensitive data. This is the battle for orchestration, and the latest data suggests a new player has just established a beachhead.
The Data of Displacement
In January, Anthropic held a 0% share of the enterprise agent orchestration market. It was a ghost town, a void in which the company provided the intelligence via Claude but left the operational plumbing to others. By February, that number had shifted to 5.7%. While a single-digit share might seem marginal, it represents a fundamental shift in how enterprises are deploying AI. According to February survey data from VB Pulse, the market remains dominated by the incumbents, but the dynamics are shifting. Microsoft Copilot Studio and Azure AI Studio together hold the top spot with 38.6% of the market, up from 35.7% in January. OpenAI follows in second place, with its Assistants and Responses APIs growing from 23.2% to 25.7%.
There is a stark contrast between the adoption of Anthropic's models and the adoption of its orchestration tools. While the orchestration share sits at 5.7%, the adoption rate for Claude models has skyrocketed: from 23.9% in January to 28.6% in February, then a surge to 56.2% by March. This means that while a majority of enterprises are using Claude for reasoning, only a small fraction (roughly four out of every 70 surveyed) have entrusted Anthropic with the actual orchestration of their workflows. The gap reveals a critical tension in the enterprise sector: companies are eager for the intelligence of the model, but cautious about the infrastructure used to run it.
This caution is rooted in a growing demand for governance. The data shows that security and permission management are the primary drivers for platform selection, cited by 39.3% of respondents in January and 37.1% in February. Even more telling is the rise in the demand for execution control, which jumped from 17.9% to 22.9%. Enterprises are no longer asking if an agent can perform a task; they are asking who has the power to stop that agent the moment it deviates from the intended path.
The Infrastructure Moat
To understand why a 5.7% share is a threat to the incumbents, one must distinguish between model replacement and infrastructure migration. In the current AI stack, models are treated as interchangeable components. A developer can route a specific workload to Claude for complex reasoning and then switch to an OpenAI model for a different task based on cost or latency. This multi-model strategy is now the industry standard because swapping a model is essentially a configuration change. It is a low-friction operation that prevents vendor lock-in.
Orchestration is entirely different. Moving an agent runtime is not a configuration change; it is a full-scale infrastructure migration. An orchestration layer manages the entire lifecycle of an agent, including tool permissions, authentication credentials, audit logs, persistent memory, and the sandbox environment where code is executed. Once a company builds its operational workflows, security boundaries, and compliance logging into a specific provider's orchestration layer, the cost of switching becomes prohibitively high. The model becomes a commodity, but the runtime becomes the moat.
Anthropic is attempting to build this moat with the public beta of Claude Managed Agents. Rather than just providing an API for text generation, Anthropic is introducing an architecture that separates the model into sessions, harnesses, and sandboxes. This approach allows the agent to maintain long-term context, execute code in a secure environment, and sustain complex workflows over time. By hosting the operational infrastructure directly, Anthropic is moving from being a provider of intelligence to a provider of the operating system for AI agents.
This strategy targets the exact pain point identified in the VB Pulse data. Enterprises are moving away from the DIY approach of assembling a custom agent stack from disparate open-source tools because they lack the necessary control planes. They want a system where permission boundaries are explicit, every action is traceable via an audit trail, and the execution can be terminated instantly. By integrating these controls into the managed agent service, Anthropic is leveraging its model performance to pull enterprises deeper into its ecosystem.
The industry has moved past the phase of searching for the smartest model and has entered the phase of searching for the safest cage.