An ML engineer watches as their AI agent, designed to process invoices on a specific corporate portal, suddenly wanders off to a social media site. In another instance, the agent hits a wall the moment it attempts to access an internal service, triggering a cascade of HTTPS certificate errors that freezes the entire workflow. For months, the solution seemed to be better prompting: developers spent hours refining instructions, pleading with the model to stay within specific domains or to ignore certain browser warnings. Yet the agents continued to ignore these soft boundaries, occasionally attempting to save sensitive credentials to the browser's password manager or failing opaquely when confronted with internal security proxies. This friction has become a defining challenge for teams moving AI agents from experimental sandboxes into production enterprise environments.
The Architecture of Browser Governance in Bedrock AgentCore
Amazon Bedrock AgentCore, the framework managing the execution environments and tools for AI agents, has introduced native support for Chrome Enterprise policies and custom Root Certificate Authority (CA) integration. This update transforms the browser from a passive tool into a governed environment by granting developers granular control over more than 450 browser settings. These configurations are implemented using the standard Chrome Enterprise JSON format, ensuring that teams already familiar with corporate IT browser management can transition their existing security postures directly to their AI agents. The full scope of these available configurations is detailed in the Chrome Enterprise policy list.
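As a concrete illustration, a minimal policy file in the Chrome Enterprise JSON format might pin an agent to a single portal and disable credential storage. The domain below is a placeholder, and the policy names (URLBlocklist, URLAllowlist, PasswordManagerEnabled, DownloadRestrictions, where the value 3 blocks all downloads) come from the standard Chrome Enterprise policy list:

```json
{
  "URLBlocklist": ["*"],
  "URLAllowlist": ["invoices.example.com"],
  "PasswordManagerEnabled": false,
  "DownloadRestrictions": 3,
  "AutofillAddressEnabled": false,
  "AutofillCreditCardEnabled": false
}
```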
This integration addresses three critical operational gaps. First, it establishes hard boundaries through URL allow-lists and block-lists. Unlike a prompt, which a model might ignore during a complex reasoning chain, a browser-level block is absolute. If an agent is tasked with invoice processing, the infrastructure can ensure it is physically incapable of reaching a search engine or a social media platform, regardless of the model's intent. Second, it mitigates data leakage and security risks by disabling high-risk browser features. Developers can now programmatically turn off the password manager, restrict file downloads, and disable autocomplete functions. This is essential for agents interacting with sensitive internal systems where an accidental data save could lead to a compliance violation. Third, it decouples policy management from agent development. By utilizing a Control Plane API, security teams can define the approved browser configuration independently. This allows the development team to focus on the agent's logic and reasoning without having to hard-code security constraints into the application layer.
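To see why such a block is absolute, it helps to model the decision the browser makes on every navigation attempt. The sketch below is a deliberately simplified stand-in for Chrome's actual URLAllowlist/URLBlocklist matcher, which supports richer patterns; the hostnames are hypothetical:

```python
from urllib.parse import urlparse

def is_allowed(url: str, allowlist: list[str], blocklist: list[str]) -> bool:
    """Simplified model of Chrome's URL policy semantics: allowlist
    entries act as exceptions to blocklist entries, so a blocklist of
    ["*"] plus a narrow allowlist yields default-deny navigation."""
    host = urlparse(url).hostname or ""

    def matches(pattern: str) -> bool:
        if pattern == "*":
            return True
        base = pattern.lstrip("*.")  # treat "*.example.com" like "example.com"
        return host == base or host.endswith("." + base)

    if any(matches(p) for p in allowlist):
        return True
    return not any(matches(p) for p in blocklist)

# Invoice portal is reachable; everything else is denied at the browser level.
print(is_allowed("https://invoices.example.com/login",
                 ["invoices.example.com"], ["*"]))  # True
print(is_allowed("https://social.example.net/feed",
                 ["invoices.example.com"], ["*"]))  # False
```

Unlike a prompt instruction, this check runs on every request regardless of what the model decides to do.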
From Prompt Engineering to Infrastructure Governance
The fundamental shift here is the migration of the agent's behavioral guardrails from the prompt to the infrastructure. Bedrock AgentCore implements this through a dual-layer policy enforcement system that mirrors the hierarchy of Chrome's own policy engine. The first layer consists of managed policies. These are JSON files stored in Amazon S3 and delivered via the Control Plane API. Once deployed, these policies are mapped to the `/etc/chromium/policies/managed/` directory within the browser environment. Managed policies are immutable at the session level, meaning they cannot be overridden by the agent or the session configuration, providing a mandatory security baseline.
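A minimal sketch of what that mapping looks like on disk can make the format concrete. Here the directory tree is recreated under a temporary path rather than the real `/etc/chromium` location, and the policy values are illustrative:

```python
import json
import pathlib
import tempfile

# Managed policy baseline: the service maps files like this into
# /etc/chromium/policies/managed/ inside the browser environment.
managed_policy = {
    "URLBlocklist": ["*"],
    "URLAllowlist": ["invoices.example.com"],  # hypothetical portal
    "PasswordManagerEnabled": False,
}

root = pathlib.Path(tempfile.mkdtemp())
managed_dir = root / "etc" / "chromium" / "policies" / "managed"
managed_dir.mkdir(parents=True)
(managed_dir / "policies.json").write_text(json.dumps(managed_policy, indent=2))

# Chromium reads every JSON file in this directory as mandatory policy.
loaded = json.loads((managed_dir / "policies.json").read_text())
print(loaded["PasswordManagerEnabled"])  # False
```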
The second layer is the recommended policy. These are provided via the Data Plane API when a browser session is initiated. Mapped to `/etc/chromium/policies/recommended/`, these settings act as default preferences. If a conflict arises between a managed policy and a recommended policy, the managed policy takes precedence, ensuring that corporate security mandates always override session-specific optimizations.
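The precedence rule itself is simple to state in code: start from the recommended defaults and let every managed key win. A minimal sketch, with illustrative policy keys:

```python
def effective_policy(managed: dict, recommended: dict) -> dict:
    """Managed policies are immutable at the session level, so any key
    present in `managed` overrides the same key in `recommended`."""
    merged = dict(recommended)  # session-level defaults first
    merged.update(managed)      # corporate mandates always win
    return merged

managed = {"PasswordManagerEnabled": False}
recommended = {
    "PasswordManagerEnabled": True,  # session asks; managed layer denies
    "HomepageLocation": "https://intranet.example.com",
}
print(effective_policy(managed, recommended))
```

A session can still tune anything the managed layer is silent about, such as the homepage above, but it can never re-enable a feature the security team has switched off.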
Connectivity to internal corporate networks is further streamlined through the integration of AWS Secrets Manager. Organizations can store their Root CA certificates within Secrets Manager and reference them during the creation of the AgentCore browser or the AgentCore Code Interpreter. The service automatically imports these certificates into the browser's trust store. This eliminates the dangerous practice of disabling SSL verification entirely to bypass certificate errors, allowing agents to connect securely to internal services or navigate through SSL-intercepting proxies that decrypt and inspect traffic for security auditing.
For teams looking to implement this, AWS has provided a sample repository containing a Jupyter notebook. This resource demonstrates the end-to-end provisioning process, including the setup of S3 buckets, AWS IAM execution roles, and the deployment of the AgentCore browser and code interpreter. The notebook specifically utilizes Playwright, a browser automation library, to verify that the policies are functioning correctly by attempting to load both allowed and blocked URLs.
Deploying these agents in a production environment requires strict adherence to the principle of least privilege. AWS recommends using temporary credentials issued via AWS IAM Identity Center or AWS STS rather than long-term access keys. Committing long-term keys to source control is strictly prohibited, as a leaked key grants unauthorized access to the control plane.
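In practice that usually means an AWS config profile backed by IAM Identity Center, so the only material on disk is a short-lived token cache rather than a static key pair. All values below are placeholders:

```ini
# ~/.aws/config -- SSO-backed profile; no long-term keys are written to disk
[profile agentcore-dev]
sso_start_url  = https://example.awsapps.com/start
sso_region     = us-east-1
sso_account_id = 111122223333
sso_role_name  = AgentCoreDeveloper
region         = us-east-1
```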
AI agent autonomy is no longer a matter of hoping the model follows instructions; it is now a matter of defining the physical boundaries of the environment in which the model operates.