Every morning, the same frustration hits: your mobile app can handle voice ordering, but the web experience stalls when users switch menus, and the conversation sometimes loses context on the next turn. This week, a new GitHub deployment reference shows a different approach—one that routes voice orders through the same workflow across mobile, web, and voice interfaces.

Section 1

The reference architecture is built on Amazon Bedrock AgentCore, with a deployment and operations setup designed to handle the ordering flow end to end. For the voice layer, it uses Amazon Nova 2 Sonic, described as a speech-to-speech foundation model available in Amazon Bedrock. The deployment provisions infrastructure covering authentication, order handling, and location-based recommendations.

A key design choice is separation of concerns. The reference splits the system into a frontend, an AI agent layer, and a backend layer so each part can be developed and scaled independently. Instead of bolting voice onto a separate pipeline, it aims to keep the ordering workflow consistent across touchpoints.

The document also calls out why this matters: earlier patterns often attached voice interfaces as a separate pipeline, which made it expensive to keep mobile/web order state synchronized. In contrast, the new setup changes the integration model so the AgentCore Gateway exposes backend endpoints as “tools” that the agent can call.

To avoid tight coupling between the agent and a specific backend framework, the reference places MCP (an open standard for connecting external data sources, tools, and workflows) as a standard communication layer between the agent and the backend. That means the agent can interact with backend capabilities through a consistent interface rather than bespoke glue code.
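The idea can be sketched in a few lines: backend capabilities sit behind one uniform tool interface, and the agent only ever talks to that interface. This is an illustrative Python sketch of the pattern, not the reference's actual MCP implementation; the `Tool` and `ToolRegistry` names and the `get_menu` handler are assumptions for the example.

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class Tool:
    """A backend capability exposed through a uniform interface."""
    name: str
    description: str
    handler: Callable[[dict], Any]


class ToolRegistry:
    """Maps tool names to handlers; the agent only sees this interface."""

    def __init__(self) -> None:
        self._tools: dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def list_tools(self) -> list[str]:
        return sorted(self._tools)

    def call(self, name: str, args: dict) -> Any:
        # Same dispatch path no matter which backend sits behind the tool.
        return self._tools[name].handler(args)


# Example backend capability (hypothetical, not from the reference):
def get_menu(args: dict) -> list[str]:
    return ["pizza", "burger"] if args.get("location") == "downtown" else []


registry = ToolRegistry()
registry.register(Tool("get_menu", "Return menu items for a location", get_menu))
```

Swapping the backend behind `get_menu` changes nothing on the agent side, which is the decoupling the standard layer is meant to buy.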

The practical outcome developers are meant to feel is straightforward: voice orders flow through the same order workflow across multiple touchpoints, and the system maintains order context while the conversation continues across turns.

Section 2

So what is actually different here, beyond “it supports voice”? The change is architectural: the reference treats voice ordering as an orchestration problem, not a UI feature.

In the older approach, voice often lived in its own pipeline, and the system had to reconcile state between channels—mobile, web, and voice—after the fact. That reconciliation is where costs and edge cases pile up, especially when a user changes menus or when the next conversational turn depends on what happened earlier.

In the AgentCore-based reference, the agent does not “talk to a separate voice backend.” Instead, AgentCore Gateway integrates backend endpoints into the agent’s tool surface. That single integration layer becomes the bridge that keeps the workflow coherent. When the agent calls a tool, it is calling into the same backend capabilities that the rest of the ordering system uses.
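A toy sketch makes the contrast concrete: instead of a separate voice pipeline with its own state, every channel entry point calls the same workflow function, so there is nothing to reconcile afterwards. All names here are hypothetical illustrations, not the reference's code.

```python
# One shared order store and one workflow, reached from every channel.
ORDERS: dict[str, list[str]] = {}


def add_to_order(customer_id: str, item: str) -> list[str]:
    """The single workflow all channels share."""
    ORDERS.setdefault(customer_id, []).append(item)
    return ORDERS[customer_id]


# Web/REST entry point (illustrative):
def handle_web_request(payload: dict) -> list[str]:
    return add_to_order(payload["customer_id"], payload["item"])


# Agent tool entry point (illustrative) -- what a gateway would expose:
def order_tool(args: dict) -> list[str]:
    return add_to_order(args["customer_id"], args["item"])
```

An item added over the web shows up on the very next voice turn, because both paths mutate the same order.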

The second difference is the explicit modular boundary. The reference breaks the solution into reusable components: frontend (AWS Amplify), agent gateway (AgentCore Gateway), runtime (AgentCore Runtime), and backend services (REST API, Lambda, DynamoDB, and location services). Because these modules are separated, the system can preserve conversational context while still letting teams iterate on UI and backend independently.

Finally, MCP is used as the standard communication layer between the agent and backend. That reduces the risk that the agent becomes locked to one integration style. In other words, the “omnichannel voice ordering” claim is backed by a tool-first gateway integration plus a standard interface layer, which together make it possible to keep the workflow consistent without rewriting everything for each channel.

Section 3

The reference describes the deployment as four sections, labeled Section A through Section D, each responsible for a distinct part of the infrastructure.

Section A covers the backend infrastructure. It provisions a restaurant sample architecture using infrastructure-as-code. It creates data stores for customers, orders, menus, carts, and locations. It also sets up a location-based service for address handling and mapping. Business logic is implemented with Lambda functions, and the backend includes an API layer for external access plus authentication and authorization services. Resources are deployed in dependency order.
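"Deployed in dependency order" is essentially a topological sort over the resource graph. A minimal sketch, assuming a hypothetical dependency graph (the resource names are illustrative, not the reference's actual stack names):

```python
from graphlib import TopologicalSorter

# Each resource maps to the set of resources it depends on (assumed names).
DEPENDENCIES: dict[str, set[str]] = {
    "api_layer": {"lambda_functions", "auth"},
    "lambda_functions": {"data_stores"},
    "auth": set(),
    "data_stores": set(),
    "location_service": set(),
}


def deployment_order(deps: dict[str, set[str]]) -> list[str]:
    """Return a deploy order in which every dependency comes first."""
    return list(TopologicalSorter(deps).static_order())
```

Infrastructure-as-code tools like CDK compute this ordering from resource references automatically; the sketch only shows what that ordering guarantees.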

Section B is the AgentCore Gateway. It provisions the required IAM service permissions, creates the AgentCore Gateway service, and configures API integration so backend endpoints are exposed as tools the agent can access.

Section C defines the AgentCore Runtime and its container setup. It provisions an Amazon ECR repository as the container store, provisions Amazon S3 for source upload, uses AWS CodeBuild for build automation, and includes the necessary IAM permissions. The AgentCore Runtime service is configured to use the WebSocket protocol.

Section D is the frontend deployment using AWS Amplify. It provisions Amplify hosting along with deployment configuration, and it generates frontend configuration from backend outputs. After completion, the web application becomes reachable via the Amplify URL.
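"Generates frontend configuration from backend outputs" typically means mapping stack outputs into a JSON file the web app reads at build time. A minimal sketch, with assumed output key names (`ApiEndpoint`, `UserPoolId`, `Region`) that stand in for whatever the backend stacks actually export:

```python
import json


def build_frontend_config(outputs: dict[str, str]) -> str:
    """Map backend stack outputs (assumed names) to frontend config JSON."""
    config = {
        "apiUrl": outputs["ApiEndpoint"],
        "userPoolId": outputs["UserPoolId"],
        "region": outputs.get("Region", "us-east-1"),
    }
    return json.dumps(config, indent=2)
```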

Section 4

The reference also includes a deployment flow that starts with cloning the GitHub repository and running a deployment script. The script takes two parameters and sends a temporary password for an initial Cognito test user to the supplied email address.

Before deployment, the script runs preflight checks. It verifies that Node.js, Python, AWS CLI, CDK, credentials, CDK bootstrap, and access to the Bedrock Nova 2 Sonic model are ready. If checks fail, it reports missing items and suggests automatic installation where possible.
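The tool-availability part of such a preflight check is easy to picture. This sketch only covers PATH lookups; the real script's checks for credentials, CDK bootstrap, and model access need live AWS calls and are omitted here:

```python
import shutil

# Tools the deployment script expects on PATH (per the reference).
REQUIRED_TOOLS = ["node", "python3", "aws", "cdk"]


def missing_tools(tools: list[str]) -> list[str]:
    """Return the subset of tools not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]
```

A script would call `missing_tools(REQUIRED_TOOLS)` and, if the result is non-empty, report each missing item before aborting.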

Once the preflight checks pass, the script executes five steps. Steps 1 through 3 are fully automated. Step 4, labeled Synthetic Data, is where customization happens. It asks the user to choose a central location, given as a city, a postal code, or an address, and which food categories to generate data for, with examples including pizza, burgers, coffee shop, sandwich, and tacos. The script also asks whether to reuse the customer's home address for the generated data, and it prompts for confirmation before writing the generated data into DynamoDB.

Step 5, Password Setup, optionally replaces the temporary Cognito password that arrives by email. If the user answers "yes," the script asks for the temporary password and then sets a new permanent password that must meet Cognito password policy requirements: at least 8 characters, including uppercase, lowercase, numbers, and symbols.
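The stated policy is simple to check client-side before submitting to Cognito. A sketch that mirrors only the requirements the reference lists (a configured Cognito pool's policy may differ):

```python
import re


def meets_policy(password: str) -> bool:
    """Check the stated policy: 8+ chars, upper, lower, digit, symbol."""
    return (
        len(password) >= 8
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )
```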

When the deployment finishes, it outputs the frontend URL, for example:

https://main.<app-id>.amplifyapp.com

From there, users can access the application.

Section 5

The reference explains how API Gateway, DynamoDB, and location services support the ordering workflow.

API Gateway creates a REST API that connects the frontend to backend services. It provides IAM-authenticated access to eight endpoints and integrates them with Lambda.

The backend supports the full ordering workflow using five DynamoDB tables.

Customers stores profile data such as name, email, phone, loyalty tier, and points, which the system uses for personalization and recommendations.

Orders stores order history and includes location data. To support location-based queries, it uses a Global Secondary Index to identify popular items by location.
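The GSI lets the backend query orders by location without scanning the whole table. An in-memory analog of what that query feeds, with field names assumed for illustration:

```python
from collections import Counter


def popular_items(orders: list[dict], location: str, top_n: int = 3) -> list[str]:
    """Rank items by order count at one location (assumed field names)."""
    counts = Counter(o["item"] for o in orders if o["location"] == location)
    return [item for item, _ in counts.most_common(top_n)]
```

Against DynamoDB, the location filter would be the GSI's partition key in a Query rather than a Python-side filter.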

Menu stores menu items whose prices and availability can vary by location, capturing the location-specific nature of ordering.

Carts stores temporary carts and uses a 24-hour TTL to automatically clean up abandoned sessions.

Locations stores restaurant data including coordinates, business hours, and tax rates, which the system uses for order calculation and recommendations.

DynamoDB on-demand capacity is enabled so the tables scale automatically based on traffic.

Location Services provides location-based functionality so customers can find pickup locations. The deployed resources include Place Index (Esri) for geocoding and address search, and Route Calculator (Esri) for route computation.

Ultimately, the reference encodes these integration points in code, turning voice ordering into an omnichannel experience in which the workflow stays consistent as users move between voice, mobile, and web.