Anthropic announced general availability of Claude Platform on AWS on May 11, 2026 — the native Claude API, including Managed Agents, skills, code execution, web search and fetch, batch processing, and the Console, all billed through an AWS account and authenticated through AWS IAM. It is not Bedrock. It is the direct Claude API with AWS plumbing wrapped around it.
That announcement turned a two-option decision into a three-option one. Until this week, enterprises running Claude on AWS were choosing between Bedrock (AWS-managed inference with a feature lag) and the direct Anthropic API (full feature parity, separate billing relationship). Claude Platform on AWS now sits in the middle: native Claude features with AWS billing and identity. The right answer for any given workload is no longer obvious.
What is Claude Platform on AWS?
Claude Platform on AWS is the native Anthropic Claude API made available through AWS billing and AWS IAM authentication. It ships every feature of the direct API — Managed Agents, skills, code execution, web search, batch, Console — with day-one parity to new releases. It is not Amazon Bedrock, which is AWS-managed inference with a delayed feature surface.
The Three Substrates, Side By Side
The cleanest way to think about the three options is to separate where the model runs from who controls the feature surface and where the bill lands. Each substrate makes a different trade across those three axes.
| Substrate | Feature Surface | Billing | Auth | Data Residency |
|---|---|---|---|---|
| Amazon Bedrock | AWS-curated subset, delayed | AWS account | AWS IAM | AWS region you pick |
| Claude Platform on AWS | Full native API, day-one parity | AWS account | AWS IAM | AWS commercial regions (global or US inference) |
| Direct Anthropic API | Full native API, day-one parity | Anthropic invoice | Anthropic API keys | Anthropic-managed |
The interesting cell is Bedrock's feature-surface entry. Bedrock is excellent infrastructure, but Anthropic ships features to its own API surface first, and features like Managed Agents, skills, and the new Console tooling either land on Bedrock months later or never get the same fidelity. That gap is what Claude Platform on AWS exists to close.
When To Choose Amazon Bedrock
When should an enterprise still pick Amazon Bedrock?
Pick Bedrock when the workload only needs base model inference, when VPC endpoints and PrivateLink are non-negotiable, when the team already runs multi-model pipelines that include non-Anthropic models inside Bedrock, or when AWS-managed guardrails are part of the security review. Bedrock is the right call for most inference-only enterprise workloads in 2026.
Bedrock is the substrate of choice when the workload is a classic inference call — system prompt, user message, response — and the enterprise has already standardized on Bedrock for other models like Llama or Mistral. Switching that workload off Bedrock to chase native Claude features is usually a bad trade.
Bedrock is also the answer when private networking matters. Bedrock supports VPC endpoints, PrivateLink, and IAM policy controls that fit cleanly into existing AWS security review processes. Claude Platform on AWS uses AWS auth and billing, but the feature surface is the native Claude API, which means the private-networking story is closer to the direct Anthropic API than to Bedrock.
The case studies we published this week make the Bedrock case concrete. The Sentinel AI ops layer runs on the us.anthropic.claude-sonnet-4-6 Bedrock inference profile because the workload is pure inference inside an AWS-native operations pipeline. There is no reason to pay the migration cost to leave Bedrock when the feature surface is not the constraint.
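A pure-inference workload of this shape is a single call against Bedrock's Converse API. The sketch below is illustrative, not the Sentinel implementation: the inference profile ID comes from the paragraph above, while the region, prompts, and inference settings are placeholder assumptions.

```python
def build_converse_request(system_prompt: str, user_message: str) -> dict:
    """Assemble a plain Bedrock Converse request: system prompt, one user turn."""
    return {
        # Inference profile ID from the case study above.
        "modelId": "us.anthropic.claude-sonnet-4-6",
        "system": [{"text": system_prompt}],
        "messages": [{"role": "user", "content": [{"text": user_message}]}],
        # Settings are assumed values, not the production configuration.
        "inferenceConfig": {"maxTokens": 1024, "temperature": 0.2},
    }


def run_inference(request: dict, region: str = "us-east-1") -> str:
    """Send the request through the Bedrock runtime and return the text reply."""
    import boto3  # imported here so the payload builder stays dependency-free

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(**request)
    return response["output"]["message"]["content"][0]["text"]
```

The whole workload is the request shape plus IAM; nothing in it touches the agent-platform surface, which is exactly why it stays on Bedrock.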
When To Choose Claude Platform on AWS
When should an enterprise choose Claude Platform on AWS over Bedrock?
Choose Claude Platform on AWS when the workload needs Managed Agents, skills, code execution, web fetch, or any native Claude API feature Bedrock does not yet ship, and when procurement requires AWS billing or AWS IAM as a hard constraint. It is the right substrate when feature parity matters but a direct Anthropic vendor relationship is blocked.
Claude Platform on AWS is built for the enterprise that wants the full Claude API surface but cannot or will not open a second vendor relationship outside AWS. Procurement teams that have spent years standardizing on AWS billing, AWS support contracts, and AWS-issued IAM credentials get to keep that posture while accessing every Claude capability the day it ships.
The practical wedge is Managed Agents, skills, and code execution. These features turn Claude from an inference endpoint into an agent platform — and they are where enterprise teams now spend most of their engineering effort. A workload that needs an agent loop with tool use, code execution sandboxes, and structured skills cannot run on Bedrock today. It has to be on the native Claude API. Claude Platform on AWS is what makes that legal under AWS-only procurement rules.
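The agent loop in question looks roughly like the sketch below on the native Messages API. The get_ticket tool, its schema, and the handler are placeholder assumptions for illustration; Managed Agents exist to take this hand-rolled loop off the team's plate entirely.

```python
# Hand-rolled tool-use loop on the native Messages API. The tool name,
# schema, and handler are hypothetical, not a real integration.
TOOLS = [{
    "name": "get_ticket",
    "description": "Fetch a support ticket by ID.",
    "input_schema": {
        "type": "object",
        "properties": {"ticket_id": {"type": "string"}},
        "required": ["ticket_id"],
    },
}]

# Stub handler standing in for a real ticketing-system lookup.
HANDLERS = {"get_ticket": lambda args: {"ticket_id": args["ticket_id"], "status": "open"}}


def dispatch_tool(name: str, args: dict) -> dict:
    """Route a model tool call to the matching local handler."""
    return HANDLERS[name](args)


def agent_loop(prompt: str, model: str = "claude-sonnet-4-6") -> str:
    from anthropic import Anthropic  # local import: only needed when actually calling

    client = Anthropic()
    messages = [{"role": "user", "content": prompt}]
    while True:
        msg = client.messages.create(
            model=model, max_tokens=1024, tools=TOOLS, messages=messages
        )
        if msg.stop_reason != "tool_use":
            return msg.content[0].text
        # Echo the assistant turn back, then answer each tool call.
        messages.append({"role": "assistant", "content": msg.content})
        results = [
            {"type": "tool_result", "tool_use_id": block.id,
             "content": str(dispatch_tool(block.name, block.input))}
            for block in msg.content if block.type == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
```

Every line of that loop is code the team owns, monitors, and debugs; the Managed Agents pitch is that the loop, state, and tool plumbing become the platform's problem.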
This substrate is also the cleaner answer for any enterprise running the kinds of agent-driven workloads documented in our agent orchestration and AI agent operations guides. Those patterns assume native Claude features.
When To Choose the Direct Anthropic API
When does the direct Anthropic API still win?
The direct Anthropic API wins when the workload is multi-cloud or runs outside AWS, when the team needs the closest possible relationship with Anthropic support and roadmap, or when the buyer is a small or mid-sized engineering team that does not benefit from AWS consolidated billing. It also wins for prototyping where simplicity matters more than procurement alignment.
The direct Anthropic API is the simplest substrate. One API key, one invoice, one set of docs. For a development team prototyping an agent system, the friction of routing through AWS IAM and AWS billing is overhead with no offsetting benefit. Get the API key, build, iterate.
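That entire prototype loop fits in a few lines. A minimal sketch, assuming the SDK reads ANTHROPIC_API_KEY from the environment and the model name from this article:

```python
def user_turn(prompt: str) -> list:
    """Single-turn messages payload in the shape the Messages API expects."""
    return [{"role": "user", "content": prompt}]


def ask_claude(prompt: str, model: str = "claude-sonnet-4-6") -> str:
    """One key, one client, one call. Model name follows the article; adjust to taste."""
    from anthropic import Anthropic  # local import: SDK only needed at call time

    client = Anthropic()  # picks up ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model=model, max_tokens=1024, messages=user_turn(prompt)
    )
    return message.content[0].text
```

No IAM roles, no request signing, no billing tags. For a prototype, that absence of plumbing is the feature.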
It is also the right answer when the workload is multi-cloud. An agent platform that has to run inference across AWS, Azure, and Google Cloud is not well served by tying its Claude calls to AWS-specific authentication. The direct Anthropic API treats every environment the same.
And it is the right answer when the relationship with Anthropic itself matters. Direct customers get the closest signal on roadmap, the most direct support escalation paths, and the cleanest path into programs like the Consulting Partner Network for partners who want to resell or co-deliver Claude-based systems.
Feature Parity: What Bedrock Does Not Yet Ship
What native Claude features does Amazon Bedrock not yet ship?
As of May 2026, Amazon Bedrock does not yet ship Claude Managed Agents, the full skills system, native code execution, native web search and web fetch, or the Anthropic Console with prompt improvement and evaluations. Bedrock supports base inference on Claude Opus 4.7, Sonnet 4.6, and Haiku 4.5, but the agent-platform surface lags by months.
The feature gap is not academic. Skills define reusable agent behaviors with their own scoping and version control. Managed Agents handle the orchestration loop, tool invocation, and state management that teams used to build by hand. Code execution sandboxes run model-generated code in a controlled environment without the team needing to operate that sandbox themselves.
These are not optional capabilities for any enterprise building agent systems in 2026. They are the substrate of the substrate. A team running Bedrock-only is rebuilding those layers in-house, which is fine for a few quarters but becomes a maintenance liability fast.
Pricing and Billing Practicalities
All three substrates charge by token volume on the same Claude model family, with each pricing surface set by its respective platform. Bedrock and Claude Platform on AWS land on the AWS invoice. The direct Anthropic API lands on a separate Anthropic invoice. For enterprises with AWS Enterprise Discount Programs in place, the AWS-billed substrates may apply against committed spend depending on the contract.
The hidden cost is not the per-token rate. It is the engineering time spent compensating for Bedrock's feature gap when the workload actually needs Managed Agents or skills. A team rebuilding agent orchestration on top of Bedrock inference is spending six-figure engineering quarters on infrastructure that Anthropic now ships as a managed feature on the other two substrates.
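A back-of-envelope spend model makes the point. Every figure below is a placeholder assumption, not published pricing or a real loaded-cost number; substitute your contracted rates and your own payroll.

```python
# All figures are placeholder assumptions for illustration, not published rates.
RATES_PER_MTOK = {"claude-sonnet-4-6": (3.00, 15.00)}  # (input, output) USD/MTok, assumed
ENGINEER_QUARTER_USD = 125_000  # assumed fully loaded cost of one engineer-quarter


def token_spend(model: str, input_tokens: int, output_tokens: int) -> float:
    """Monthly token spend in USD for a given traffic profile."""
    in_rate, out_rate = RATES_PER_MTOK[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000


def rebuild_overhead(engineer_quarters: float) -> float:
    """Engineering cost of rebuilding agent orchestration in-house."""
    return engineer_quarters * ENGINEER_QUARTER_USD
```

On these assumed numbers, 100M input and 20M output tokens a month is $600 of token spend, while two engineers spending two quarters rebuilding orchestration is $500,000. The per-token rate is noise next to the rebuild.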
Migration Considerations
Moving a workload between substrates is straightforward at the API call layer — the request and response shapes are aligned across all three. The complexity is in the surrounding plumbing: IAM and key management, request signing, regional routing, observability hooks, and any wrappers the team has built for retry and backoff.
The cleanest migration paths are:
- Bedrock → Claude Platform on AWS when the workload needs Managed Agents or skills and the team wants to keep AWS billing.
- Direct Anthropic API → Claude Platform on AWS when procurement forces AWS-only billing for a workload that started on the direct API.
- Bedrock → Direct Anthropic API when the workload is multi-cloud or the team wants the closest Anthropic relationship.
The migration that almost never makes sense is Claude Platform on AWS back to Bedrock. The feature surface only goes one direction in 2026.
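Because the request and response shapes are aligned, the substrate choice can sit behind one thin routing function, which is what keeps these migrations cheap. The substrate names below are illustrative assumptions; only the two call shapes come from the respective SDKs, and the auth difference (IAM versus API key) is handled outside this sketch.

```python
from enum import Enum


class Substrate(Enum):
    """Where a workload's Claude calls go. Names are illustrative assumptions."""
    BEDROCK = "bedrock"
    CLAUDE_PLATFORM_ON_AWS = "claude-platform-on-aws"
    DIRECT_API = "direct-api"


def uses_native_surface(substrate: Substrate) -> bool:
    """Only Bedrock diverges from the native Messages API surface."""
    return substrate is not Substrate.BEDROCK


def complete(substrate: Substrate, model: str, prompt: str) -> str:
    """Send one prompt through whichever substrate the workload is pinned to."""
    if substrate is Substrate.BEDROCK:
        import boto3  # Bedrock path: Converse API shape

        client = boto3.client("bedrock-runtime")
        resp = client.converse(
            modelId=model,
            messages=[{"role": "user", "content": [{"text": prompt}]}],
        )
        return resp["output"]["message"]["content"][0]["text"]
    # Claude Platform on AWS and the direct API share the native Messages shape.
    from anthropic import Anthropic

    client = Anthropic()
    msg = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text
```

A migration then becomes a one-line change to the substrate a workload is pinned to, plus the surrounding IAM and observability work the section above describes.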
How This Maps To the iSimplifyMe Substrate Practice
Our infrastructure work runs across all three substrates depending on what the client workload actually needs. The Sentinel AI ops layer uses Bedrock because the workload is pure inference. The Apex ticket automation runs Claude Haiku on Bedrock for the same reason.
Agent-platform work for clients moving toward the patterns in our citation authority engineering framework increasingly lands on Claude Platform on AWS or the direct Anthropic API because the workloads need skills, Managed Agents, and the rest of the native feature surface.
The wrong move is to default the whole enterprise to one substrate. The right move is to pick per workload, and the May 2026 announcement makes that easier — Claude Platform on AWS now covers the workloads that used to force a procurement carve-out for the direct Anthropic API.
If you are running a substrate audit, the free AEO scanner is the fastest way to assess which of your existing pages already document your Claude infrastructure for AI retrieval systems. For deeper substrate strategy work, our AEO infrastructure practice includes substrate selection inside a broader AI infrastructure engagement.
Frequently Asked Questions
Is Claude Platform on AWS the same thing as Amazon Bedrock?
No. Claude Platform on AWS is the native Anthropic Claude API made available through AWS billing and AWS IAM authentication. Amazon Bedrock is AWS-managed inference that hosts multiple model families, including Claude, but lags the native Claude API on agent-platform features like Managed Agents, skills, and code execution. The two are separate products with different feature surfaces.
Which Claude models are available on Claude Platform on AWS?
Claude Platform on AWS supports Claude Opus 4.7, Claude Sonnet 4.6, and Claude Haiku 4.5 at general availability, with feature parity to the direct Anthropic API on day one of any new model release.
Does Claude Platform on AWS work with AWS PrivateLink and VPC endpoints?
Claude Platform on AWS is the native Anthropic API surface delivered through AWS authentication and billing. The private networking story is closer to the direct Anthropic API than to Bedrock. Enterprises with strict VPC-endpoint or PrivateLink requirements should validate the current networking surface with their AWS account team before committing.
Can a workload mix substrates?
Yes. Most mature enterprise AI deployments in 2026 run Bedrock for pure inference workloads and Claude Platform on AWS or the direct Anthropic API for agent-platform workloads. The model family and request shape are aligned across substrates, so per-workload substrate selection is the recommended architecture.
Will Bedrock eventually ship Managed Agents and skills?
AWS has not committed to a public timeline for Bedrock parity with the native Claude API on agent-platform features. The launch of Claude Platform on AWS suggests that Anthropic and AWS expect the feature surface gap between Bedrock and the native API to persist for at least the medium term, with Claude Platform on AWS positioned as the AWS-billed path to native feature parity.