OpenAI Codex on Amazon Bedrock: What Works Today, AWS Setup & Guardrails

Paulo Frugis
April 23, 2026 · 9 min read

The OpenAI and Amazon partnership makes Bedrock a serious path for enterprise agents, but it does not mean every OpenAI runtime or Codex-class model is generally available in Bedrock today. The important move for platform teams is to separate what works now from what is announced, private-preview-gated, or still needs to be validated in your AWS account.

Last verified: April 23, 2026. OpenAI says the Stateful Runtime Environment for agents in Amazon Bedrock will be available soon. AWS documents OpenAI GPT OSS models in Bedrock, the Bedrock Mantle endpoint for OpenAI-compatible APIs, and Bedrock Runtime for AWS-native APIs. Codex CLI 0.123.0 added a built-in amazon-bedrock provider with configurable AWS profile support.

This guide shows how to prepare AWS safely: choose the account strategy, validate Bedrock access, understand the correct model IDs, configure IAM Identity Center, point Codex at Bedrock, add cost alerts, and keep the foundation ready for GPT-5.4, Codex models, Frontier, or Stateful Runtime when access appears in your account.

What works today and what is not yet generally available

| Capability | Status on Apr 23, 2026 | What to do now |
| --- | --- | --- |
| OpenAI GPT OSS on Bedrock | Available as Bedrock models, including openai.gpt-oss-120b-1:0 on Runtime and openai.gpt-oss-120b on Mantle. | Use it to validate authentication, endpoints, IAM, cost controls, and local Codex operation. |
| Codex CLI amazon-bedrock provider | Available in Codex CLI 0.123.0, with configurable AWS profile support. | Upgrade Codex before testing and isolate the AWS profile for the pilot. |
| Bedrock Mantle | Documented by AWS for Responses API, Chat Completions, and OpenAI-compatible workflows. | Use it when the client, SDK, or tool expects an OpenAI-compatible format. |
| Stateful Runtime Environment in Bedrock | Announced by OpenAI as available soon, not a public resource listed in every account. | Prepare identity, networking, audit, and cost controls now, but do not hard-code a future runtime ID. |
| GPT-5.4, GPT-5.3-Codex, or Frontier in Bedrock | Do not assume public availability as a provisionable Bedrock model. | Validate with list-foundation-models and official documentation before promising production use. |

Should your team start now or wait?

| Situation | Recommendation | Why |
| --- | --- | --- |
| You only want to use the future Stateful Runtime or Frontier directly | Wait for AWS/OpenAI access, but prepare the foundation. | The runtime still needs to appear in your account, but IAM, SSO, SCPs, cost controls, and audit do not depend on the final runtime. |
| You want to validate Codex through Bedrock with OpenAI models available today | Start now in an isolated account or sandbox OU. | GPT OSS 120B lets you test credentials, endpoints, permissions, observability, and spend limits. |
| You support US/Canadian customers or have residency, audit, or private networking requirements | Run a readiness review before the pilot. | The risk is less about the config file and more about identity, regions, logs, traffic path, data residency, and usage containment. |
| You plan to release this to multiple developers | Add guardrails before rollout. | Coding agents can create expensive loops, repeated calls, and usage that is hard to attribute after the fact. |

AWS account strategy for Codex and agent pilots

Do not start in production just because it is convenient. The AWS account boundary is one of the simplest ways to control billing, permissions, CloudTrail, Service Control Policies, and blast radius.

| Model | When to use it | Minimum controls |
| --- | --- | --- |
| New AWS account | Startup, technical lab, or validation without an existing AWS environment. | Corporate email or secure list for root, MFA, alternate contacts, IAM Identity Center, and a monthly budget before the first test. |
| AWS Organizations member account | Company with Organizations, Control Tower, or a landing zone. | Sandbox or AI platform OU, regional/model SCPs, centralized CloudTrail, account budget, and separate permission sets. |
| AI platform account | Teams operating Bedrock for multiple applications. | Projects by application, cost tags, clear owners, IAM review, and a promotion path to production. |
| Existing production account | Only after the path has been validated. | Change management, endpoint policy, CloudTrail, service budget, rollback tests, and compliance review. |

Region and data residency for US and Canadian teams

For US and Canadian teams, treat us-east-1, us-east-2, and us-west-2 as the primary validation path for OpenAI GPT OSS on Bedrock today. Those regions appear in AWS model documentation for openai.gpt-oss-120b-1:0 and are the most practical path for testing the current Codex provider, IAM permissions, the Mantle endpoint, Budgets, CloudTrail, and PrivateLink.

For Canadian organizations, the main question is not only latency. It is whether the pilot may invoke models in a US region while usage remains bounded by account, IAM, logs, and data policy. If Canadian data residency is mandatory, document that restriction and keep sensitive data out of the pilot until the required model, endpoint, quota, and network path are available and approved in the required Canadian region. For Brazil or LATAM deployments, the same logic applies to sa-east-1: validate model availability, endpoint support, quota, PrivateLink, and LGPD requirements before moving sensitive data into the pilot.

Bedrock Mantle vs. Bedrock Runtime: endpoint and model ID guide

The most common implementation error is mixing the Runtime model ID with the Mantle model ID. AWS documents both paths for GPT OSS 120B:

| Path | Best fit | Endpoint | Model ID |
| --- | --- | --- | --- |
| Bedrock Mantle | Codex, Responses API, Chat Completions, and OpenAI-compatible clients. | https://bedrock-mantle.{region}.api.aws/v1 | openai.gpt-oss-120b |
| Bedrock Runtime | AWS SDK, InvokeModel, InvokeModelWithResponseStream, and Converse. | https://bedrock-runtime.{region}.amazonaws.com | openai.gpt-oss-120b-1:0 |
| Future Stateful Runtime or Frontier | Production agents with state, tools, approvals, identity, and governance. | Verify when access opens. | Verify when it appears in your account. |
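To keep the two IDs from getting crossed in scripts, the mapping above can be encoded in a tiny helper. This is an illustrative sketch (the function name is ours, not an AWS or Codex tool), using only the IDs documented in the table:

```shell
# Illustrative helper: return the GPT OSS 120B model ID that matches
# each Bedrock path, per the table above. Pure string logic, no AWS calls.
model_id_for_path() {
  case "$1" in
    mantle)  echo "openai.gpt-oss-120b" ;;      # OpenAI-compatible Mantle endpoint
    runtime) echo "openai.gpt-oss-120b-1:0" ;;  # InvokeModel / Converse on Runtime
    *)       echo "unknown path: $1" >&2; return 1 ;;
  esac
}

model_id_for_path mantle   # prints openai.gpt-oss-120b
model_id_for_path runtime  # prints openai.gpt-oss-120b-1:0
```

Centralizing the lookup means a future ID change (for example, when new models open up) touches one function instead of every script.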

IAM, SSO, and least-privilege access

Start with IAM Identity Center. Each person should use temporary credentials refreshed through SSO, not permanent access keys on laptops. Create one administrative permission set for bootstrap and another restricted permission set for Codex/Bedrock users.

aws configure sso --profile codex-bedrock
aws sso login --profile codex-bedrock
aws sts get-caller-identity --profile codex-bedrock

For local validation through Runtime, a restrictive starting point is to allow global model discovery and invocation only for approved models:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:ListFoundationModels",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": [
        "arn:aws:bedrock:*::foundation-model/openai.gpt-oss-120b-1:0",
        "arn:aws:bedrock:*::foundation-model/openai.gpt-oss-20b-1:0"
      ]
    }
  ]
}

If your organization uses SCPs, explicitly validate that Bedrock is not blocked in the selected regions. Use explicit denies for regions and models outside policy, but avoid a policy so broad that every future model is automatically usable.
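As a sketch, an explicit-deny SCP that blocks Bedrock actions outside the pilot regions could look like the following; the region list mirrors the validation regions discussed earlier, and you would tighten or extend the conditions to match your own policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyBedrockOutsidePilotRegions",
      "Effect": "Deny",
      "Action": "bedrock:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["us-east-1", "us-east-2", "us-west-2"]
        }
      }
    }
  ]
}
```

Because this is a deny on everything outside the listed regions, new models in approved regions remain usable, while any accidental call elsewhere fails closed.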

Configure Codex with the built-in amazon-bedrock provider

Upgrade Codex to a version that includes the Bedrock provider. The Codex changelog records the amazon-bedrock provider in CLI 0.123.0.

npm install -g @openai/codex@0.123.0
codex --version

In ~/.codex/config.toml, use the Mantle model ID, not the Runtime model ID:

model = "openai.gpt-oss-120b"
model_provider = "amazon-bedrock"

[model_providers.amazon-bedrock.aws]
profile = "codex-bedrock"

The built-in provider defines the Bedrock Mantle endpoint, the responses API, and AWS SigV4 authentication. In the first provider version, Codex uses the default Mantle endpoint in us-east-1 and lets you configure the AWS profile. Treat region, model, and runtime as isolated configuration because those points can change when AWS opens Stateful Runtime, Frontier, or Codex models to more accounts.
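One lightweight way to honor that advice is to keep the volatile values in a single file that scripts and runbooks reference. The file name and BEDROCK_PILOT_* variable names below are our own convention, not something Codex or the AWS CLI reads:

```shell
# Illustrative convention: keep the values most likely to change
# (region, model, profile) in one place, so a future model or runtime
# swap touches a single file instead of scattered scripts.
cat > pilot.env <<'EOF'
BEDROCK_PILOT_REGION=us-east-1
BEDROCK_PILOT_MODEL=openai.gpt-oss-120b
BEDROCK_PILOT_PROFILE=codex-bedrock
EOF

. ./pilot.env
echo "$BEDROCK_PILOT_MODEL"   # prints openai.gpt-oss-120b
```

When Stateful Runtime or Codex models open up in your account, the switch becomes a one-line edit plus the corresponding config.toml change.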

Validate before developer rollout

Do not call the pilot ready until you pass a simple end-to-end check:

aws bedrock list-foundation-models \
  --profile codex-bedrock \
  --region us-east-1 \
  --query "modelSummaries[?providerName=='OpenAI'].[modelId,modelName,modelLifecycle.status]" \
  --output table

  • Confirm the model appears in the selected region.
  • Confirm the SSO profile expires and refreshes correctly.
  • Run a short Codex prompt against a small repository before pointing it at monorepos.
  • Check CloudTrail, budgets, and alarms after the test.
  • Document Codex version, AWS CLI version, region, endpoint, model, and validation date.
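The local pieces of that checklist can be scripted before any AWS call is made. A sketch, where the 0.123.0 floor comes from the Codex changelog cited in this guide and version_ge is our own helper:

```shell
# Illustrative preflight for the local prerequisites above.
need_codex="0.123.0"

version_ge() {  # true when dotted version $1 >= $2
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

installed="0.125.0"   # in practice, parse this from `codex --version`
if version_ge "$installed" "$need_codex"; then
  echo "codex version ok"
else
  echo "upgrade codex: $installed < $need_codex"
fi
```

The same pattern extends to checking the AWS CLI version and the presence of the pilot profile before handing the setup to developers.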

Cost controls, Projects, and usage attribution

Install cost controls before the first pilot. In AWS Budgets, create a monthly account budget with alerts at 50%, 80%, and 100% actual spend, plus 100% forecasted spend. Send alerts to a FinOps or platform list, not to a single person.
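Those thresholds map onto the Budgets API's notification objects. A sketch of the --notifications-with-subscribers file for aws budgets create-budget, with a placeholder address to swap for your FinOps list; repeat the ACTUAL entry at 80 and 100 percent:

```json
[
  {
    "Notification": {
      "NotificationType": "ACTUAL",
      "ComparisonOperator": "GREATER_THAN",
      "Threshold": 50,
      "ThresholdType": "PERCENTAGE"
    },
    "Subscribers": [
      { "SubscriptionType": "EMAIL", "Address": "finops-alerts@example.com" }
    ]
  },
  {
    "Notification": {
      "NotificationType": "FORECASTED",
      "ComparisonOperator": "GREATER_THAN",
      "Threshold": 100,
      "ThresholdType": "PERCENTAGE"
    },
    "Subscribers": [
      { "SubscriptionType": "EMAIL", "Address": "finops-alerts@example.com" }
    ]
  }
]
```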

| Control | When to use it | Note |
| --- | --- | --- |
| Account budget | Every pilot. | Protects against loops and unexpected usage in an isolated account. |
| Amazon Bedrock service-filtered budget | Every Bedrock pilot. | Separates AI consumption from the rest of AWS spend. |
| Cost Anomaly Detection | Teams with recurring usage. | Flags unusual patterns before month-end. |
| Projects API | Applications using OpenAI-compatible APIs on Bedrock Mantle. | AWS positions Projects as application-level isolation with tags, cost tracking, and observability. |
| Separate accounts or tags | Multi-team environments. | Prevents experiments, development, and production from blending together. |

Security, audit, and private networking

Pointing Codex at Bedrock is the easy part. The security review needs to cover who can invoke models, which regions are allowed, what data enters prompts, how logs are retained, how spend is limited, and which network path is used.

  • CloudTrail: enable centralized trails and review Bedrock events from day one.
  • SCPs: block regions or models outside company policy, with controlled exceptions for the pilot account.
  • PrivateLink: AWS documents interface endpoints for bedrock, bedrock-runtime, and bedrock-mantle, including private DNS for bedrock-mantle.{region}.api.aws.
  • Endpoint policies: apply endpoint policies when traffic must stay restricted to approved actions and resources.
  • Sensitive data: start with low-risk repositories and tasks before allowing personal, customer, or production data.

Troubleshooting common setup failures

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| model not found | Runtime ID used on Mantle, or Mantle ID used on Runtime. | Use openai.gpt-oss-120b in Codex/Mantle and openai.gpt-oss-120b-1:0 in Runtime. |
| AccessDeniedException | Permission set, SCP, or endpoint policy blocking Bedrock. | Review IAM, SCPs, region, and model ARN. |
| Codex ignores Bedrock | Old Codex version or wrong model_provider. | Upgrade to Codex CLI 0.123.0 or later and confirm model_provider = "amazon-bedrock". |
| SSO works in the terminal but fails in Codex | Different profile or expired session. | Run aws sso login --profile codex-bedrock and confirm the same profile in config.toml. |
| Works in us-east-1, fails elsewhere | Model, endpoint, quota, or PrivateLink path not validated in that region. | Run list-foundation-models in the target region and verify quotas and endpoints. |
| Cost rises with no clear owner | No tags, Projects, isolated account, or service budget. | Add a Bedrock budget, Cost Anomaly Detection, and attribution by account, tag, or Project. |

Readiness checklist

  • New AWS account or member account created for the pilot.
  • Root MFA, alternate contacts, and IAM Identity Center configured.
  • Region decision documented for us-east-1, us-east-2, us-west-2, and Canadian residency requirements.
  • OpenAI models listed with list-foundation-models.
  • Correct Mantle and Runtime model IDs documented.
  • Codex CLI 0.123.0 or later installed.
  • Local SSO profile working with temporary credentials.
  • Account budget, Bedrock budget, and shared alerts enabled.
  • CloudTrail, SCPs, PrivateLink, and endpoint policies reviewed for pilot risk.
  • Future switch plan documented for GPT-5.4, Codex, Frontier, or Stateful Runtime when AWS opens access.

FAQ

Is GPT-5.4 or GPT-5.3-Codex available as a public Bedrock model today?

Do not assume so. Treat GPT-5.4, GPT-5.3-Codex, OpenAI Frontier, and Stateful Runtime as separate availability questions. Validate what your account can list and invoke today before promising an architecture.

Is Codex itself running inside Bedrock?

In the current path, Codex uses the amazon-bedrock provider to call an OpenAI model available through Bedrock Mantle. That is not the same as saying the future Codex/Frontier runtime is already provisionable as a Bedrock resource.

How should US and Canadian teams choose a region?

Start with us-east-1, us-east-2, or us-west-2 when the goal is to validate OpenAI GPT OSS and the current Codex provider quickly. For Canadian companies, document whether calls to US regions are allowed; if Canadian residency is mandatory, keep sensitive data out of the pilot until the required model, endpoint, quota, and network path are approved in the required Canadian region.

Does the pilot need PrivateLink and endpoint policies from day one?

Not every pilot needs them, but regulated environments should evaluate PrivateLink and endpoint policies early. AWS documents endpoints for bedrock-mantle, bedrock-runtime, and Bedrock control-plane actions.

Do Projects replace separate AWS accounts?

Not completely. Projects help isolate applications inside one account when using OpenAI-compatible APIs on Mantle. AWS accounts remain stronger boundaries for billing, ownership, and governance.

How Elevata helps

This work takes more than a config.toml file. Elevata helps teams validate AWS readiness for Codex and OpenAI agents: account and OU design, IAM/SCP, Bedrock, Mantle, PrivateLink, CloudTrail, budgets, Projects, observability, and developer rollout.

If your team wants to prepare AWS before agent usage spreads, review your AWS readiness with Elevata.
