OpenAI on Amazon Bedrock: Codex, GPT-5.5, Managed Agents & AWS Setup

Paulo Frugis
Published April 23, 2026

This guide covers what is available now, how to request access, and how to prepare your AWS foundation.

Last verified: April 28, 2026.

What is available, limited preview, or account-dependent?

| Capability | Status on April 28, 2026 | How to verify | Production caveat |
| --- | --- | --- | --- |
| OpenAI gpt-oss on Bedrock | Publicly documented in Bedrock, with separate Mantle and Runtime model IDs. | Validate with list-foundation-models, the Bedrock console, endpoint, region, quota, and a real invocation. | Best path for proving controls, but not the same as access to every frontier or Codex capability. |
| OpenAI GPT-5.5 and frontier models on Bedrock | OpenAI announced its models, including GPT-5.5, on AWS in limited preview; usage applies toward existing AWS cloud commitments for eligible customers. | Confirm eligibility, region, endpoint, quota, terms, and whether the model appears in your AWS environment. | Limited preview is not GA. Do not design production until account access is confirmed. |
| Codex on AWS | Codex is used by more than 4 million people weekly. It is announced in limited preview on AWS, starting with CLI, desktop app, and VS Code extension using Bedrock as the provider; usage applies toward existing AWS cloud commitments for eligible customers. | Validate Codex version, supported surface, AWS credentials, endpoint, model, region, and repository policy. | Do not release against sensitive repositories until permissions, logs, cost, and human review are defined. |
| Amazon Bedrock Managed Agents powered by OpenAI | Limited preview. Built with the OpenAI agent harness, engineered for faster execution, sharper reasoning, and reliable steering of long-running tasks. Each agent has per-agent identity and logs every action for auditability. | Request/validate preview access, region, features, identity, tools, logs, quotas, support path, and AgentCore configuration. | Bedrock AgentCore is the default compute environment for Managed Agents, with authorization policy enforcement, agent and tool discovery, and observability at scale. Do not treat preview as production approval until terms and controls are validated. |
| Direct OpenAI API | Available outside Bedrock for OpenAI-native capabilities. | Review procurement, data path, authentication, logs, cost, and internal policy. | May not satisfy the same AWS controls, commitments, or residency requirements. |

What changed on April 28?

OpenAI announced three limited-preview areas on AWS: OpenAI models on Bedrock, Codex on AWS, and Amazon Bedrock Managed Agents powered by OpenAI. The announcement follows the April 27 Microsoft/OpenAI amendment, which made cross-cloud serving possible by allowing OpenAI to serve products on any cloud while making Microsoft's license non-exclusive through 2032. This positions AWS as a real path for OpenAI models, agents, and tools inside the security, governance, billing, and procurement workflows AWS customers already use.

But the distinction matters: a limited-preview announcement is not general availability in every account. Before production planning, validate that the capability exists in your account, in the right region, with approved endpoint, quota, terms, logging, IAM, cost controls, and data policy.

Which OpenAI-on-AWS path should you use?

| Path | Use it when | Watch out for |
| --- | --- | --- |
| gpt-oss on Bedrock | You need a documented path to validate IAM, endpoints, budgets, Mantle/Runtime, and developer workflow now. | Do not confuse it with general access to GPT-5.5, Codex, or Managed Agents. |
| Codex on Bedrock | You want developers to use Codex through AWS credentials, billing, and infrastructure. | Confirm preview access, version, supported surface, region, allowed repositories, and tool boundaries. |
| OpenAI frontier models on Bedrock | You want the latest model capability inside AWS controls and procurement. | Treat it as preview-dependent until the model appears and works in your account. |
| Bedrock Managed Agents | You need agents with long-running tasks, identity, memory, tools, governance, and AWS integration. | Validate preview access, AgentCore, audit, quotas, regions, terms, and support before production. |
| Direct OpenAI API | You need OpenAI-native features outside the AWS path. | Procurement, data path, residency, logs, and AWS commitments may differ. |

Should your team start now or wait?

| Situation | Recommendation | Why |
| --- | --- | --- |
| You want to validate gpt-oss or the Bedrock provider for Codex | Start now in an isolated account or sandbox OU. | You can test credentials, endpoints, budgets, CloudTrail, IAM, quotas, and workflow without production commitment. |
| You want GPT-5.5, preview Codex, or Managed Agents | Prepare the platform and validate access before production commitment. | The status is limited preview; what matters is what is available in your account. |
| You will release to multiple developers | Add guardrails before rollout. | Coding agents can touch sensitive repositories, create costly loops, and make usage attribution hard. |
| You have residency, audit, or regulated-data requirements | Run a readiness review before the pilot. | The risk is data, region, logs, egress, preview terms, and governance, not just the config file. |

AWS account strategy for Codex and agent pilots

Do not start in production just because it is convenient. The AWS account boundary remains one of the simplest ways to control billing, permissions, CloudTrail, Service Control Policies, and blast radius.

| Model | When to use it | Minimum controls |
| --- | --- | --- |
| New AWS account | Startup, technical lab, or validation without an existing AWS environment. | Corporate email or secure list for root, MFA, alternate contacts, IAM Identity Center, and a monthly budget before the first test. |
| AWS Organizations member account | Company with Organizations, Control Tower, or a landing zone. | Sandbox or AI platform OU, regional/model SCPs, centralized CloudTrail, account budget, and separate permission sets. |
| AI platform account | Teams operating Bedrock for multiple applications. | Projects by application, cost tags, clear owners, IAM review, and a promotion path to production. |
| Existing production account | Only after the path has been validated. | Change management, endpoint policy, CloudTrail, service budget, rollback tests, and compliance review. |

Region and data residency for US and Canadian teams

For US teams, us-east-1, us-east-2, and us-west-2 are practical validation regions for OpenAI gpt-oss on Bedrock because they are listed in the AWS model documentation. Use them to prove IAM, endpoint behavior, budgets, CloudTrail, PrivateLink, and Codex workflow before expanding.

For Canadian organizations, latency is secondary to data residency and policy approval. Document whether a pilot may invoke models in a US region while usage remains bounded by account, IAM, logs, and data policy. If Canadian residency is mandatory, keep sensitive data out of the pilot until the required model, endpoint, quota, and network path are available and approved in the required Canadian region.
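
To make the region decision concrete, the sketch below lists which OpenAI model IDs are actually visible per candidate region. The boto3 call assumes the codex-bedrock SSO profile used elsewhere in this guide; the filter helper is pure Python so it can be tested without AWS access.

```python
def openai_model_ids(response: dict) -> list[str]:
    """Extract OpenAI model IDs from a list_foundation_models response."""
    return [
        m["modelId"]
        for m in response.get("modelSummaries", [])
        if m.get("providerName") == "OpenAI"
    ]

def check_regions(regions: list[str], profile: str = "codex-bedrock") -> None:
    """Print OpenAI model availability for each region (requires AWS creds)."""
    import boto3  # deferred so the pure helper above works offline

    session = boto3.Session(profile_name=profile)
    for region in regions:
        client = session.client("bedrock", region_name=region)
        ids = openai_model_ids(client.list_foundation_models())
        print(region, ids or "no OpenAI models listed")

# Example: check_regions(["us-east-1", "us-east-2", "us-west-2"])
```

Record the output in the region-decision document rather than assuming parity across regions.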

Bedrock Mantle vs. Bedrock Runtime vs. Managed Agents

The most common implementation error is mixing model ID, endpoint, and supported feature. AWS documents different paths for gpt-oss-120b:

| Path | Best fit | Endpoint / ID | Relevant features |
| --- | --- | --- | --- |
| Bedrock Mantle | Codex, Responses API, Chat Completions, and OpenAI-compatible clients. | https://bedrock-mantle.{region}.api.aws/v1 with model ID openai.gpt-oss-120b | AWS recommends Mantle when possible for this model; the card shows tool calling and Projects support. |
| Bedrock Runtime | AWS SDK, InvokeModel, streaming, and Converse. | https://bedrock-runtime.{region}.amazonaws.com with model ID openai.gpt-oss-120b-1:0 | The card shows response streaming and Bedrock Guardrails, but not Agents/Flows/Knowledge Bases for this path. |
| Bedrock Managed Agents powered by OpenAI | Agents with state, memory, tools, identity, audit, and long-running tasks. | Preview; validate access and details in your account. | AWS positions Managed Agents with AgentCore as the default runtime environment and integrated governance controls. |
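
Because the Mantle/Runtime pairing is the most common implementation error, it can help to centralize it in code so nothing ever mixes a Runtime model ID with a Mantle endpoint. The function name and structure below are ours; the URLs and IDs are the ones documented for gpt-oss-120b.

```python
# Single source of truth for the two documented gpt-oss-120b paths.
PATHS = {
    "mantle": {
        "endpoint": "https://bedrock-mantle.{region}.api.aws/v1",
        "model_id": "openai.gpt-oss-120b",
    },
    "runtime": {
        "endpoint": "https://bedrock-runtime.{region}.amazonaws.com",
        "model_id": "openai.gpt-oss-120b-1:0",
    },
}

def bedrock_target(path: str, region: str) -> tuple[str, str]:
    """Return the matching (endpoint, model_id) pair for a path name."""
    if path not in PATHS:
        raise ValueError(f"unknown path {path!r}; expected one of {sorted(PATHS)}")
    entry = PATHS[path]
    return entry["endpoint"].format(region=region), entry["model_id"]
```

Clients then request a path by name and can never pair the wrong ID with the wrong endpoint.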

IAM, SSO, and least-privilege access

Start with IAM Identity Center. Each person should use temporary credentials refreshed through SSO, not permanent access keys on laptops. Create one administrative permission set for bootstrap and another restricted permission set for Codex/Bedrock users.

aws configure sso --profile codex-bedrock
aws sso login --profile codex-bedrock
aws sts get-caller-identity --profile codex-bedrock

For local validation through Runtime, a restrictive starting point is to allow model discovery and invocation only for approved models:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:ListFoundationModels",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": [
        "arn:aws:bedrock:*::foundation-model/openai.gpt-oss-120b-1:0",
        "arn:aws:bedrock:*::foundation-model/openai.gpt-oss-20b-1:0"
      ]
    }
  ]
}

The policy above validates the Runtime path only. For the Mantle path used by OpenAI-compatible clients, review Mantle permissions. For a pilot, the AWS managed policy AmazonBedrockMantleInferenceAccess is a quick starting point. For production, scope access by Project, region, model, and owner.
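
One mechanical step toward that production scoping is pinning the region wildcard in the sample Runtime policy above. The helper below is a hypothetical sketch that only does string manipulation on the policy document; it makes no AWS calls and does not cover Project- or owner-level scoping.

```python
import copy

def pin_region(policy: dict, region: str) -> dict:
    """Return a copy of an IAM policy with bedrock ARN region wildcards pinned."""
    scoped = copy.deepcopy(policy)
    for stmt in scoped["Statement"]:
        resource = stmt.get("Resource")
        if isinstance(resource, list):  # leave bare "*" resources untouched
            stmt["Resource"] = [
                arn.replace("arn:aws:bedrock:*:", f"arn:aws:bedrock:{region}:")
                for arn in resource
            ]
    return scoped
```

Run it once per approved region and attach the resulting documents, rather than shipping the wildcard policy to production.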

Configure Codex with the amazon-bedrock provider

Use the latest Codex version. The Codex changelog documents the amazon-bedrock provider and Bedrock support for OpenAI-compatible providers. For the current AWS profile/SigV4 path, use a recent version and validate the surface in scope: CLI, desktop app, or VS Code extension.

npm install -g @openai/codex@latest
codex --version

In ~/.codex/config.toml, use the Mantle model ID, not the Runtime model ID:

model = "openai.gpt-oss-120b"
model_provider = "amazon-bedrock"

[model_providers.amazon-bedrock.aws]
profile = "codex-bedrock"

The config file is the easy part. Before pointing Codex at monorepos or sensitive code, define allowed repositories, prohibited data, human approvals, logs, budget, Codex version, and incident owner.

Validate before developer rollout

aws bedrock list-foundation-models \
  --profile codex-bedrock \
  --region us-east-1 \
  --query "modelSummaries[?providerName=='OpenAI'].[modelId,modelName,modelLifecycle.status]" \
  --output table

  • Confirm the model appears in the selected region.
  • Confirm Mantle and Runtime endpoint behavior separately when using both.
  • Confirm the SSO profile expires and refreshes correctly.
  • Run a short task against a low-risk repository.
  • Check CloudTrail, budgets, logs, and alarms after the test.
  • Document Codex version, AWS CLI version, region, endpoint, model, and validation date.

Cost controls, Projects, and usage attribution

Install cost controls before the first pilot. In AWS Budgets, create a monthly account budget with alerts at 50%, 80%, and 100% actual spend, plus 100% forecasted spend. Send alerts to a FinOps or platform list, not to a single person.
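
The alert thresholds described above can be expressed in the shape the AWS Budgets API expects for notifications, so the same spec is reused across pilot accounts. This is a sketch assuming the standard Budgets notification fields; pass each entry to CreateNotification or include them in create-budget's notifications-with-subscribers.

```python
def budget_notifications() -> list[dict]:
    """Build 50/80/100% actual plus 100% forecasted budget alerts."""
    thresholds = [
        (50.0, "ACTUAL"),
        (80.0, "ACTUAL"),
        (100.0, "ACTUAL"),
        (100.0, "FORECASTED"),
    ]
    return [
        {
            "NotificationType": kind,
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": pct,
            "ThresholdType": "PERCENTAGE",
        }
        for pct, kind in thresholds
    ]
```

Pair every notification with a FinOps or platform distribution list as the subscriber, per the guidance above.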

| Control | When to use it | Note |
| --- | --- | --- |
| Account budget | Every pilot. | Protects against loops and unexpected usage in an isolated account. |
| Amazon Bedrock service-filtered budget | Every Bedrock pilot. | Separates AI consumption from the rest of AWS spend. |
| Cost Anomaly Detection | Teams with recurring usage. | Flags unusual patterns before month-end. |
| Projects API | Applications using OpenAI-compatible APIs on Bedrock Mantle. | AWS positions Projects as application-level isolation with tags, cost tracking, and observability. |
| Separate accounts or tags | Multi-team environments. | Prevents experiments, development, and production from blending together. |

Security, audit, and private networking

The security review needs to cover who can invoke models, which regions are allowed, what data enters prompts, how logs are retained, how spend is limited, and which network path is used.

  • CloudTrail: enable centralized trails and review Bedrock events from day one.
  • SCPs: block regions or models outside company policy, with controlled exceptions for the pilot account.
  • PrivateLink: AWS documents interface endpoints for bedrock, bedrock-runtime, and bedrock-mantle, including private DNS for bedrock-mantle.{region}.api.aws.
  • Endpoint policies: apply policies when traffic must stay restricted to approved actions and resources.
  • Sensitive data: start with low-risk repositories and tasks before allowing personal, customer, or production data.

Troubleshooting common setup failures

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| model not found | Runtime ID used on Mantle, or Mantle ID used on Runtime. | Use openai.gpt-oss-120b on Mantle and openai.gpt-oss-120b-1:0 on Runtime. |
| AccessDeniedException | Permission set, SCP, preview access, or endpoint policy blocking Bedrock. | Review IAM, SCPs, region, model, Project, preview, and ARN. |
| Codex ignores Bedrock | Old version or wrong model_provider. | Upgrade Codex and confirm model_provider = "amazon-bedrock". |
| SSO works in the terminal but fails in Codex | Different profile or expired session. | Run aws sso login --profile codex-bedrock and confirm the same profile in config. |
| Works in one region, fails in another | Model, preview access, endpoint, quota, or PrivateLink path not validated there. | Validate list-foundation-models, quotas, endpoints, and terms in the target region. |
| Cost rises with no clear owner | No tags, Projects, isolated account, or service budget. | Add a Bedrock budget, Cost Anomaly Detection, and attribution by account, tag, or Project. |

Readiness checklist

  • New AWS account or member account created for the pilot.
  • Root MFA, alternate contacts, and IAM Identity Center configured.
  • Region decision documented for US regions and Canadian residency requirements.
  • OpenAI models listed and invoked in the selected region.
  • Preview status validated for GPT-5.5, Codex, or Managed Agents when relevant.
  • Correct Mantle and Runtime model IDs documented.
  • Codex updated and supported surface defined: CLI, desktop app, or VS Code.
  • Local SSO profile working with temporary credentials.
  • Account budget, Bedrock budget, and shared alerts enabled.
  • CloudTrail, SCPs, PrivateLink, and endpoint policies reviewed for pilot risk.
  • Repository policy, sensitive-data policy, egress, and human review approved.
  • Rollback plan and operational owner defined before wider rollout.

FAQ

Is GPT-5.5 available on Amazon Bedrock?

OpenAI announced OpenAI models, including GPT-5.5, on Amazon Bedrock in limited preview. That is not general availability in every account. Confirm eligibility, region, endpoint, quota, terms, and whether the model appears in your AWS environment before designing production.

Is Codex available on AWS?

OpenAI announced Codex on AWS in limited preview, starting with CLI, desktop app, and VS Code extension configured to use Bedrock. For real usage, validate access, version, region, model, credentials, repositories, and data policy.

Does Managed Agents powered by OpenAI replace Bedrock Agents or AgentCore?

Do not treat it as a generic replacement. AWS positions Bedrock Managed Agents powered by OpenAI as a limited-preview offering designed to work with AgentCore. Validate use case, availability, governance, identity, tools, quotas, and terms before choosing the architecture.

Is gpt-oss on Bedrock the same as using GPT-5.5?

No. gpt-oss is the publicly documented OpenAI path on Bedrock for technical validation today. GPT-5.5 and frontier models are part of the limited-preview announcement and should be treated as account-verification questions.

How should US and Canadian teams choose a region?

Use US regions for low-risk validation when policy allows it. For Canadian residency requirements, keep sensitive data out of the pilot until the required model, endpoint, quota, and network path are approved in the required Canadian region.

Do pilots need PrivateLink and endpoint policies?

Not every pilot needs them, but regulated environments should evaluate PrivateLink and endpoint policies early. AWS documents endpoints for bedrock, bedrock-runtime, and bedrock-mantle.

Do Projects replace separate AWS accounts?

Not completely. Projects help isolate applications inside one account when using OpenAI-compatible APIs on Mantle. AWS accounts remain stronger boundaries for billing, ownership, and governance.

How Elevata helps

This work takes more than a config.toml file. Elevata helps platform, security, and engineering teams validate OpenAI-on-AWS readiness: account and OU design, IAM/SCP, Bedrock access, Mantle vs Runtime, Managed Agents, PrivateLink, CloudTrail, Budgets, Projects, observability, residency, and developer rollout.

If your team wants to prepare AWS before OpenAI usage spreads, validate AWS readiness with Elevata.
