Most teams start their AWS deployment pipeline the same way: create an IAM user, generate an access key, paste the key into GitHub Secrets, and move on. It works, but you’ve just created a long-lived credential with blast radius across your AWS account that lives indefinitely in a third-party system. When that key leaks - and it does - the cleanup is painful.
OpenID Connect (OIDC) eliminates that pattern entirely. Instead of storing secrets, your GitHub Actions workflow requests a short-lived token from GitHub’s identity provider, exchanges it for temporary AWS credentials via STS, and deploys. No keys stored anywhere. This article walks through why to use GitHub Actions for this over AWS-native tooling, how OIDC works mechanically, and the full step-by-step setup.
## Why GitHub Actions Over AWS Native CD Tools
AWS has its own CI/CD suite: CodePipeline for orchestration, CodeBuild for build/test execution, and CodeDeploy for deployment. The combination can do everything GitHub Actions can, but the developer experience is substantially worse for most teams, particularly those already living in GitHub.
| | GitHub Actions | CodePipeline + CodeBuild |
|---|---|---|
| Config format | YAML in your repo | JSON/YAML via CDK or Console |
| Source of truth | Your Git repo | Separate AWS resource |
| Triggers | Any GitHub event (push, PR, label, schedule, etc.) | Push to CodeCommit, S3, ECR, or GitHub (polling or webhook) |
| Marketplace | 20,000+ community actions | Limited native integrations |
| Cost | Free for public repos; 2,000 min/month free for private | Pay per build minute + pipeline execution |
| Multi-cloud / SaaS | First-class (Slack, Vercel, Terraform Cloud, etc.) | AWS-centric |
| PR feedback | Native: comments, check runs, status badges | Requires custom Lambda or SNS to mirror status back |
| Secrets management | GitHub Secrets + OIDC | AWS Secrets Manager or SSM Parameter Store |
The killer argument for GitHub Actions is colocation. Your workflow YAML lives next to your application code, changes in the same PR, gets reviewed in the same diff, and is rolled back with the same `git revert`. With CodePipeline, your pipeline definition lives as a separate CloudFormation stack or CDK app. Infrastructure drift between your app and your pipeline becomes a real problem over time.
That said, there are legitimate reasons to reach for AWS-native tooling. If your organization mandates that all infrastructure stays inside AWS accounts (e.g., for compliance reasons), or if you’re deploying from CodeCommit, CodePipeline is the natural fit. For everyone else, GitHub Actions is the better starting point.
## What Is OIDC and Why Not Access Keys
IAM access keys are long-lived credentials: an access key ID and a secret that never expire unless you explicitly rotate or delete them. They have three problems:
- **Storage risk** - you have to put them somewhere (GitHub Secrets, environment variables, a `.env` file someone commits by accident).
- **Rotation burden** - rotating them means updating every pipeline and environment that uses them, coordinated across teams.
- **Blast radius** - a leaked key keeps working until someone notices and revokes it. Depending on what permissions the IAM user has, that window can be catastrophic.
OIDC solves all three. GitHub acts as an identity provider (IdP) and issues a signed JWT to each workflow run. AWS is configured to trust that IdP and exchange the JWT for temporary STS credentials that expire in as little as 15 minutes. No keys are stored anywhere - not in GitHub, not in your environment, not in a `.env` file.

The trust is also fine-grained. You can configure an IAM role so it can only be assumed by workflows running from a specific GitHub org, repo, branch, or environment. A token generated by `my-org/my-repo` on branch `main` cannot assume a role scoped to `my-org/other-repo` or to a pull request branch.
## How the OIDC Flow Works
Here’s what happens when a GitHub Actions workflow runs that needs AWS credentials:
1. **The runner requests a token** - When the job reaches the `aws-actions/configure-aws-credentials` step, the GitHub Actions runner requests a signed OIDC JWT from GitHub’s token endpoint (`https://token.actions.githubusercontent.com`).
2. **GitHub issues the JWT** - The token is signed by GitHub’s OIDC provider and contains claims about the workflow run: the repository, branch, triggering event, environment, and a `sub` (subject) claim that uniquely identifies the run context (e.g., `repo:my-org/my-repo:ref:refs/heads/main`).
3. **The action calls AWS STS** - The action calls `sts:AssumeRoleWithWebIdentity`, passing the JWT and the ARN of the IAM role you want to assume.
4. **AWS validates the token** - STS fetches GitHub’s public JWKS endpoint to verify the JWT signature. It then checks the role’s trust policy to confirm the `sub`, `aud`, and other claims match the conditions you’ve defined.
5. **STS returns temporary credentials** - If everything checks out, STS returns an `AccessKeyId`, `SecretAccessKey`, and `SessionToken` that expire in 1 hour by default (configurable down to 15 minutes).
6. **The rest of the job uses those credentials** - The action exports the credentials as environment variables. Every subsequent AWS CLI command, CDK deploy, or SDK call in that job automatically picks them up.
The JWT never touches your GitHub Secrets. The temporary credentials never need to be stored anywhere.
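To make the claims concrete, here is a minimal sketch (plain Python, no AWS calls) that encodes and decodes a JWT payload segment the way token libraries do. The claim values are illustrative examples matching the `sub` format described above, not a real GitHub token:

```python
import base64
import json

# Hand-written example payload mirroring the claims GitHub puts in its OIDC JWT.
payload = {
    "iss": "https://token.actions.githubusercontent.com",
    "aud": "sts.amazonaws.com",
    "sub": "repo:my-org/my-repo:ref:refs/heads/main",
    "repository": "my-org/my-repo",
    "ref": "refs/heads/main",
}

# A JWT is three base64url segments (header.payload.signature); this builds
# just the payload segment, unpadded as the JWT spec requires.
segment = base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=")

def decode_segment(seg: bytes) -> dict:
    """Restore base64 padding and decode a JWT segment back into claims."""
    padded = seg + b"=" * (-len(seg) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

claims = decode_segment(segment)
print(claims["sub"])  # repo:my-org/my-repo:ref:refs/heads/main
```

The `sub` claim recovered here is exactly what the IAM trust policy conditions match against in step 4.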
## Setting Up the OIDC Identity Provider in AWS
Before an IAM role can trust GitHub’s tokens, you need to register GitHub as an OIDC identity provider in your AWS account. You only do this once per account.
### Using the AWS Console

1. Go to **IAM → Identity providers → Add provider**.
2. Select **OpenID Connect**.
3. Set the **Provider URL** to `https://token.actions.githubusercontent.com`.
4. Click **Get thumbprint** - AWS fetches and pins the TLS certificate.
5. Set the **Audience** to `sts.amazonaws.com`.
6. Click **Add provider**.
### Using CDK (Python)

If you manage your AWS infrastructure with CDK, register the provider in the same stack as your deployment role:

```python
# infra/stacks/github_oidc_stack.py
from aws_cdk import Stack
from aws_cdk import aws_iam as iam
from constructs import Construct


class GitHubOidcStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        self.github_provider = iam.OpenIdConnectProvider(
            self,
            "GitHubOidcProvider",
            url="https://token.actions.githubusercontent.com",
            client_ids=["sts.amazonaws.com"],
        )
```
**Note:** AWS CDK and the newer AWS SDKs can fetch the thumbprint automatically, so you do not need to hardcode it. If you’re using CloudFormation directly, you must provide the thumbprint manually - it is derived from the TLS certificate chain of the domain serving `https://token.actions.githubusercontent.com/.well-known/openid-configuration`.
## Creating the IAM Role
The IAM role is what your GitHub Actions workflow actually assumes. It needs two things: a trust policy that specifies who can assume it, and a permissions policy that specifies what they can do with it.
### Trust Policy

The trust policy uses the `sub` claim to restrict which workflows can assume the role. The subject format GitHub uses is:

```
repo:{org}/{repo}:ref:refs/heads/{branch}
```

For pull requests it looks like `repo:{org}/{repo}:pull_request`. You can also scope to a GitHub Environment with `repo:{org}/{repo}:environment:{env-name}`.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:ref:refs/heads/main"
        }
      }
    }
  ]
}
```
**Important:** Never use a wildcard (`*`) for the `sub` condition. A wildcard allows any workflow in any repo in your org to assume this role. Always scope to a specific repo and branch.
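To see why the exact-match condition matters, here is a minimal sketch (plain Python, not AWS code) of how a `StringEquals` trust-policy check behaves; the repo names are the hypothetical examples used throughout this article:

```python
# Minimal model of a StringEquals trust-policy evaluation. Not AWS code -
# just illustrates why an exact sub match rejects tokens from other contexts.
def trust_policy_allows(claims: dict, conditions: dict) -> bool:
    """Return True only if every condition key matches its claim exactly."""
    return all(claims.get(key) == expected for key, expected in conditions.items())

conditions = {
    "aud": "sts.amazonaws.com",
    "sub": "repo:my-org/my-repo:ref:refs/heads/main",
}

# A token from the pinned repo and branch is accepted...
assert trust_policy_allows(
    {"aud": "sts.amazonaws.com", "sub": "repo:my-org/my-repo:ref:refs/heads/main"},
    conditions,
)
# ...but tokens from another repo, or from a pull request, are rejected.
assert not trust_policy_allows(
    {"aud": "sts.amazonaws.com", "sub": "repo:my-org/other-repo:ref:refs/heads/main"},
    conditions,
)
assert not trust_policy_allows(
    {"aud": "sts.amazonaws.com", "sub": "repo:my-org/my-repo:pull_request"},
    conditions,
)
```

A wildcard `sub` would make the first check pass for every one of these tokens, which is exactly the failure mode the warning above describes.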
### Permissions Policy

Grant the role only what it needs. Here are least-privilege examples for the three most common deployment targets:

**S3 static site:**
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-site-bucket",
        "arn:aws:s3:::my-site-bucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "arn:aws:cloudfront::123456789012:distribution/ABCDEFGHIJKLMN"
    }
  ]
}
```
**Lambda + CDK deploy:**
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudformation:*",
        "lambda:*",
        "iam:PassRole",
        "iam:GetRole",
        "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
```
**Note:** CDK deployments require broader permissions because CDK synthesizes CloudFormation templates that may create or update IAM roles. Scope these down by resource ARN where you can, or use CDK bootstrap’s permissions boundary to constrain what the CDK execution role can do.
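As one example of scoping by resource ARN, a statement like the following pins the CloudFormation wildcard to a single stack; the stack name `MyAppStack` and the region are hypothetical placeholders, and this is a sketch rather than a complete policy:

```json
{
  "Effect": "Allow",
  "Action": "cloudformation:*",
  "Resource": "arn:aws:cloudformation:us-east-1:123456789012:stack/MyAppStack/*"
}
```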
**ECR image push:**
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "*"
    }
  ]
}
```
### IAM Role in CDK (Python)

```python
# infra/stacks/github_oidc_stack.py
from aws_cdk import Duration, Stack
from aws_cdk import aws_iam as iam
from constructs import Construct


class GitHubOidcStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        github_provider = iam.OpenIdConnectProvider(
            self,
            "GitHubOidcProvider",
            url="https://token.actions.githubusercontent.com",
            client_ids=["sts.amazonaws.com"],
        )

        deploy_role = iam.Role(
            self,
            "GitHubDeployRole",
            assumed_by=iam.WebIdentityPrincipal(
                github_provider.open_id_connect_provider_arn,
                conditions={
                    "StringEquals": {
                        "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
                        "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:ref:refs/heads/main",
                    }
                },
            ),
            role_name="GitHubActionsDeployRole",
            max_session_duration=Duration.hours(1),
        )

        deploy_role.add_to_policy(
            iam.PolicyStatement(
                actions=["s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
                resources=[
                    "arn:aws:s3:::my-site-bucket",
                    "arn:aws:s3:::my-site-bucket/*",
                ],
            )
        )
```
## Configuring the Workflow

With the OIDC provider and IAM role in place, the workflow configuration is straightforward. The two critical pieces are the `permissions` block and the `aws-actions/configure-aws-credentials` step.
```yaml
# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches:
      - main

permissions:
  id-token: write   # required to request the OIDC JWT
  contents: read    # required to check out the repo

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/GitHubActionsDeployRole
          aws-region: us-east-1

      - name: Deploy
        run: |
          # your deployment commands here
```
Key fields:

- `permissions.id-token: write` - this is non-negotiable. Without it, the runner cannot request an OIDC token from GitHub and the step will fail with a permission error.
- `permissions.contents: read` - required for `actions/checkout`. If you declare any `permissions` block, all other permissions default to `none`, so you must be explicit.
- `role-to-assume` - the full ARN of the IAM role you created. Store this in a GitHub Actions variable (not a secret - it’s not sensitive) or hard-code it.
- `aws-region` - required. The region is not inferred from the role ARN.
## Deployment Examples

### Static Site to S3 and CloudFront
```yaml
# .github/workflows/deploy.yml
name: Deploy Static Site

on:
  push:
    branches:
      - main

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ vars.AWS_DEPLOY_ROLE_ARN }}
          aws-region: us-east-1

      - name: Build site
        run: npm ci && npm run build

      - name: Sync to S3
        run: |
          aws s3 sync ./dist s3://my-site-bucket \
            --delete \
            --cache-control "public, max-age=31536000, immutable"

      - name: Invalidate CloudFront cache
        run: |
          aws cloudfront create-invalidation \
            --distribution-id ${{ vars.CLOUDFRONT_DISTRIBUTION_ID }} \
            --paths "/*"
```
The `--delete` flag removes files from S3 that no longer exist in your build output, keeping the bucket in sync with the latest build. The `--cache-control` header tells browsers and CloudFront edge nodes to cache assets aggressively - set a shorter TTL for HTML files if needed.
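One way to apply that shorter HTML TTL, sketched here assuming the same `./dist` output and `my-site-bucket` names as above, is to sync in two passes - immutable caching for fingerprinted assets, a short TTL for HTML:

```yaml
- name: Sync to S3 (split cache policy)
  run: |
    # Long-lived caching for hashed assets; HTML is excluded from this pass.
    aws s3 sync ./dist s3://my-site-bucket \
      --delete --exclude "*.html" \
      --cache-control "public, max-age=31536000, immutable"
    # Short TTL for HTML so new deploys are picked up quickly.
    aws s3 sync ./dist s3://my-site-bucket \
      --exclude "*" --include "*.html" \
      --cache-control "public, max-age=60"
```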
### Lambda Function with CDK

For Lambda deployments using CDK, see *CI/CD for Lambda Functions with GitHub Actions*, which covers the full workflow, including running tests, caching dependencies, and deploying with `cdk deploy`.
### Docker Image to ECR

```yaml
# .github/workflows/deploy.yml
name: Build and Push to ECR

on:
  push:
    branches:
      - main

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ vars.AWS_DEPLOY_ROLE_ARN }}
          aws-region: us-east-1

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build, tag, and push image
        env:
          REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          REPOSITORY: my-app
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $REGISTRY/$REPOSITORY:$IMAGE_TAG .
          docker push $REGISTRY/$REPOSITORY:$IMAGE_TAG
          docker tag $REGISTRY/$REPOSITORY:$IMAGE_TAG $REGISTRY/$REPOSITORY:latest
          docker push $REGISTRY/$REPOSITORY:latest
```
Tagging with `github.sha` gives you an immutable, traceable tag for every image. The `latest` tag is a convenience for services that pull the newest image on restart, but you should reference the SHA tag in any deployment manifests (ECS task definitions, Kubernetes pods, etc.) for reproducibility.
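In an ECS task definition that would look something like the fragment below; the account ID, region, and repository are the running examples from this article, and `<github-sha>` is a placeholder for the commit SHA tag pushed above:

```json
{
  "family": "my-app",
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:<github-sha>",
      "essential": true
    }
  ]
}
```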
## Multi-Environment Deployments
Real deployments need more than one environment. The cleanest pattern is to use separate IAM roles per environment (scoped to different branches) combined with GitHub Environments for production gating.
### Branch-Based Role Assumption

Create a separate IAM role for each environment, with a trust policy scoped to the matching branch:
| Environment | Branch | IAM Role |
|---|---|---|
| Staging | develop | GitHubActionsDeployRole-Staging |
| Production | main | GitHubActionsDeployRole-Production |
```yaml
# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches:
      - main
      - develop

permissions:
  id-token: write
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: ${{ github.ref_name == 'main' && 'production' || 'staging' }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ github.ref_name == 'main' && vars.AWS_PROD_ROLE_ARN || vars.AWS_STAGING_ROLE_ARN }}
          aws-region: us-east-1

      - name: Deploy
        run: |
          # deployment commands
```
### GitHub Environments for Production Approval

In your GitHub repository settings, go to **Settings → Environments → production** and add a **Required reviewers** rule. When a workflow run targets the `production` environment, GitHub will pause it and wait for an approved reviewer before executing. This gives you a manual gate on every production deployment without needing a separate approval step in your pipeline code.
The IAM trust policy for the production role can also be scoped to the GitHub Environment rather than (or in addition to) the branch:

```json
"token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:environment:production"
```

This means a workflow can only assume the production role if it’s running within the `production` GitHub Environment - not just from the `main` branch in an ad-hoc workflow.
## Security Best Practices

- **Always scope `sub` to a specific repo and branch** - never use `*`. A wildcard trust policy lets any workflow in your org (or worse, a fork) assume the role.
- **Use separate IAM roles per environment** - your staging deploy role should never have permission to touch production resources, even if someone manages to trigger it on the wrong branch.
- **Use GitHub Environments for production gates** - required reviewers give you a human approval step without any extra infrastructure.
- **Prefer `vars` over `secrets` for role ARNs** - role ARNs are not sensitive. Storing them as variables (not secrets) makes them visible in the workflow UI and easier to audit. Reserve GitHub Secrets for things that are actually secret.
- **Set `max-session-duration` to the minimum needed** - the default is 1 hour. If your deploy takes 5 minutes, set it to 15 minutes. Shorter session windows limit the damage if a token is somehow captured mid-flight.
- **Audit with CloudTrail** - every `AssumeRoleWithWebIdentity` call and subsequent API call is logged. Set up a CloudTrail alert for unexpected role assumptions from the GitHub OIDC provider.
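As a starting point for that alert, here is a sketch of an EventBridge event pattern (assuming CloudTrail management events are enabled in the account) that matches web-identity role assumptions; routing it to an SNS topic or alarm is left to your setup:

```json
{
  "source": ["aws.sts"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventName": ["AssumeRoleWithWebIdentity"]
  }
}
```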
## The Takeaway

Replacing IAM access keys with GitHub OIDC is one of the highest-leverage security improvements you can make to a deployment pipeline. It eliminates a class of credential exposure entirely, requires no ongoing key rotation, and gives you fine-grained control over which workflows can deploy to which environments. Combined with GitHub Actions’ workflow-as-code model - where your pipeline lives in the same repo as your application - you get a deployment setup that is auditable, reviewable, and straightforward to operate. And once you’ve done the setup, it’s easy to replicate across projects by templating the CDK stack and copying the workflow YAML.