
AWS Multi-Account Deployment with Azure DevOps Pipeline
Stewart Moreland
Modern cloud deployments require robust CI/CD pipelines that can safely deploy to multiple environments while maintaining security and compliance. This guide demonstrates how to configure Azure DevOps pipelines for automated deployments across multiple AWS accounts (Development, Staging, and Production) using OIDC federation, IAM role assumption, and infrastructure-as-code tools. You'll learn how to eliminate long-lived credentials, implement branch-based deployments, and establish secure multi-account architectures that work with AWS CDK, Terraform, or Amplify.
What You'll Learn
This guide covers IAM role configuration, Azure DevOps service connections with OIDC, multi-stage pipeline setup, branch-to-environment mapping, and security best practices for production-grade AWS deployments.
IAM Permissions and Roles
Multi-Account Architecture
The following diagram illustrates a typical multi-account deployment architecture where Azure DevOps uses OIDC federation to assume dedicated IAM roles in each target AWS account:
Role Setup per AWS Account
In each AWS account (development, staging, production), create an IAM role that the Azure DevOps pipeline will assume during deployments. Each role should be granted the minimal permissions required to deploy your application and infrastructure in that environment (following AWS least-privilege guidelines aws.amazon.com).
Least Privilege Principle
Each IAM role should only have the minimum permissions necessary for deployments in its environment. For example, attach policies for CloudFormation stack operations, Lambda deployments, S3 uploads, etc., as needed by your IaC tool.
Table 1: IAM Role Configuration
| AWS Account (Environment) | IAM Role Name | Trusted Principal (for AssumeRole) | Permissions (Example) |
|---|---|---|---|
| Dev Account (Development) | AzureDevOpsDevRole | Azure DevOps OIDC or CI User Principal | Policy: Allow dev resource creation (e.g. CloudFormation, Amplify/Terraform actions) aws.amazon.com. |
| Staging Account | AzureDevOpsStageRole | Azure DevOps OIDC or CI User Principal | Policy: Allow staging deploy (similar actions but scoped to staging resources). |
| Prod Account (Production) | AzureDevOpsProdRole | Azure DevOps OIDC or CI User Principal | Policy: Restrict to production deployments (e.g. deploy CloudFormation stacks, but no dev/test services). |
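To make the "Permissions (Example)" column concrete, a dev-account deployment policy might look like the following sketch. The action list, account ID `111111111111`, region, and resource names are placeholders to adapt to whatever your stacks actually create:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CloudFormationDeploy",
      "Effect": "Allow",
      "Action": [
        "cloudformation:CreateStack",
        "cloudformation:UpdateStack",
        "cloudformation:DeleteStack",
        "cloudformation:DescribeStacks",
        "cloudformation:CreateChangeSet",
        "cloudformation:ExecuteChangeSet",
        "cloudformation:DescribeChangeSet"
      ],
      "Resource": "arn:aws:cloudformation:us-east-1:111111111111:stack/dev-*"
    },
    {
      "Sid": "ArtifactBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::dev-deploy-artifacts",
        "arn:aws:s3:::dev-deploy-artifacts/*"
      ]
    }
  ]
}
```

Scoping the `Resource` ARNs to a `dev-` prefix is one simple way to keep the role from touching stacks outside its environment.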
Trust Relationships and OIDC Federation
Each IAM role's trust policy must allow Azure DevOps to assume the role. The recommended approach in 2025 is to use OIDC federation instead of static AWS keys aws.amazon.com aws.amazon.com. With OIDC, you establish a trust between the AWS IAM role and Azure DevOps's identity provider:
Recommended: OIDC Federation
OIDC federation eliminates the need for long-lived AWS access keys, significantly improving security. Temporary credentials are automatically generated at runtime and expire after use.
- Azure DevOps OIDC Provider: First, add Azure DevOps as an OIDC identity provider in AWS IAM (the provider URL takes the form `vstoken.dev.azure.com/<OrganizationGUID>`). This lets AWS recognize tokens issued by Azure DevOps aws.amazon.com.
- Trust Policy Conditions: Configure the role's trust policy to allow `sts:AssumeRoleWithWebIdentity` for that OIDC provider, and constrain it to your specific Azure DevOps pipeline or service connection. For example, include a condition on the token's subject that only allows a specific Azure DevOps service connection aws.amazon.com. The subject format from Azure DevOps is `sc://{OrgName}/{ProjectName}/{ServiceConnectionName}` aws.amazon.com. By binding to this, even if someone else obtains an OIDC token, only your pipeline's service connection can assume the role.
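Putting the two points above together, each deployment role's trust policy might look like this sketch. The account ID, organization GUID, and `sc://` subject value are placeholders; depending on your setup, you may also need an `aud` condition:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111111111111:oidc-provider/vstoken.dev.azure.com/aaaabbbb-cccc-dddd-eeee-ffff00001111"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "vstoken.dev.azure.com/aaaabbbb-cccc-dddd-eeee-ffff00001111:sub": "sc://MyOrg/MyProject/AWS-Dev"
        }
      }
    }
  ]
}
```

The `StringEquals` condition on `:sub` is what binds the role to a single service connection rather than the whole Azure DevOps organization.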
If OIDC cannot be used, an alternative is cross-account IAM users and roles. For instance, you might create a dedicated IAM user in a central AWS account (or in each account) with access keys stored in Azure DevOps. This user would have permission (via IAM policy) to assume the deployment roles in target accounts using AWS STS dev.to. Each target account’s role would then trust the AWS account (or user) where the keys reside. While this method works, it involves managing long-lived access keys, so use it only if OIDC federation is unavailable. In either case, the IAM roles in dev, staging, and prod accounts should have identical names or clearly labeled names, and their policies tuned to each environment’s needs. For example, the dev role might allow creating test resources that the prod role does not. Keeping roles and policies environment-specific helps enforce separation between accounts.
Azure DevOps Pipeline Configuration
Setting up the Azure DevOps pipeline involves securely storing AWS credentials, defining multi-stage deployment steps, and configuring triggers for the branch workflow. The pipeline will be defined in YAML to include separate stages for Dev, Staging, and Prod deployments, with conditions to deploy to the correct AWS account based on the branch.
AWS Service Connections & Credentials
Instead of hard-coding AWS keys in the pipeline, use Azure DevOps Service Connections or variable groups to manage credentials securely. Install the AWS Toolkit for Azure DevOps extension, which adds an AWS service connection type aws.amazon.com. Create a service connection for each target AWS account (e.g., one for Dev, one for Staging, one for Prod) aws.amazon.com aws.amazon.com.
AWS Toolkit Installation
Install the AWS Toolkit for Azure DevOps extension from the Visual Studio Marketplace. This provides native AWS service connection types and deployment tasks optimized for Azure Pipelines.
You have three options for credential configuration aws.amazon.com:
- OIDC AssumeRole (Recommended): If you set up OIDC, configure the service connection to use the Azure Pipelines OIDC token to assume the IAM role in AWS. In the service connection settings, choose Authentication method: OpenID Connect (available in the latest AWS Toolkit) and provide the Role ARN of the AWS IAM role for that environment. No static secret is needed; at runtime, the pipeline obtains temporary credentials from AWS STS using OIDC aws.amazon.com. Ensure the service connection is scoped only to the Azure DevOps project that needs it and has the correct role ARN.
- Access Key + AssumeRole: Alternatively, supply an AWS Access Key ID and Secret for a central IAM user (or an IAM user in the target account), and additionally specify a Role to Assume (the ARN of the environment-specific IAM role) in the service connection. The AWS Toolkit uses the static credentials to assume the role at runtime aws.amazon.com. This is useful when not using OIDC; however, secure the access key in Azure DevOps (the service connection stores it encrypted, and you can restrict who can use or edit the connection).
- Access Key (Direct): As a simpler but less secure fallback, you could use an IAM user's keys directly in each service connection (one per account) aws.amazon.com. The pipeline then acts with that user's permissions, without an assume-role step. This is straightforward but means managing multiple sets of keys, and it is not recommended if cross-account roles or federation can be used.
Once service connections are set up (e.g., named AWS-Dev, AWS-Staging, AWS-Prod), you reference them in the pipeline YAML. Azure Pipelines tasks (from the AWS Toolkit) allow specifying the service connection by name. For example, the CloudFormation and CodeDeploy tasks have an awsCredentials field for the service connection name aws.amazon.com aws.amazon.com. For Terraform or CDK, you can either use the AWS CLI task with the service connection or export environment variables from the connection.
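For example, an AWS Toolkit task references a service connection through its `awsCredentials` input. A minimal sketch (the stack name and template path are placeholders):

```yaml
- task: CloudFormationCreateOrUpdateStack@1
  displayName: 'Deploy stack to Dev'
  inputs:
    awsCredentials: 'AWS-Dev'        # Name of the service connection created above
    regionName: 'us-east-1'
    stackName: 'my-app-dev'
    templateSource: 'file'
    templateFile: 'templates/app.yml'
```

The staging and prod stages would use the same task with `AWS-Staging` or `AWS-Prod` instead.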
Pipeline Stages and Steps
The pipeline will be a multi-stage YAML pipeline, with each stage corresponding to an AWS environment.
Pipeline Flow Visualization
Below is an example YAML structure with three stages and branch-based conditions:
```yaml
# azure-pipelines.yml
trigger:
  branches:
    include:
      - feat/*
      - fix/*
      - release/*
      - main

pr:
  branches:
    include:
      - feat/*
      - fix/*
      - release/*
      - main

stages:
  - stage: Dev
    displayName: 'Deploy to Development'
    condition: and(succeeded(), or(startsWith(variables['Build.SourceBranch'], 'refs/heads/feat/'), startsWith(variables['Build.SourceBranch'], 'refs/heads/fix/'), startsWith(variables['Build.SourceBranch'], 'refs/heads/release/'), startsWith(variables['Build.SourceBranch'], 'refs/heads/main')))
    jobs:
      - job: DeployDev
        displayName: 'Deploy to Dev AWS Account'
        steps:
          # Checkout code, set up environment, etc.
          - task: AWSCLI@1
            displayName: 'Assume AWS Dev Role and verify identity'
            inputs:
              awsCredentials: 'AWS-Dev'   # Service connection name for Dev
              regionName: 'us-east-1'
              awsCommand: 'sts'
              awsSubCommand: 'get-caller-identity'
          - script: npm install && npm run build
            displayName: 'Build application'
          # AWSShellScript@1 runs a script with credentials from the service connection,
          # which is needed here because cdk is not an AWS CLI command
          - task: AWSShellScript@1
            displayName: 'Deploy to Dev (CDK deploy)'
            env:
              AWS_ACCOUNT_ID: '123456789012'   # Dev account ID (if needed)
            inputs:
              awsCredentials: 'AWS-Dev'
              regionName: 'us-east-1'
              scriptType: 'inline'
              inlineScript: 'npx cdk deploy --all --require-approval never'
          # (If Terraform) use Terraform tasks or the CLI similarly, e.g. init/plan/apply
          # (If Amplify) run Amplify CLI commands or the AWS CLI to push, etc.

  - stage: Staging
    displayName: 'Deploy to Staging'
    condition: and(succeeded(), or(startsWith(variables['Build.SourceBranch'], 'refs/heads/release/'), startsWith(variables['Build.SourceBranch'], 'refs/heads/main')))
    jobs:
      - job: DeployStaging
        steps:
          - task: AWSCLI@1
            displayName: 'Assume AWS Staging Role and verify identity'
            inputs:
              awsCredentials: 'AWS-Staging'
              regionName: 'us-east-1'
              awsCommand: 'sts'
              awsSubCommand: 'get-caller-identity'
          # ... Deployment steps for staging (CDK deploy, Terraform apply, etc.)

  - stage: Prod
    displayName: 'Deploy to Production'
    condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - job: DeployProd
        steps:
          - task: AWSCLI@1
            displayName: 'Assume AWS Prod Role and verify identity'
            inputs:
              awsCredentials: 'AWS-Prod'
              regionName: 'us-east-1'
              awsCommand: 'sts'
              awsSubCommand: 'get-caller-identity'
          # ... Deployment steps for production
```
In this YAML:
- We define CI triggers on the relevant branch patterns (`feat/*`, `fix/*`, `release/*`, `main`) so that pushes to those branches start the pipeline learn.microsoft.com. We also define PR triggers on the same patterns so that opening a Pull Request from or into those branches runs the pipeline for validation (note: for Azure Repos, use branch policies for PR validation, since YAML `pr` triggers apply to GitHub/Bitbucket by default learn.microsoft.com).
- The pipeline has three stages: Dev, Staging, and Prod. Each stage uses a `condition` to run only for the appropriate branches. For example, the Dev stage runs for any feature (`feat/`), fix, release, or main branch build. (In a strict setup, you might limit the Dev stage to just feature/fix branches; here it also runs for the others to ensure a full pipeline run in those cases. Adjust according to your needs.)
- Within each stage, the jobs deploy to the corresponding AWS account. We use the AWS service connection (`awsCredentials`) specific to that environment to authenticate AWS CLI/SDK commands aws.amazon.com. For example, the Dev stage sets `awsCredentials: 'AWS-Dev'`, which behind the scenes either uses the stored access key or OIDC to assume the Dev IAM role.
- The example shows a CDK deployment (`cdk deploy`) in the Dev stage. For Terraform, you could instead use tasks like `TerraformCLI@0` (if using Microsoft's Terraform extension) or simply run `terraform init`/`plan`/`apply` in script steps. The principle is the same: ensure AWS credentials are active (via the service connection, or by exporting `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` environment variables from a secure variable group) before calling Terraform. For Amplify Gen 2 or CDK, you might run `npm ci` and then an Amplify CLI command (or `amplify push`) if appropriate. The pipeline logic doesn't change with the IaC tool; only the deploy step commands differ.
- Each stage can also include pre-deployment steps such as running tests or building project artifacts. For brevity, the snippet deploys directly. In a real scenario, you might have a separate build stage or job that compiles the code, plus a step to publish artifacts (though for IaC-driven deployments, the latest code is often deployed directly).
Pipeline Triggers and Flow: With the above configuration, here’s how it works for each scenario:
- When a developer pushes a new feature branch (e.g., `feat/add-login`), the pipeline triggers and runs the Dev stage, deploying the app to the Dev AWS account (using the Dev role). This allows testing the feature in an isolated dev environment. The Staging and Prod stages are skipped due to their conditions.
- When changes are merged into a release branch (e.g., `release/v1.2`), the pipeline runs the Dev and Staging stages (per the conditions; the Dev stage is not strictly needed here but runs if its condition allows, and you may refine the logic). Critically, the Staging stage runs, deploying to the Staging AWS account. This simulates a pre-production test of the release.
- When code is merged to `main`, the pipeline runs all stages, or at least the Prod stage. In our example, the conditions ensure the Prod stage definitely runs on main, deploying to the Production AWS account. (You might choose to still run the Dev or Staging stages on main as smoke tests; this is configurable via conditions.)
Example YAML Snippets: Below are abbreviated examples of what tasks in each stage might look like for different IaC tools:
- Using AWS CDK (TypeScript/Python example):

```yaml
steps:
  - script: npm install -g aws-cdk@2.x   # Install the CDK CLI
    displayName: 'Install CDK'
  - script: npm ci && npm run build
    displayName: 'Build CDK Application'
    workingDirectory: 'infra-cdk/'       # Path to the CDK project
  # AWSShellScript@1 runs a script with credentials from the service connection
  - task: AWSShellScript@1
    displayName: 'Deploy CDK Stack'
    inputs:
      awsCredentials: 'AWS-Dev'
      regionName: 'us-west-2'
      scriptType: 'inline'
      inlineScript: |
        cd infra-cdk
        cdk deploy --all --parameters Env=Dev --require-approval never
```

- Using Terraform:

```yaml
steps:
  - task: TerraformInstaller@0   # Ensure Terraform is installed on the agent
    inputs:
      terraformVersion: '1.5.0'
  # Run all Terraform commands under the Dev service connection's credentials
  - task: AWSShellScript@1
    displayName: 'Terraform init/plan/apply (Dev)'
    inputs:
      awsCredentials: 'AWS-Dev'
      regionName: 'us-west-2'
      scriptType: 'inline'
      inlineScript: |
        cd infra-tf
        terraform init -backend-config="key=dev.tfstate"
        terraform plan -var-file="dev.tfvars" -out dev.plan
        terraform apply dev.plan
```

In the Terraform example, the `AWSShellScript@1` task supplies the AWS credentials from the service connection to the Terraform commands (alternatively, use a Terraform task that offers an `awsServiceConnection` input). The steps initialize and apply the configuration for the dev environment. Similar blocks would exist for staging/prod with different variable files or backend state configurations.
Branch and PR triggers: The YAML `trigger` block ensures CI runs on pushes to the specified branches. The `pr` trigger ensures that when a Pull Request is opened (for example, a PR from a feature branch into main), the pipeline runs. For PR validations, you typically don't want to deploy to production or persistent environments until changes are merged. Handle this by adding conditions to jobs or steps that check `Build.Reason`. For instance, you might wrap deployment steps in `condition: ne(variables['Build.Reason'], 'PullRequest')` so that in a PR build the pipeline goes through the motions (builds, perhaps a `terraform plan` or `cdk synth`) but skips the actual `terraform apply` or `cdk deploy` dev.to. This provides early feedback (linting, unit tests, etc.) without affecting dev/staging accounts on every PR. Once the PR is merged, the push trigger performs the real deployment.
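For example, the Terraform apply step can be gated so PR builds stop after the plan. This is a sketch; credentials are assumed to be configured as in the Terraform example above:

```yaml
- script: terraform plan -var-file="dev.tfvars" -out dev.plan
  displayName: 'Terraform plan (runs in PR builds too)'
  workingDirectory: 'infra-tf/'
- script: terraform apply dev.plan
  displayName: 'Terraform apply (skipped for PR builds)'
  workingDirectory: 'infra-tf/'
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
```

A PR build then surfaces the plan output for review without changing any AWS resources.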
Branching Strategy and Environment Mapping
Your branching strategy is tightly coupled with your deployment pipeline. The goal is to map branches to the corresponding AWS environments so that each push updates the right stage of the application.
Git Branch Workflow
The following diagram illustrates how different Git branches flow through environments:
Table 2: Branch-to-Environment Mapping
| Git Branch Name (Pattern) | Target AWS Account / Environment | Pipeline Deployment Stage |
|---|---|---|
| `feat/**` (feature branches) | Development account ("Dev") | Dev stage (deploy to dev resources) |
| `fix/**` (bugfix/hotfix branches) | Development account ("Dev") | Dev stage (same as feature branches) |
| `release/**` (release prep branches) | Staging account ("Staging") | Staging stage (deploy release candidate) |
| `main` (primary trunk for prod) | Production account ("Prod") | Prod stage (deploy to production) |
This setup reflects a common three-tier environment strategy aws.amazon.com. Developers work on feature or fix branches for new features and bug fixes; those branches are deployed to a shared Dev environment in AWS for integration testing. When a set of changes is ready for wider testing, they are merged into a release/x branch, which triggers deployment to the Staging environment – a mirror of production for final QA and user acceptance testing. Finally, merging into main triggers deployment to Production.
Branch-Based Deployment Strategy
This automated approach means developers never manually trigger deployments. All promotions through environments are controlled by Git operations (merges, pushes), providing full audit trails and preventing unauthorized deployments.
Strategy Considerations
- You can adjust branch mappings based on your team's Git flow. For example, if you use a `develop` branch as an integration branch, you might deploy `develop` to Dev and reserve `feat/*` for personal dev environments. The concept, however, remains a one-to-one mapping: each branch (or branch type) corresponds to one target environment to deploy to.
- Multiple Feature Branches: If multiple feature branches are active, each push deploys to the Dev account and could overwrite the previous feature's deployment. In practice, teams mitigate this by using unique stack names or namespaces per branch (e.g., isolating resources by branch name) or by allowing only one feature test at a time in the shared dev environment. An alternative is ephemeral environments (each feature branch deploys to its own temporary environment), but that requires more complex scripting and resource provisioning logic. The table above assumes a simpler shared dev environment.
- Release and Hotfix Branches: In some workflows (like GitFlow), you might also have `hotfix/**` branches. These could be mapped to Staging or even Prod depending on policy. Our triggers include `fix/**` under Dev, assuming those are short-lived fixes going into development. A production hotfix headed straight for main should still be tested in staging via a release branch or similar.
- Main Branch to Prod: The `main` branch is the source of truth for production. Only code that has passed through Dev and Staging (via PRs and release branches) should end up in `main`. The pipeline ensures that a push (or merge) to main deploys immediately to the Production AWS account, keeping production in sync with the main codebase.
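The per-branch namespacing idea above can be sketched by deriving a namespace from the branch name. This assumes the AWS Shell Script task from the AWS Toolkit and a CDK app that reads a `branch` context value to suffix its stack names (both are illustrative assumptions, not part of the pipeline shown earlier):

```yaml
variables:
  # For a branch feat/add-login, Build.SourceBranchName resolves to 'add-login'
  branchSuffix: $(Build.SourceBranchName)

steps:
  - task: AWSShellScript@1
    displayName: 'Deploy branch-scoped dev stacks'
    inputs:
      awsCredentials: 'AWS-Dev'
      regionName: 'us-east-1'
      scriptType: 'inline'
      # The CDK app is assumed to read this context value and name stacks accordingly
      inlineScript: 'npx cdk deploy --all --context branch=$(branchSuffix) --require-approval never'
```

Each feature branch then gets its own stacks in the shared Dev account instead of overwriting a single deployment.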
Production Protection
Implement branch protection rules on main to require pull request reviews and passing status checks before merging. This ensures no code reaches production without proper validation.
This branch-environment linkage, combined with pipeline conditions, means your CI/CD system automatically promotes code through the environments: developers don’t need to manually trigger deployments to dev or staging – it’s all driven by Git events. It also enforces discipline: only by merging to the right branch does code get to the next environment (e.g., you can’t accidentally deploy a feature straight to prod without it going through a release branch).
Security Considerations
Security is paramount when connecting Azure DevOps to AWS. Here are best practices to keep your pipeline secure:
- Eliminate Long-Lived AWS Keys: Prefer OIDC federation or short-lived tokens over static credentials. The Azure DevOps OIDC integration with AWS (available via the AWS Toolkit extension v1.15+ as of late 2024) allows the pipeline to obtain temporary credentials from AWS STS at runtime aws.amazon.com aws.amazon.com. This means there are no hardcoded AWS secrets in Azure DevOps – even if the pipeline YAML or logs are visible, the token cannot be reused (it's short-lived and scoped). Configure the IAM role's trust as described so that only Azure DevOps (and specifically your project's service connection) can assume it aws.amazon.com aws.amazon.com.
OIDC Authentication Flow
The following sequence diagram illustrates how Azure Pipelines obtains temporary AWS credentials using OIDC federation:
- Secure Storage of Secrets: For any sensitive data (access keys if you must use them, Terraform state access tokens, etc.), use Azure DevOps Variable Groups or Secure Files. In the referenced Terraform pipeline dev.to, a secure file named "credentials" stores the AWS credentials and is downloaded at runtime. Azure DevOps masks these values in logs and hides them from users without permission.
Never Commit Secrets
Do not commit secrets to the repo or pipeline YAML. Always use Azure DevOps Variable Groups or Key Vault integration. Restrict who can edit the pipeline or service connection – only authorized users or build service accounts should access production credentials.
- Least Privilege IAM Policies: Each AWS IAM role for deployment should have the minimal set of permissions. For example, if deployments are done via CloudFormation stacks, the role might only need CloudFormation actions, S3 access (for templates/artifacts), and whatever specific services your stacks create (e.g., if your stack includes Lambda and DynamoDB, the role needs Lambda deploy and DynamoDB table-creation permissions). Avoid broad policies like AdministratorAccess for CI roles in production. In the dev account you might allow wider access if it's a sandbox, but in prod, lock it down. That way, even if the pipeline or its credentials were compromised, the blast radius is limited aws.amazon.com.
- AWS Multi-Factor & Access Controls: Azure DevOps service connections (especially key-based ones) won't use MFA, so compensate with tight IAM policies and trust conditions. If using cross-account keys, consider limiting the IAM user's rights; for example, allow the user only to assume specific roles and nothing else dev.to. In the trust policy of the target roles, specify the user's ARN as the principal, and perhaps add source IP conditions if your agents have fixed egress IPs (though that's often not the case for cloud-hosted agents).
- Pipeline Approvals & Checks: Azure DevOps allows adding approvals or manual intervention steps. It's wise to require a manual approval before deploying to Production. In YAML, this can be achieved by targeting an Environment with an approval gate or by inserting a `ManualValidation@0` task (as seen in the referenced Terraform pipeline, where a manual approval job ran before apply) dev.to. For instance, you might require approval from a tech lead after the Staging stage succeeds. This prevents changes from automatically reaching prod without human oversight, even when the main branch is updated.
Production Approvals
Configure Azure Pipelines Environments with approval gates for production deployments. Require sign-off from designated approvers (tech leads, security team) before any production changes are applied.
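As an illustration, production approval can be wired up in two ways: bind the deployment job to an Azure Pipelines Environment whose approvals are configured in the UI, or insert a `ManualValidation@0` step in an agentless job. Both patterns are sketched below; the environment name and notification address are placeholders:

```yaml
- stage: Prod
  jobs:
    # Option 1: deployment job bound to an Environment with an approval check
    - deployment: DeployProd
      environment: 'aws-production'   # Approvals/checks attached to this Environment in the UI
      strategy:
        runOnce:
          deploy:
            steps:
              - script: echo "Production deploy steps here"

    # Option 2: explicit manual validation gate before deploy steps in a later job
    - job: WaitForApproval
      pool: server                    # ManualValidation@0 must run in an agentless (server) job
      steps:
        - task: ManualValidation@0
          timeoutInMinutes: 1440      # Fail the run if not approved within 24 hours
          inputs:
            notifyUsers: 'techlead@example.com'
            instructions: 'Review Staging results, then approve the production deployment.'
```

In practice you would use one option or the other; the Environment-based approach keeps approver lists out of the YAML and in the project settings.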
- Audit and Logging: Enable CloudTrail in AWS to log assume-role events and deployments triggered by the pipeline. This helps trace which Azure DevOps job assumed the role and what changes were made. Azure DevOps logs show which steps ran, and you can correlate them with AWS logs. Also ensure that any credentials or tokens output by the pipeline (like Terraform plan output) do not contain secrets; use `captureAsSecuredVars` or similar settings in AWS Toolkit tasks when capturing stack outputs that are sensitive aws.amazon.com.
- Rotate Credentials: If you use static AWS access keys (for example, in a pinch where OIDC isn't possible), rotate them regularly. Azure DevOps doesn't auto-rotate stored secrets, so you'll need a process to create a new key in AWS, update the service connection, and invalidate the old key. Using short-lived credentials (OIDC or assume-role via STS) largely obviates this concern by design aws.amazon.com.
Conclusion
By following these practices – isolating accounts per environment, using Azure DevOps service connections with assume-role, gating production deploys, and securing secrets – you establish a robust, secure CI/CD pipeline. The pipeline will accelerate deployments to AWS while respecting the security boundaries between dev, staging, and prod, all in an infrastructure-as-code manner agnostic of whether you use AWS Amplify, CDK, Terraform, or other IaC frameworks.
Key Takeaways
Each environment's infrastructure can be managed by the IaC tool of choice, but the promotion and deployment logic remains consistent and centrally managed in Azure DevOps aws.amazon.com. This approach (multi-account AWS with Azure DevOps) is well-supported as of 2025 and leverages the latest best practices for cloud deployment and DevOps integration.
Production-Ready Architecture
This multi-account deployment strategy with OIDC federation represents the current industry best practice for secure, automated cloud deployments. It provides the foundation for scalable, compliant, and maintainable infrastructure operations.
Sources
- Bruneau, M. "How to federate into AWS from Azure DevOps using OpenID Connect," AWS Blog (Dec 2024), aws.amazon.com
- Singh, R. "Use OpenID Connect with AWS Toolkit for Azure DevOps to perform deployments," AWS DevOps Blog (Jul 2024), aws.amazon.com
- AWS DevOps Blog. "Use the AWS Toolkit for Azure DevOps to automate deployments" (2020), aws.amazon.com
- Sathish Kumar. "Terraform in AWS multi-account via Azure DevOps pipelines," Dev.to (Sep 2022), dev.to
- Microsoft Docs. "Azure Pipelines YAML schema: Triggers" (2023), learn.microsoft.com