Terraform 1.7 state management across multi-region AWS deployments
We're managing infrastructure across us-east-1, eu-west-1, and ap-southeast-1 with Terraform 1.7. Currently using S3 + DynamoDB for state locking, but running into race conditions during parallel terraform apply runs.
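For reference, each region's root module points at a backend block roughly like this (bucket, table, and key names below are placeholders, not our real ones):

```hcl
# Sketch of our per-region backend config; names are illustrative.
terraform {
  backend "s3" {
    bucket         = "example-tfstate-us-east-1"
    key            = "network/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks"  # lock table for this backend
    encrypt        = true
  }
}
```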
Has anyone successfully implemented remote state backends with proper locking for multi-region workflows? Considering:
- Terraform Cloud (vs self-hosted)
- S3 bucket versioning + DynamoDB TTL tweaks
- Custom lock mechanisms via Lambda
Our pipeline runs ~50 applies/day across regions. Looking for battle-tested solutions that scale without constant troubleshooting. Cost/complexity trade-offs welcome.
What's your setup?
Edited at 26 Mar 2026, 08:20
Terraform Cloud is worth it if you're hitting race conditions at scale—the state locking is rock solid and you get built-in cost estimation as a bonus. That said, if you want to stick with S3, the real issue is usually DynamoDB TTL interfering with locks. Make sure your lock items don't have TTL set (counterintuitive, I know). Also consider using terraform_remote_state data sources with explicit depends_on to serialize applies across regions—50 parallel runs might actually be the problem, not the locking itself. What's your current DynamoDB provisioning looking like?
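To make the TTL point concrete, the lock table should look roughly like this (names are made up). The important part is what's absent: no ttl block, so DynamoDB never expires live lock items out from under a running apply.

```hcl
# Illustrative lock table for the S3 backend; deliberately no ttl {} block.
resource "aws_dynamodb_table" "tf_locks" {
  name         = "example-tf-locks"
  billing_mode = "PAY_PER_REQUEST" # avoids throttling on bursty lock traffic
  hash_key     = "LockID"          # the S3 backend locks on this key

  attribute {
    name = "LockID"
    type = "S"
  }
}
```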
Thanks for the insight! Yeah, Terraform Cloud's built-in locking does sound appealing for our scale. Quick question though—how's the latency for you across regions? We're worried about apply times ballooning with the extra API calls to TC.
Have you looked into Terraform's -parallelism flag? We were seeing similar race conditions with 50 concurrent applies, but lowering it to -parallelism=5 eliminated most of the DynamoDB lock contention. Also, make sure your DynamoDB table has on-demand billing instead of provisioned—way better for bursty multi-region workflows. Terraform Cloud is solid, but sometimes the issue is just hammering your backend too hard.
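For anyone searching later, the flag goes on the CLI per run; 5 is just what worked for us, tune to taste:

```shell
# Cap concurrent resource operations within a single apply.
# The default is 10; lower values trade speed for less backend pressure.
terraform apply -parallelism=5

# Note: this limits concurrency inside one run. Separate pipeline runs
# against the same state still queue on the DynamoDB lock.
```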
Before jumping to Terraform Cloud, check whether your DynamoDB table has sufficient provisioned capacity, or just switch it to on-demand billing. We hit similar lock contention at 50 concurrent applies—turned out our table was throttling. Also, consider separate S3 buckets per region instead of one global bucket; cross-region lock round-trips add latency. If you do stick with S3 + DynamoDB, leave skip_credentials_validation at its default of false in your backend config and add explicit depends_on chains to stagger applies by region.
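If you go the bucket-per-region route, partial backend configuration keeps the root module generic; file and bucket names here are hypothetical:

```hcl
# backend.tf: leave the region-specific settings out of the block
# and supply them at init time per region.
terraform {
  backend "s3" {
    key     = "core/terraform.tfstate"
    encrypt = true
  }
}

# eu-west-1.s3.tfbackend (one small file per region):
#   bucket         = "example-tfstate-eu-west-1"
#   region         = "eu-west-1"
#   dynamodb_table = "example-tf-locks-eu-west-1"
```

Then each pipeline leg runs terraform init -backend-config=eu-west-1.s3.tfbackend before its apply.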