AWS bill shock: $12k spike after moving to on-demand instances
So we migrated a batch of workloads from reserved instances to on-demand last month to test some burstable traffic patterns. Thought we'd be careful with it, but somehow we got hit with a $12k bill increase.
Turned out some automation was spinning up way more instances than expected during off-peak hours, and we weren't catching it because our alerts were set on commitment-based budgets, not actual spend.
Has anyone else had this? What's your strategy for preventing runaway on-demand costs? We're looking at switching back to a hybrid model with reserved + spot, but I want to make sure we're not just papering over a monitoring gap.
Also—AWS support's cost optimization team hasn't been particularly helpful. Just saying.
Ouch. Pro tip: set up AWS Budgets with actual cost alerts (not just forecasts), and pair it with an SNS notification that triggers a Lambda to auto-scale down or pause instances if you breach. Also check CloudTrail to see exactly which automation spawned those instances—bet there's a runaway loop somewhere.
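If it helps, here's roughly what that breaker Lambda could look like. The ASG name, spend limit, and fallback capacity are made-up placeholders; the boto3 calls (Cost Explorer `get_cost_and_usage`, Auto Scaling `set_desired_capacity`) are real APIs, but treat this as a sketch, not tested infra:

```python
# Budget circuit breaker: check month-to-date spend and scale the ASG
# down to a safe baseline if we've blown past the limit.
MONTHLY_LIMIT_USD = 5000.0      # assumption: your hard cutoff
ASG_NAME = "batch-workers"      # hypothetical ASG name
SAFE_DESIRED_CAPACITY = 2       # baseline to fall back to

def month_to_date_spend(ce_response):
    """Sum UnblendedCost across the ResultsByTime entries that
    Cost Explorer's get_cost_and_usage returns."""
    return sum(
        float(r["Total"]["UnblendedCost"]["Amount"])
        for r in ce_response["ResultsByTime"]
    )

def breached(spend, limit=MONTHLY_LIMIT_USD):
    return spend >= limit

def handler(event, context):
    # Imports kept inside the handler so the decision logic above
    # stays testable without boto3 installed.
    import boto3
    from datetime import date
    ce = boto3.client("ce")
    today = date.today()
    resp = ce.get_cost_and_usage(
        TimePeriod={
            "Start": today.replace(day=1).isoformat(),
            "End": today.isoformat(),
        },
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
    )
    spend = month_to_date_spend(resp)
    if breached(spend):
        autoscaling = boto3.client("autoscaling")
        autoscaling.set_desired_capacity(
            AutoScalingGroupName=ASG_NAME,
            DesiredCapacity=SAFE_DESIRED_CAPACITY,
            HonorCooldown=False,
        )
    return {"spend": spend, "breached": breached(spend)}
```

Whether you scale down automatically or just page a human at that point is a judgment call; the nice part is the check runs on actual Cost Explorer numbers, not a forecast.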
Fwiw, we switched to spot instances for our non-critical batch work and kept on-demand only for baseline capacity. Saved us a ton without the surprise bill.
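The baseline-on-demand / burst-on-spot split maps pretty directly onto an ASG `MixedInstancesPolicy`. A rough sketch (template name, instance types, subnets, and capacities are all hypothetical):

```python
# Spot for burst, on-demand for baseline: everything above
# OnDemandBaseCapacity is filled with spot instances.
MIXED_INSTANCES_POLICY = {
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "batch-workers",  # hypothetical template
            "Version": "$Latest",
        },
        # Several interchangeable types improves spot availability.
        "Overrides": [
            {"InstanceType": t} for t in ("m5.large", "m5a.large", "m6i.large")
        ],
    },
    "InstancesDistribution": {
        "OnDemandBaseCapacity": 2,                 # always-on-demand baseline
        "OnDemandPercentageAboveBaseCapacity": 0,  # all burst capacity is spot
        "SpotAllocationStrategy": "capacity-optimized",
    },
}

def create_group():
    import boto3
    autoscaling = boto3.client("autoscaling")
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="batch-workers",
        MinSize=2,
        MaxSize=20,  # hard ceiling doubles as a cost guardrail
        MixedInstancesPolicy=MIXED_INSTANCES_POLICY,
        VPCZoneIdentifier="subnet-aaa,subnet-bbb",  # placeholder subnets
    )
```

Note the `MaxSize` cap: even if automation goes haywire again, the group physically can't scale past it.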
Thanks for that! Yeah, the Budgets + SNS/Lambda combo sounds solid—way better than relying on forecast alerts. We're definitely adding actual cost thresholds this week. Did you set a hard cutoff or just a warning first?
Also consider tagging everything religiously and using Cost Anomaly Detection—it catches weird spend patterns way faster than manual budgets. We tag by team/project, then set anomaly alerts per tag. Caught a runaway RDS instance that way before it got bad. https://docs.aws.amazon.com/ has the setup guide. And honestly, on-demand without reserved instances should come with guardrails (like instance count limits in your ASG or quota restrictions) or you're just asking for trouble.
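The per-tag setup can also be scripted with boto3's Cost Explorer API (`create_anomaly_monitor` / `create_anomaly_subscription`). The tag key, team values, and email below are placeholders, and the $100 threshold is just an example:

```python
# One CUSTOM anomaly monitor + daily email subscription per team tag.
TAG_KEY = "team"                     # assumption: your cost-allocation tag
TEAMS = ["data-eng", "platform"]     # hypothetical team tag values
ALERT_EMAIL = "finops@example.com"   # placeholder subscriber

def monitor_spec(team):
    """Build the AnomalyMonitor payload for one team tag value."""
    return {
        "MonitorName": f"{TAG_KEY}-{team}-spend",
        "MonitorType": "CUSTOM",
        "MonitorSpecification": {
            "Tags": {"Key": TAG_KEY, "Values": [team]},
        },
    }

def create_monitors(teams=TEAMS):
    import boto3
    ce = boto3.client("ce")
    for team in teams:
        arn = ce.create_anomaly_monitor(
            AnomalyMonitor=monitor_spec(team)
        )["MonitorArn"]
        ce.create_anomaly_subscription(AnomalySubscription={
            "SubscriptionName": f"{TAG_KEY}-{team}-alerts",
            "MonitorArnList": [arn],
            "Subscribers": [{"Address": ALERT_EMAIL, "Type": "EMAIL"}],
            "Frequency": "DAILY",
            "Threshold": 100.0,  # alert on anomalies >= $100 impact
        })
```

One caveat: anomaly monitors only see tags you've activated as cost allocation tags in Billing, so flip those on first or the filters match nothing.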