Most cloud bill spikes don’t happen all at once. They build gradually, through a series of small decisions that each seem reasonable in isolation but compound into a nasty invoice surprise. Here are the five warning signs we see most often — and what to do about each one.
Sign 1: Dev Environments Running 24/7 With No Schedule
Development and staging environments are supposed to help engineers build faster. They’re not supposed to run around the clock, including weekends, holidays, and the two weeks nobody uses them.
A mid-size dev environment — a few EC2 instances, an RDS database, a load balancer — might cost $600–$1,200/month running continuously. The same environment running 9am–7pm weekdays costs 70% less: $180–$360/month.
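The arithmetic behind those figures fits in a few lines. The dollar amounts here are placeholders from the ranges above; plug in your own always-on monthly cost:

```python
# Compare 24/7 vs. business-hours-only cost for a dev environment.
HOURS_PER_WEEK = 24 * 7      # 168 always-on hours per week
SCHEDULED_HOURS = 10 * 5     # 9am-7pm, weekdays only = 50 hours per week

def scheduled_cost(monthly_cost_always_on: float) -> float:
    """Estimate the monthly cost if the environment only runs on schedule."""
    return monthly_cost_always_on * SCHEDULED_HOURS / HOURS_PER_WEEK

# Midpoint of the $600-$1,200 range quoted above.
always_on = 900.0
savings_pct = 1 - SCHEDULED_HOURS / HOURS_PER_WEEK
print(f"scheduled: ${scheduled_cost(always_on):.0f}/month "
      f"({savings_pct:.0%} savings)")
```

That 50/168 ratio is where the roughly 70% savings comes from; a tighter schedule saves even more.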
The warning sign: You can log into your cloud console at 3am on a Saturday and every dev environment is running. Multiply those idle instance-hours across 12 months and you have the magnitude of the problem.
What to do: Implement instance scheduling. AWS offers its Instance Scheduler solution. Azure Automation can stop and start VMs on a schedule. GCP Compute Engine supports instance schedules via resource policies. This is a half-day implementation that pays for itself within weeks.
Sign 2: Autoscaling Without Maximum Limits
Autoscaling is one of the best things about cloud computing. It’s also one of the fastest ways to get a five-figure invoice if you configure it wrong.
The standard mistake: setting a minimum of 2 instances and no maximum (or a maximum so high it’s effectively unlimited). Under normal conditions, you run 3–5 instances. Then traffic spikes — legitimately, or due to a bug, a scraper, or a DDoS — and autoscaling adds 50, 100, 200 instances before anyone notices.
The warning sign: Autoscaling groups with no maximum set, or maximums set to numbers that would create obvious financial pain if fully utilised.
What to do: Audit every autoscaling group and set a maximum that reflects realistic upper-bound traffic. Then configure a CloudWatch alarm (or equivalent) that alerts when instance count exceeds 80% of maximum. A Slack notification when you’re near capacity is much better than an invoice notification after the fact.
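The alert logic itself is simple enough to sketch. Wiring it to CloudWatch's metrics is left to your monitoring tooling; the function name and the 80% default below are just the rule from the text:

```python
# Decide whether an autoscaling group is close enough to its configured
# maximum to warrant an alert, before the invoice does it for you.
def near_capacity(current_instances: int, max_size: int,
                  threshold: float = 0.8) -> bool:
    """True when in-service instances reach `threshold` of the group max."""
    return current_instances >= max_size * threshold

# A group capped at 20 instances should page someone at 16.
print(near_capacity(16, 20))   # -> True
print(near_capacity(12, 20))   # -> False
```

The point of the 80% line is lead time: you hear about the spike while there is still headroom, not after the group has pinned itself at maximum for a week.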
Sign 3: No Budget Alerts Set
This one should be obvious, but AWS reports that fewer than 40% of accounts have billing alerts configured. The same is likely true on Azure and GCP.
Budget alerts don’t stop spending. But they create the minimum feedback loop needed to act before a monthly invoice lands. Without them, you’re flying blind until the end of the month.
The warning sign: Open your cloud billing console and check if you have active budget alerts configured. If the answer is no — or if you have alerts but they’re set to thresholds you’ve never actually hit — you have no early warning system.
What to do: Set budget alerts at 50%, 80%, and 100% of your expected monthly spend on every active cloud account. Route them to a Slack channel that at least one person monitors. This takes 10 minutes and costs nothing.
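Translating those percentages into the dollar thresholds you actually type into the budget console is trivial, which is part of the point. The expected-spend figure below is a placeholder:

```python
# Dollar thresholds for budget alerts at 50/80/100% of expected spend.
ALERT_LEVELS = (0.5, 0.8, 1.0)

def alert_thresholds(expected_monthly_spend: float) -> list[float]:
    """Dollar amounts at which each budget alert should fire."""
    return [round(expected_monthly_spend * level, 2) for level in ALERT_LEVELS]

print(alert_thresholds(10_000))   # -> [5000.0, 8000.0, 10000.0]
```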
Sign 4: Forgotten Snapshots and Abandoned Volumes
EBS volumes, Azure Managed Disks, and GCP Persistent Disks don't necessarily disappear when you delete the instance they were attached to. In AWS, for example, the root volume is typically deleted on termination, but additional EBS volumes are preserved by default.
Every one of those orphaned volumes is billing you for storage, indefinitely, until someone explicitly deletes it. Same with snapshots: they accumulate silently in the background, each one incurring a small monthly charge that adds up over years.
A company running AWS for 3+ years can easily have a terabyte or more of snapshots from instances that no longer exist. At $0.05/GB-month, a terabyte of snapshots is $50/month: unremarkable on its own, but pure waste.
The warning sign: Go to your cloud console and look at unattached storage volumes (filter by “available” state in AWS). Then look at snapshots and sort by age. If you have snapshots older than 6 months with no obvious reference, you have orphaned storage.
What to do: Set up a lifecycle policy for snapshots (delete after X days unless flagged as permanent). Delete unattached volumes after verifying no one needs them. Do this quarterly at minimum.
Sign 5: Data Transfer You Didn’t Model
Data transfer fees are the most common source of unexpected cloud bills. The charges are small per GB, so they don’t look alarming in isolation — until you multiply by the actual gigabytes flowing through your system.
The most common sources of surprise data transfer spend:
- Cross-region replication: Replicating data from us-east-1 to eu-west-1 costs $0.02/GB. Pushing 10TB of changes a month costs $200 in transfer fees, before any storage costs.
- Cross-AZ traffic: Within a region, data crossing availability zones costs $0.01/GB in each direction. Microservice architectures can generate substantial cross-AZ traffic invisibly.
- Third-party SaaS integration: Sending data to Datadog, Splunk, or similar observability tools involves egress. At scale, this can exceed the cost of the tool itself.
- NAT Gateway fees: $0.045/GB processed through a NAT Gateway in us-east-1, plus an hourly charge per gateway. High-traffic private subnets can generate surprising NAT costs.
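Using the per-GB rates from the list above, a rough monthly model looks like this. The volumes are placeholders; substitute the gigabytes actually flowing through your system:

```python
# Rough monthly data-transfer model using the per-GB rates listed above
# (us-east-1 figures; other regions differ).
RATES = {                      # $/GB
    "cross_region": 0.02,
    "cross_az": 0.01,
    "nat_gateway": 0.045,
}

def transfer_cost(volumes_gb: dict) -> float:
    """Total monthly transfer spend for the given per-category volumes."""
    return sum(RATES[category] * gb for category, gb in volumes_gb.items())

monthly = transfer_cost({
    "cross_region": 10_000,    # 10 TB of replication changes
    "cross_az": 5_000,
    "nat_gateway": 2_000,
})
print(f"${monthly:.2f}/month")   # -> $340.00/month
```

None of those line items looks alarming per GB, which is exactly why modelling the volumes matters.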
The warning sign: Pull your AWS Cost Explorer (or equivalent) and look at the “Data Transfer” line item. If it’s more than 10% of your total bill and you’ve never explicitly modelled it, you have unexamined transfer spend.
What to do: Use VPC endpoints to keep AWS service traffic off the NAT Gateway; gateway endpoints for S3 and DynamoDB are free. Review whether your cross-region replication is actually necessary. Tag data transfer resources so you can attribute the cost to the service generating it.
These five patterns are consistent enough that we built anomaly detection into Xplorr specifically to catch them. When your spend in any category spikes beyond its 7-day rolling average, you get an alert — before the invoice lands, not after.
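A minimal sketch of that rolling-average check (the 1.5× multiplier and the sample data are illustrative assumptions, not Xplorr's actual detector):

```python
# Flag a spend anomaly when today's spend exceeds the trailing
# 7-day average by some multiple.
def is_anomaly(daily_spend: list[float], multiplier: float = 1.5) -> bool:
    """Compare the latest day against the mean of the 7 days before it."""
    if len(daily_spend) < 8:
        return False                  # not enough history yet
    window = daily_spend[-8:-1]       # the 7 days preceding today
    baseline = sum(window) / len(window)
    return daily_spend[-1] > baseline * multiplier

spend = [100, 102, 98, 101, 99, 103, 100, 240]   # sudden jump on day 8
print(is_anomaly(spend))   # -> True
```

A production detector needs more care (seasonality, weekday/weekend patterns, per-category baselines), but the core comparison is this simple.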
The common thread across all five is the same: the cost is incurred incrementally, nobody’s actively watching, and the pain only becomes visible at invoice time. The fix in every case is the same too: visibility, automation, and alerts.
Xplorr monitors for these patterns automatically across AWS, Azure, and GCP. Request beta access — free for early teams.