GCP’s pricing model is different from AWS and Azure in ways that directly affect your cost optimization strategy. Sustained Use Discounts apply automatically. Committed Use Discounts don’t require upfront payment. BigQuery charges per query by default — and that model can be either very cheap or very expensive depending on how your analysts write SQL.
This guide covers the GCP-specific optimizations that make the biggest impact, with real pricing numbers and a practical workflow for teams spending $5,000–$50,000/month on Google Cloud.
Sustained Use Discounts: The Discount You’re Already Getting
GCP automatically applies Sustained Use Discounts (SUDs) to Compute Engine VMs that run for more than 25% of a billing month. No commitment, no action required. The discount increases the longer an instance runs.
How SUDs scale for N1 VMs:
| Usage in Month | Effective Discount |
|---|---|
| 0–25% | 0% (full on-demand) |
| 25–50% | ~20% on incremental usage |
| 50–75% | ~40% on incremental usage |
| 75–100% | ~60% on incremental usage |
For an n1-standard-4 running the entire month in us-central1, the on-demand price is $0.1900/hr ($138.70/month). With SUDs applied automatically, your effective cost drops to approximately $97/month — a 30% discount without doing anything.
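Those tier rates compose into the ~30% figure. A quick sketch of the math, using the incremental discounts from the table above and GCP's 730-hour billing month (the function is illustrative, not a GCP API):

```python
# Approximate the blended Sustained Use Discount for an N1 VM.
# Each tuple: (fraction of the month in this tier, price multiplier).
N1_SUD_TIERS = [
    (0.25, 1.00),  # 0-25% of the month: full on-demand price
    (0.25, 0.80),  # 25-50%: 20% off incremental usage
    (0.25, 0.60),  # 50-75%: 40% off
    (0.25, 0.40),  # 75-100%: 60% off
]

def blended_multiplier(frac_of_month: float) -> float:
    """Effective price multiplier for a VM running frac_of_month of the month."""
    remaining, paid = frac_of_month, 0.0
    for width, mult in N1_SUD_TIERS:
        used = min(remaining, width)
        paid += used * mult
        remaining -= used
        if remaining <= 0:
            break
    return paid / frac_of_month

on_demand_hr = 0.19   # n1-standard-4, us-central1
hours = 730           # GCP billing month
print(round(blended_multiplier(1.0), 2))                      # 0.7 -> 30% off
print(round(on_demand_hr * hours * blended_multiplier(1.0), 2))  # ~97.09
```

Running half the month blends to a 10% discount (full price for the first quartile, 20% off the second), which is why SUDs reward long-running instances.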
The catch: SUD rates vary by machine type. N1 VMs earn the full schedule above (up to 30% blended over a month); N2, N2D, and C2 instances earn SUDs at lower maximum rates; E2 and Tau T2D machine types get no SUDs at all. Check the GCP pricing calculator for your specific machine type.
Key difference from AWS/Azure: Neither AWS nor Azure offers automatic usage-based discounts. You get on-demand pricing until you explicitly purchase Reserved Instances or Savings Plans. GCP’s SUDs mean your baseline cost is already lower before you start optimizing.
Committed Use Discounts: When SUDs Aren’t Enough
If SUDs give you 30%, Committed Use Discounts (CUDs) go further — 37% for 1-year and 55% for 3-year commitments on most machine types.
n1-standard-4 pricing comparison (us-central1):
| Pricing Model | Hourly Rate | Monthly Cost | Savings vs On-Demand |
|---|---|---|---|
| On-demand | $0.1900 | $138.70 | — |
| With SUDs (full month) | ~$0.1330 | ~$97.00 | 30% |
| 1-year CUD | $0.1197 | $87.40 | 37% |
| 3-year CUD | $0.0855 | $62.40 | 55% |
Unlike AWS Reserved Instances, which reserve their best rates for partial or full upfront payment, GCP CUDs never involve an upfront payment. You commit to a minimum level of usage (measured in vCPUs and GB of memory) and pay for it monthly, whether you use it or not. This makes CUDs lower risk from a cash-flow perspective — no large upfront capital expenditure.
CUDs are resource-based, not instance-based. You commit to a number of vCPUs and GB of memory in a region. GCP applies the discount across any eligible VM that uses those resources. This means you can change machine types, resize instances, or redistribute workloads without losing your discount — as long as total vCPU and memory usage in that region meets your commitment.
How to size your CUD commitment:
```sql
-- Export your Compute Engine usage for the last 90 days
-- via the billing export in BigQuery, then:
SELECT
  TIMESTAMP_TRUNC(usage_start_time, MONTH) AS month,
  sku.description,
  SUM(usage.amount) AS total_usage,
  usage.unit
FROM `project.dataset.gcp_billing_export_v1_XXXXXX`
WHERE service.description = 'Compute Engine'
  AND sku.description LIKE '%Core%'
  AND usage_start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 90 DAY)
GROUP BY 1, 2, 4
ORDER BY 1, 3 DESC;
```
Look at your minimum monthly vCPU and memory usage. Commit to 70–80% of that floor. Let SUDs cover the variable portion above your commitment.
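That sizing rule, as a sketch. The usage numbers below are hypothetical; the 70–80% fraction is the guideline above:

```python
# Size a CUD commitment from observed monthly vCPU minimums.
# Commit to ~70-80% of the floor; SUDs cover the variable portion above it.
def cud_commitment(monthly_min_vcpus: list[float], fraction: float = 0.75) -> int:
    """Recommended vCPUs to commit, given each month's minimum concurrent usage."""
    floor = min(monthly_min_vcpus)   # the lowest monthly baseline
    return int(floor * fraction)     # round down: never over-commit

# Hypothetical 3-month baseline of minimum concurrent vCPUs:
print(cud_commitment([120, 140, 128]))   # 90
```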
Preemptible and Spot VMs: 60–91% Savings With Real Trade-offs
GCP Spot VMs (the successor to Preemptible VMs) offer 60–91% discounts versus on-demand pricing. An n1-standard-4 that costs $0.19/hr on-demand runs at roughly $0.04/hr as a Spot VM — a 79% reduction. Note that Spot prices are dynamic, so the exact rate varies by region and over time.
The trade-off: GCP can reclaim Spot VMs at any time with a 30-second warning. Your workload needs to handle that gracefully.
Where Spot VMs work well:
- CI/CD runners. Jenkins agents, GitHub Actions self-hosted runners, GitLab runners. A preempted build just restarts. If you’re running 10 n1-standard-8 build agents at $0.38/hr each on-demand, switching to Spot drops your cost from $2,774/month to ~$583/month.
- Batch processing. ETL jobs, video transcoding, image processing. Break work into small units, checkpoint progress, retry failed units. Most batch frameworks (Apache Beam, Dataflow, custom Kubernetes jobs) handle this natively.
- Data pipeline workers. Spark executors on Dataproc, Flink task managers. These frameworks redistribute work when nodes disappear. Run the driver/master on a standard VM, workers on Spot.
- Distributed training. ML training jobs with periodic checkpointing. A preemption costs you the work since the last checkpoint, not the entire training run.
Where Spot VMs backfire:
- Stateful workloads without fast recovery. A database server on a Spot VM is a bad idea regardless of savings. Even with replication, failover introduces downtime and potential data inconsistency.
- Services with low fault tolerance. If your service can’t handle losing a node without user-visible impact, Spot VMs create reliability risk that outweighs cost savings. A single API server on Spot behind a load balancer is fine if you have 10 instances. It’s a problem if you have 2.
- Workloads with long startup times. If your application takes 15 minutes to initialize (loading large models, warming caches, building in-memory indexes), frequent preemptions mean you spend more time starting up than doing useful work.
- Sustained high-demand periods. Spot VM availability varies by zone and machine type. During peak demand, GCP reclaims Spot VMs more aggressively. If your workload needs guaranteed capacity during specific windows (month-end processing, daily batch at 2am), Spot alone isn’t reliable.
Practical Spot VM pattern on GKE:
```bash
# Node pool with Spot VMs for batch workloads
gcloud container node-pools create batch-spot \
  --cluster=my-cluster \
  --machine-type=n1-standard-8 \
  --spot \
  --enable-autoscaling \
  --min-nodes=0 \
  --max-nodes=20 \
  --node-taints=cloud.google.com/gke-spot=true:NoSchedule
```
Then schedule batch workloads onto Spot nodes using tolerations and node affinity. Keep your stateful services on a separate standard node pool.
BigQuery Cost Control: The Biggest GCP-Specific Optimization
BigQuery is where GCP bills surprise people. On-demand pricing charges $6.25 per TB scanned — and a single poorly written query against a multi-TB table can cost $20+ in seconds.
On-demand vs capacity pricing:
| Model | Cost | Best For |
|---|---|---|
| On-demand | $6.25/TB scanned | Sporadic queries, small datasets |
| Standard edition | $0.04/slot-hour (autoscaling) | Predictable workloads with variable query volume |
| Enterprise edition | $0.06/slot-hour (with advanced features) | Governance, security, cross-region queries |
A slot is a unit of BigQuery compute capacity, roughly one virtual CPU’s worth of query processing. A 100-slot reservation running around the clock costs $0.04 × 100 = $4.00/hr = $2,920/month, but with autoscaling you’re billed per slot-second only while queries actually run, so real capacity costs are usually well below the full-reservation figure. Whether capacity beats on-demand depends entirely on your query volume.
Rule of thumb: If your team scans more than 15 TB/month consistently, capacity pricing almost always wins, because scan-heavy queries typically consume far fewer slot-dollars than the $6.25/TB they’d be billed on-demand.
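To run that comparison on your own numbers, a minimal sketch. The workload figures below are hypothetical; the two rates are the list prices from the table above:

```python
# Compare BigQuery on-demand vs. autoscaled Standard-edition capacity cost.
ON_DEMAND_PER_TB = 6.25      # $ per TB scanned (on-demand)
STANDARD_PER_SLOT_HR = 0.04  # $ per slot-hour (Standard edition)

def on_demand_cost(tb_scanned: float) -> float:
    return tb_scanned * ON_DEMAND_PER_TB

def capacity_cost(slot_hours: float) -> float:
    # Autoscaling bills only for slot-time actually consumed.
    return slot_hours * STANDARD_PER_SLOT_HR

# Hypothetical month: 60 TB scanned, consuming ~5,000 slot-hours of compute.
print(on_demand_cost(60))    # 375.0
print(capacity_cost(5000))   # 200.0
```

The slot-hours figure is the hard part to estimate up front; BigQuery's `INFORMATION_SCHEMA.JOBS` views report per-job slot usage, so measure a typical month before committing.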
Real-World Example: $3,800/Month to $1,100/Month
An analytics team was running $3,800/month in BigQuery on-demand pricing. Their data warehouse had 8 TB of event data, and analysts were running 40–60 queries per day across dashboards, ad-hoc analysis, and scheduled reports.
Here’s what was happening:
- No partition pruning. The main events table was partitioned by `event_date`, but most queries used `WHERE timestamp > '2026-01-01'` instead of `WHERE event_date > '2026-01-01'`. BigQuery couldn’t use the partition filter, so every query scanned the entire table.
- `SELECT *` everywhere. Dashboard queries selected all 47 columns to display 6 of them. BigQuery is columnar — scanning unused columns costs real money.
- No materialized views. Five dashboards ran the same aggregation query every time they loaded. Each dashboard load scanned 200 GB.
Fix 1: Partition pruning. Changed queries to filter on the partition column directly.
```sql
-- Before: scans the entire table (~8 TB)
SELECT * FROM events WHERE timestamp > '2026-01-01';

-- After: scans only matching partitions (~800 GB)
SELECT * FROM events WHERE event_date > '2026-01-01';
```
Fix 2: Column pruning. Replaced SELECT * with explicit column lists.
```sql
-- Before: scans all 47 columns
SELECT * FROM events WHERE event_date = '2026-03-18';

-- After: scans only the 6 columns needed
SELECT event_id, user_id, event_type, event_date, properties, source
FROM events WHERE event_date = '2026-03-18';
```
Fix 3: Materialized views for repeated aggregations.
```sql
CREATE MATERIALIZED VIEW mv_daily_event_counts AS
SELECT
  event_date,
  event_type,
  COUNT(*) AS event_count,
  -- BigQuery materialized views don't support exact COUNT(DISTINCT);
  -- APPROX_COUNT_DISTINCT is the supported alternative
  APPROX_COUNT_DISTINCT(user_id) AS approx_unique_users
FROM events
GROUP BY event_date, event_type;
```
Dashboard queries against the materialized view scan megabytes instead of hundreds of gigabytes. BigQuery refreshes materialized views automatically when base tables change.
Fix 4: Switched to Standard edition. After reducing query volume through the above fixes, the remaining workload fit comfortably in 50 slots.
Result breakdown:
| Change | Before | After | Monthly Savings |
|---|---|---|---|
| Partition pruning | ~400 TB scanned/mo | ~50 TB scanned/mo | $2,187 |
| Column pruning | (included above) | (included above) | — |
| Materialized views | ~5 TB repeated/day | ~5 GB/day | ~$900 |
| Capacity pricing | $6.25/TB on-demand | 50 slots Standard | Additional savings |
| Total | $3,800/month | $1,100/month | $2,700/month |
The partition pruning fix alone was worth $2,187/month. It took 20 minutes to identify and 2 hours to update the affected queries.
BigQuery Cost Controls to Set Immediately
Before optimizing queries, set these guardrails:
```bash
# Set a maximum bytes billed per query (1 TB limit)
bq query --nouse_legacy_sql \
  --maximum_bytes_billed=1000000000000 \
  "SELECT * FROM my_dataset.my_table"

# Set a project-level custom quota
# In Cloud Console: IAM & Admin → Quotas
# Filter: "BigQuery API - Query usage per day per user"
# Set it to a reasonable limit (e.g., 10 TB/day per user)
```
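Where queries are issued from application code rather than the CLI, the same cap can be enforced client-side before a query runs. A sketch of that guardrail logic; the function is illustrative (not a GCP API), and the byte estimate would come from a dry run of the query:

```python
# Enforce a per-query byte cap before running a query,
# mirroring what --maximum_bytes_billed does server-side.
ON_DEMAND_PER_TB = 6.25
TB = 10**12

def check_query(bytes_scanned_estimate: int, max_bytes: int = 1 * TB) -> float:
    """Return the estimated on-demand cost in dollars, or raise if over the cap.

    `bytes_scanned_estimate` would come from a dry run of the query."""
    if bytes_scanned_estimate > max_bytes:
        raise ValueError(
            f"query would scan {bytes_scanned_estimate / TB:.2f} TB, "
            f"over the {max_bytes / TB:.2f} TB cap")
    return bytes_scanned_estimate / TB * ON_DEMAND_PER_TB

print(check_query(200 * 10**9))   # 0.2 TB scanned -> 1.25
```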
Also enable the require_partition_filter option on large partitioned tables:
```sql
ALTER TABLE my_dataset.events
SET OPTIONS (require_partition_filter = true);
```
This prevents any query from running against the table without a partition filter — eliminating accidental full-table scans entirely.
Setting Up GCP Billing Export to BigQuery
You can’t optimize what you can’t measure. GCP’s billing export to BigQuery gives you granular cost data you can query directly.
Setup (one-time, takes 5 minutes):
- Go to Billing → Billing Export in Cloud Console
- Select your billing account
- Under “BigQuery Export,” click “Edit Settings”
- Choose a project and dataset (create a dedicated one: `billing_export`)
- Enable both “Standard usage cost” and “Detailed usage cost”
Data starts flowing within a few hours. Historical data for the current month backfills automatically.
Useful queries once export is live:
```sql
-- Top 10 most expensive services this month
SELECT
  service.description AS service,
  ROUND(SUM(cost) + SUM(IFNULL((
    SELECT SUM(c.amount) FROM UNNEST(credits) c
  ), 0)), 2) AS net_cost
FROM `project.billing_export.gcp_billing_export_v1_XXXXXX`
WHERE invoice.month = '202603'
GROUP BY 1
ORDER BY 2 DESC
LIMIT 10;

-- Daily spend trend for Compute Engine
SELECT
  DATE(usage_start_time) AS date,
  ROUND(SUM(cost), 2) AS daily_cost
FROM `project.billing_export.gcp_billing_export_v1_XXXXXX`
WHERE service.description = 'Compute Engine'
  AND usage_start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY 1
ORDER BY 1;

-- Find resources with cost spikes (day-over-day change > 50%)
WITH daily AS (
  SELECT
    DATE(usage_start_time) AS date,
    project.id AS project_id,
    service.description AS service,
    ROUND(SUM(cost), 2) AS cost
  FROM `project.billing_export.gcp_billing_export_v1_XXXXXX`
  WHERE usage_start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 14 DAY)
  GROUP BY 1, 2, 3
)
SELECT
  d1.date,
  d1.project_id,
  d1.service,
  d1.cost AS today_cost,
  d2.cost AS yesterday_cost,
  ROUND((d1.cost - d2.cost) / NULLIF(d2.cost, 0) * 100, 1) AS pct_change
FROM daily d1
JOIN daily d2
  ON d1.project_id = d2.project_id
  AND d1.service = d2.service
  AND d1.date = DATE_ADD(d2.date, INTERVAL 1 DAY)
WHERE d1.cost > 10
  AND (d1.cost - d2.cost) / NULLIF(d2.cost, 0) > 0.5
ORDER BY d1.cost - d2.cost DESC;
```
Build a Looker Studio dashboard on top of this dataset and review it weekly. This single habit catches cost anomalies before they compound.
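If you’d rather alert from a script than a dashboard, the spike query’s day-over-day logic is a few lines of Python. The cost rows below are hypothetical:

```python
# Flag day-over-day cost jumps > 50%, like the SQL spike query above.
def find_spikes(daily_costs: dict[str, float],
                threshold: float = 0.5,
                min_cost: float = 10.0) -> list[str]:
    """Return dates whose cost rose more than `threshold` vs. the prior entry.

    Assumes one entry per day, keyed by ISO date (so sorting is chronological)."""
    dates = sorted(daily_costs)
    spikes = []
    for prev, cur in zip(dates, dates[1:]):
        before, after = daily_costs[prev], daily_costs[cur]
        if after > min_cost and before > 0 and (after - before) / before > threshold:
            spikes.append(cur)
    return spikes

costs = {"2026-03-15": 40.0, "2026-03-16": 42.0, "2026-03-17": 95.0}
print(find_spikes(costs))   # ['2026-03-17']
```

Feed it rows from the billing export and wire the result to whatever paging or chat channel your team already watches.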
Putting It All Together: Priority Order
If you’re spending $10,000+/month on GCP and haven’t done a cost optimization pass, here’s where to start:
- Enable billing export to BigQuery. Takes 5 minutes, pays for itself immediately with visibility.
- Audit BigQuery usage. If you’re on on-demand pricing, check your total bytes scanned last month. Enable `require_partition_filter` on large tables. Fix `SELECT *` queries. This is usually the highest-ROI fix on GCP.
- Review Compute Engine recommendations. Go to Compute Engine → VM instances → Recommendations column. GCP surfaces right-sizing suggestions directly in the console.
- Size CUD commitments. Look at your 90-day Compute Engine baseline. Commit to 70% of the floor.
- Identify Spot VM candidates. Any batch processing, CI/CD, or stateless workload with fault tolerance built in.
- Set budget alerts. Billing → Budgets & Alerts. Set alerts at 50%, 80%, and 100% of expected spend. Connect them to Pub/Sub for programmatic responses (auto-scaling down, pausing non-critical jobs).
The combination of these steps typically reduces GCP spend by 25–40% for teams that haven’t previously optimized. For teams already using SUDs and basic right-sizing, BigQuery optimization and CUD commitments usually unlock an additional 15–20%.
If you’re managing GCP alongside AWS or Azure, correlating cost data across all three clouds in a single view makes optimization significantly more efficient. Tools like Xplorr normalize billing data across providers so you’re not switching between three different consoles and three different pricing models to understand your total cloud spend.
But regardless of tooling, the fundamentals here work. Measure first, fix BigQuery, right-size compute, commit to your baseline. Those four steps cover the majority of GCP cost savings for most teams.