
Where's the Breaking Point?

We swept data center load from 0 to 30 GW in 1 GW increments and recorded the load at which annual blackout hours first exceed the industry reliability standard (LOLE of 0.1 events/year, roughly 9 hours).
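A minimal sketch of that sweep, assuming synthetic hourly series and our own function names (none of this is the project's actual code):

```python
import numpy as np

RELIABILITY_LIMIT_HOURS = 9  # rough hourly equivalent of the LOLE standard cited above

def blackout_hours(available_mw: np.ndarray, demand_mw: np.ndarray, dc_load_mw: float) -> int:
    """Count hours where available supply falls short of demand plus a flat data center load."""
    return int(((demand_mw + dc_load_mw) > available_mw).sum())

def find_breaking_point(available_mw, demand_mw, max_gw=30, step_gw=1):
    """Sweep data center load upward; return the first level that breaches the standard."""
    for gw in range(0, max_gw + 1, step_gw):
        if blackout_hours(available_mw, demand_mw, gw * 1_000) > RELIABILITY_LIMIT_HOURS:
            return gw
    return None  # the grid absorbed the entire sweep range

# Synthetic one-year hourly series (8760 hours) standing in for real ERCOT data.
rng = np.random.default_rng(0)
demand = 55_000 + 15_000 * rng.random(8760)   # MW
supply = 72_000 + 8_000 * rng.random(8760)    # MW of available capacity

print(f"Breaking point: {find_breaking_point(supply, demand)} GW")
```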

[Chart: Blackout Hours vs. Data Center Load (GW)]
Finding
In the best weather year (2024), ERCOT's current grid can absorb about 3–4 GW of data center baseload before exceeding the reliability standard. In the worst year, only 2–3 GW. That's less than 2% of the 226 GW interconnection queue.

The solid lines show 2024 (the best year in our dataset). The dashed lines show the worst weather year across 2015–2024. The gap between them is the risk that single-year analysis hides — and the reason Monte Carlo matters for grid planning.

At 70% and 90% renewables, the grid already exceeds reliability standards with zero data centers in bad weather years. The problem isn't just data centers — it's the combination of high renewables, variable weather, and insufficient dispatchable backup.

Model: hourly merit-order dispatch, 10 weather years (2015–2024), demand normalized to 2024, 2 GW sweep resolution. Solid = 2024, dashed = worst year.
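The dispatch core of such a model can be small. A sketch under our own simplifying assumptions (a hypothetical thermal fleet of (capacity, marginal cost) pairs, renewables at zero marginal cost, no ramp or transmission constraints):

```python
import numpy as np

def merit_order_dispatch(demand_mw, renewable_mw, thermal_fleet):
    """Serve net load with thermal units cheapest-first; return unserved MW per hour.

    thermal_fleet: list of (capacity_mw, marginal_cost_per_mwh) pairs -- a
    hypothetical structure, not the project's actual data model.
    """
    fleet = sorted(thermal_fleet, key=lambda unit: unit[1])  # ascending marginal cost
    unserved = np.zeros(len(demand_mw))
    for t, (d, re) in enumerate(zip(demand_mw, renewable_mw)):
        residual = max(d - re, 0.0)           # net load after zero-cost renewables
        for capacity, _cost in fleet:
            residual -= min(capacity, residual)
            if residual == 0.0:
                break
        unserved[t] = residual                # leftover net load = shortfall
    return unserved

# Any hour with unserved energy counts as a blackout hour; summing that
# boolean mask over a weather year gives the curves plotted above.
```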

With forced outages, the tipping point shifts to zero. The deterministic model put the tipping point at 4–6 GW. Run it stochastically (200 draws, 7–10% forced outage rates, ±3% demand noise) and the 70% RE grid is already unreliable with no data centers at all — 55 mean blackout hours, P90 of 115. At that point you're no longer asking how many data centers the grid can handle. You're asking whether there's enough dispatchable gas to keep the lights on, period.
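A bare-bones version of that stochastic layer, again with hypothetical names and a deliberately crude outage model (each unit is out for the whole draw with its forced outage rate, rather than failing hour by hour):

```python
import numpy as np

def stochastic_blackout_hours(demand_mw, renewable_mw, fleet, n_draws=200, seed=42):
    """Monte Carlo over forced outages and demand noise.

    fleet: list of (capacity_mw, forced_outage_rate) pairs, with outage rates
    in the 7-10% range. Returns (mean, P90) of annual blackout hours.
    """
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_draws):
        # Draw the available fleet: each unit survives with probability (1 - EFOR).
        online_mw = sum(cap for cap, efor in fleet if rng.random() > efor)
        # Apply uniform +/-3% noise to the demand trace.
        noisy_demand = demand_mw * (1 + rng.uniform(-0.03, 0.03))
        shortfall_hours = int(((noisy_demand - renewable_mw) > online_mw).sum())
        results.append(shortfall_hours)
    results = np.asarray(results)
    return results.mean(), np.percentile(results, 90)
```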

Same grid, same weather — different answer. The deterministic model greenlit 4–6 GW of data center load. The stochastic model says the 70% RE grid fails at 0 GW. The only variable that changed is whether you include forced outages. That single modeling choice is the difference between approving billions in interconnection and pulling the emergency brake.

Sensitivity Analysis

What Drives the Answer?

Across eight investigations, we varied dozens of parameters. Not all of them matter equally. This table summarizes which inputs drive the key findings and which ones don't: the core output of Analysis-Driven Modeling.

| Parameter | Range Tested | Effect on Breaking Point | Decision Impact |
| --- | --- | --- | --- |
| Weather year | 2015–2024 (10 years) | 2–3 GW (worst year) to 4–6 GW (best) | Determines if grid survives |
| Forced outage rate | 7–10% (stochastic, 200 draws) | Shifts breaking point to 0 GW at 70% RE; adds ~15 GW to gas requirement | Makes or breaks reliability |
| RE penetration | Current / 70% / 90% | Higher RE = more volatile; 70% RE grid unreliable at 0 GW DC in worst year | Determines investment path |
| DC load flexibility | 0–50% flexible; 3–10% DR | 5% DR doubles grid capacity (4 GW → 8 GW); 30% flex cuts blackouts by ⅓ | Reframes DC as asset, not just load |
| Transmission limits | Single-node / 2-zone / 6-zone | 67h → 528h blackout for same 10 GW scenario (8× increase) | Spatial fidelity is critical |
| Gas-electric coupling | 0.5%–5% coupling strength | At 3%+ coupling, any shock >20% cascades to total collapse | Changes decision from “weatherize” to “redesign” |
| Dispatch intelligence | Heuristic / 6h forecast / perfect foresight | 6h forecast captures 97% of optimal value; minor capacity effect | Operational improvement, doesn't change capacity needs |
Sensitivity Summary
Three parameters dominate: weather variability, forced outage rate, and transmission topology. Together they can shift the “safe” data center load from 6 GW down to zero and multiply blackout hours by 8×. Load flexibility is a powerful mitigant that costs nothing to build. Dispatch intelligence and gas price are secondary — they affect operations, not the fundamental capacity question.
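Mechanically, a ranking like this falls out of the sweeps themselves: compute the blackout-hour spread each parameter induces across its tested range and sort. A sketch, where only the transmission pair (67 → 528) is taken from the table above and every other low/high value is an illustrative placeholder, not a Q1–Q8 output:

```python
# (low, high) blackout hours across each parameter's tested range, with other
# inputs held at the base case. Only the transmission pair is real; the rest
# are placeholders for illustration.
sweep_results = {
    "weather year":          (9, 110),
    "forced outage rate":    (9, 115),
    "transmission topology": (67, 528),
    "DC load flexibility":   (45, 67),
    "dispatch intelligence": (60, 67),
}

# Tornado-style ranking: the wider the spread, the bigger the decision impact.
ranked = sorted(sweep_results.items(), key=lambda kv: kv[1][1] - kv[1][0], reverse=True)
for name, (lo, hi) in ranked:
    print(f"{name:22s} spread = {hi - lo:4d} blackout hours")
```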

Sensitivity ranges reflect actual parameter sweeps from Q1–Q8. See individual investigation pages for detailed results.