What Are Insurers Missing About Wildfire?
California just approved catastrophe models for insurance rate-setting. Insurers are either fleeing fire-prone areas or adopting models they don't understand. We asked six questions that underwriters actually face — each answer built on the last.
316 properties across three real fires. The underwriting answer depends on whether you model the parcel or the zone.
The Question California Can’t Answer
In 2023, the California Department of Insurance approved the use of catastrophe models for insurance rate-setting — a major policy shift after years of rate suppression that drove carriers out of the state. State Farm, Allstate, and Farmers had already stopped writing new policies in high-risk areas.
But approving cat models raises the real question: at what fidelity does the underwriting decision actually change? Zone-based classification (the baseline) lumps entire geographic areas together. Higher-fidelity models run physics-based fire spread simulations on real terrain with real weather. Both are legal. Both are used. The gap between them isn’t cosmetic — it changes which properties are insurable, at what price, and whether an insurer's portfolio is actually solvent under a bad fire season.
This study measures that gap directly. Three real California fires. Four fidelity levels. The same 316 properties analyzed at each level.
Four Fidelity Levels, Four Different Answers
The same 316 properties. The only thing that changes is how hard the model looks at the problem.
| Level | Method | EAL (Portfolio) | Key Limitation |
|---|---|---|---|
| L0 — Zone-Based | CAL FIRE FHSZ map zones | $54M | No property differentiation within zone |
| L1 — Deterministic Spread | Single Rothermel CA run, observed weather | $78M | One weather realization; no uncertainty |
| L2 — Monte Carlo | 200 draws, perturbed weather & ignition | $112M | Independent fires; no season correlation |
| L3 — Correlated Season | Fire occurrence + severity share weather regime | $127M | Season parameters assumed, not calibrated |
Rothermel Cellular Automaton
The fire spread model is a cellular automaton (CA) driven by a simplified Rothermel spread rate equation. Each cell burns based on the fire behavior of its neighbors, local fuel type, terrain slope, and wind conditions.
- Grid: 100m cells, 5km domain per fire
- Time steps: 73–408 per fire, depending on duration
- Fuel types: grass, shrub, timber, urban, water, firebreak
- Fire behaviors: spotting, crown fire transition, moisture dynamics
This is deliberately simplified from the full Rothermel model. We omit fuel moisture extinction, packing ratio, and mineral damping. The tradeoff: faster computation at the cost of fine-grained accuracy. Sensitivity analysis (Investigation 05) quantifies which simplifications matter.
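A minimal sketch of one CA update step under these simplifications. The specific slope and wind multipliers below are illustrative stand-ins for the Rothermel terms, not the study's calibrated factors; one-step burnout is a further simplification.

```python
import numpy as np

# Cell states for the cellular automaton.
UNBURNED, BURNING, BURNED = 0, 1, 2

def spread_prob(base_rate, slope, wind_align, dt=1.0, cell=100.0):
    """Probability that a burning neighbor ignites this cell in one step.
    base_rate is a fuel-dependent spread rate (m/step); the slope and wind
    multipliers stand in for the Rothermel slope/wind factors."""
    rate = base_rate * (1.0 + 5.0 * max(slope, 0.0)) * (1.0 + 0.5 * max(wind_align, 0.0))
    return min(rate * dt / cell, 1.0)

def step(state, base_rate, elev, wind, rng, cell=100.0):
    """One CA update on a 2D grid. wind is a unit (u, v) vector in (x, y);
    water/firebreak cells carry base_rate 0 and can never ignite."""
    new = state.copy()
    n, m = state.shape
    for bi, bj in np.argwhere(state == BURNING):
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = bi + di, bj + dj
                if (di or dj) and 0 <= ni < n and 0 <= nj < m and state[ni, nj] == UNBURNED:
                    upslope = (elev[ni, nj] - elev[bi, bj]) / cell
                    norm = float(np.hypot(di, dj))
                    # Wind alignment with the spread direction (x = col, y = -row).
                    align = (wind[0] * dj - wind[1] * di) / norm
                    if rng.random() < spread_prob(base_rate[ni, nj], upslope, align, cell=cell):
                        new[ni, nj] = BURNING
        new[bi, bj] = BURNED  # one-step burnout: a deliberate simplification
    return new
```

With extreme fuel rates and flat terrain, a single burning cell ignites all eight neighbors in one step, which is a quick sanity check on the update rule.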
Monte Carlo Design (L2)
The L2 fidelity level runs 200 Monte Carlo draws per ignition scenario. Each draw perturbs the weather and ignition conditions to sample the range of plausible fire outcomes.
- Draws: 200 per ignition scenario, sufficient for stable burn probability estimates
- Wind speed: uniform perturbation (±30%) around observed hourly values
- Wind direction: uniform perturbation (±20°) around observed heading
- Humidity: perturbed each draw; affects fuel moisture and spread rate
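The draw-and-average loop can be sketched as follows. The wind ranges match the stated design (±30% speed, ±20° direction); the humidity range here is an illustrative assumption, since the text does not give one.

```python
import numpy as np

def perturb_weather(obs, rng):
    """One Monte Carlo draw: uniform perturbations around the observed
    hourly series. Wind ranges follow the study design; the humidity
    range is an illustrative assumption."""
    return {
        "wind_speed": obs["wind_speed"] * rng.uniform(0.7, 1.3, size=obs["wind_speed"].shape),
        "wind_dir": (obs["wind_dir"] + rng.uniform(-20.0, 20.0)) % 360.0,
        "humidity": np.clip(obs["humidity"] * rng.uniform(0.8, 1.2), 0.0, 100.0),
    }

def burn_probability(obs, simulate, n_draws=200, seed=0):
    """Per-cell burn probability: the fraction of draws in which each cell
    burned. simulate() maps one weather draw to a boolean burn grid."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_draws):
        total = total + simulate(perturb_weather(obs, rng)).astype(float)
    return total / n_draws
```

Here `simulate` is a hypothetical stand-in for a full fire-spread run; any function returning a boolean burn grid plugs in.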
Property Model
The fire model operates on a 50×50 grid (100m cells). Properties are identified from urban fuel type cells (Landfire classification). Mean exposure is $889K per property, lognormal distribution calibrated to CA FAIR Plan statistics. Fire model results are mapped from the 50×50 fire grid to the 100×100 property grid.
Exposure values are aggregate calibrations, not address-level. No geocoded property records were used.
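Given the stated $889K mean, the lognormal exposure draw reduces to solving for the location parameter once a shape parameter is chosen; the sigma below is an illustrative assumption, since the study calibrates to FAIR Plan aggregates rather than publishing distribution parameters.

```python
import numpy as np

def sample_exposures(n, mean=889_000.0, sigma=0.6, seed=0):
    """Draw lognormal property exposures with a target mean. sigma is an
    assumed shape parameter. For a lognormal, E[X] = exp(mu + sigma^2/2),
    so mu is solved from the target mean."""
    mu = np.log(mean) - sigma**2 / 2.0
    return np.random.default_rng(seed).lognormal(mu, sigma, size=n)
```

The sample mean converges to the target regardless of the assumed sigma; only the spread across properties changes.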
Six Investigations, One Chain
Each answer enabled the next question. One simulation, six investigations — each building on the last.
When Does the Risk Tier Change?
Zone-based classification lumps 96% of properties as HIGH. Monte Carlo reveals 86% EXTREME, 9% HIGH, 6% lower — a 91% reclassification rate.
Does Fidelity Change the Portfolio Answer?
Expected Annual Loss more than doubles from $54M (zone-based) to $127M (correlated season). 272 properties underpay under zone-based pricing.
What’s the Correlated Tail Risk?
Portfolio P99 loss is 43% higher when fire occurrence and severity share weather regimes. Correlation barely moves the median but fattens the tail.
Where Does the Model Fail?
Kincade: 100% sensitivity. Marshall Fire: 12%. Rothermel models miss 88% of WUI conflagration damage. The honest boundary is structure-to-structure ignition.
What Drives the Answer?
Damage ratio dominates everything — changing it swings EAL by 640–1380%. After that, humidity is the strongest driver, shifting classification by 21 percentage points. The burn probability threshold changes EAL by less than 1%.
Does It Hold Across Fires?
Kincade: 91% reclassified, 133% EAL divergence. Camp Fire: 88% reclassified, 128% divergence. The fidelity effect is consistent across radically different fire outcomes.
What the Models Taught Us
The fidelity effect is not subtle and it is not fire-specific. Across both the Kincade Fire and Camp Fire — radically different fire outcomes — the same pattern holds:
| Fire | Reclassification Rate | EAL Divergence (L0 vs. L3) | Properties Underpaying |
|---|---|---|---|
| Kincade Fire | 91% | 133% | ~272 of 316 |
| Camp Fire | 88% | 128% | ~265 of 316 |
Zone-Based Pricing Hides Risk
Treating all WUI properties the same masks a 2x difference in expected loss between the most and least exposed. The average rate is wrong for almost every individual property.
The Portfolio Answer Changes
Fidelity matters more at the portfolio level than the property level. L0 understates aggregate exposure by 133%. An insurer priced to L0 is technically insolvent at L3 risk.
Correlation Is Invisible Until It Isn’t
Single-fire models capture the mean. But the 1-in-100-year season is 43% worse when fires share weather. That tail is what drives reinsurance pricing — and it’s invisible at L2.
Know Your Model’s Boundary
Rothermel works for wildland fire approaching WUI. It fails for structure-to-structure ignition. Both answers are useful — but only if you know which fire type you’re pricing.
What Drives the Answer?
Three inputs matter. One widely-varied parameter has almost no effect.
- Damage ratio: dominates everything. The assumed fraction of property value destroyed when a cell burns. Changing it swings EAL by 640–1380%. This is the parameter commercial cat models spend the most effort calibrating.
- Humidity: the strongest weather driver. Humidity affects fuel moisture and spread rate directly. Changing it shifts risk classification by 21 percentage points — enough to move a property between tiers.
- Wind: drives fire spread direction and rate. Captured in the MC perturbation (±30% speed, ±20° direction). Most of the stochastic variance in burn footprint comes from wind variability.
- Burn probability threshold: the cutoff at which a property is considered "burned" based on its MC burn probability. Varying this threshold changes EAL by less than 1%. The model is insensitive to where exactly you draw this line because the burn probability distribution is bimodal — properties are either usually burned or rarely burned, with few in between.
The damage ratio is the single most important modeling decision in wildfire insurance. It’s also the least transparent — most commercial cat models treat it as proprietary. This study uses a public calibration from CA FAIR Plan statistics, which is why we state it explicitly and test its sensitivity.
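The threshold insensitivity follows directly from bimodality, and a toy demonstration makes it concrete. The burn-probability modes, damage ratio, and uniform exposure below are illustrative assumptions, not study outputs.

```python
import numpy as np

def eal_with_cutoff(burn_prob, exposure, damage_ratio, threshold):
    """Rate-style EAL: a property contributes its expected damage only if
    its MC burn probability clears the cutoff."""
    included = burn_prob >= threshold
    return float(np.sum(included * burn_prob * exposure * damage_ratio))

# Bimodal burn probabilities: properties are either usually burned or
# rarely burned, with few in between (illustrative values).
rng = np.random.default_rng(1)
hot = rng.random(316) < 0.5
p = np.where(hot, rng.uniform(0.85, 1.0, 316), rng.uniform(0.0, 0.05, 316))
exposure = np.full(316, 889_000.0)

# Any cutoff between the two modes selects exactly the same properties,
# so EAL barely moves as the threshold varies.
eals = [eal_with_cutoff(p, exposure, 0.6, t) for t in (0.1, 0.3, 0.5)]
```

With a genuinely unimodal burn-probability distribution the same sweep would move EAL substantially, which is why the insensitivity is a property of the data, not of the formula.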
How We Checked the Results
Validation is layered: fire model physics validated against observed perimeters, risk classifications validated against actual burn outcomes, portfolio losses calibrated to industry data, and cross-fire consistency confirmed.
- Fire perimeter validation. Model fire footprint compared against NIFC perimeters using Intersection-over-Union (IoU), sensitivity (burned cells detected), and specificity (unburned cells correctly identified).
- Risk classification validation. Properties that actually burned checked against their risk tier assignment. All four fidelity levels achieve 99–100% sensitivity for wildland fires — every property that burned was classified as high-risk at every fidelity level.
- Portfolio loss calibration. Expected Annual Loss estimates compared to CA FAIR Plan loss ratios and CDI reported claim volumes. Order-of-magnitude agreement confirmed. Note: absolute EAL values differ from industry rates by ~10x (see Limitations); the L0-vs-L3 relative comparisons are valid.
- Cross-fire consistency. Identical pipeline run on Kincade and Camp Fire. Reclassification rates (88–91%) and EAL divergence (128–133%) are stable across fires with radically different outcomes — different terrain, different weather, different damage extents.
- Honest boundary demonstration. Marshall Fire used as a deliberate failure case. The model achieves only 12–14% sensitivity — confirming that Rothermel CA cannot capture structure-to-structure WUI conflagration. This is not a bug; it is the correct answer to a different question.
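The perimeter metrics above are standard confusion-matrix quantities on boolean rasters; a minimal sketch of how they are computed:

```python
import numpy as np

def perimeter_metrics(pred, obs):
    """Compare a modeled burn footprint to an observed perimeter raster.
    Both inputs are boolean grids (True = burned)."""
    tp = int(np.sum(pred & obs))    # burned cells detected
    fp = int(np.sum(pred & ~obs))   # false alarms
    fn = int(np.sum(~pred & obs))   # burned cells missed
    tn = int(np.sum(~pred & ~obs))  # unburned cells correctly identified
    return {
        "iou": tp / (tp + fp + fn),         # Intersection-over-Union
        "sensitivity": tp / (tp + fn),      # fraction of burned cells detected
        "specificity": tn / (tn + fp),      # fraction of unburned cells correct
    }
```

The Marshall Fire failure case shows up here as high specificity with very low sensitivity: the model correctly leaves wildland cells unburned but misses the structure-driven footprint.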
What This Model Cannot Do
Every model has boundaries. These are ours — stated explicitly so no one mistakes a 5km demonstration for a production catastrophe model.
- 5km domain. Each fire covers one neighborhood, not a full portfolio footprint. Scaling to county or state level would require thousands of domains and real address-level exposure data.
- No structure ignition model. The fire model tracks wildland spread only. It cannot model ember intrusion, radiant heat from neighboring structures, or the structure-to-structure ignition cascade that drives WUI conflagration. This is why Marshall Fire fails (Investigation 04).
- No defensible space data. Defensible space (vegetation clearance around structures) is modeled as a crude urban-neighbor proxy, not from parcel-level surveys or aerial imagery.
- 100m resolution. Commercial catastrophe models operate at 1–30m resolution. Our 100m cells cannot distinguish individual structures, driveways, or vegetation patterns within a cell.
- Rate-based EAL vs. simulation-based EAL. Our Expected Annual Loss uses conditional burn probabilities from the fire simulation, not actuarial frequency data. The resulting EAL values differ from industry loss rates by approximately 10x. The relative comparisons (L0 vs. L3) are valid; the absolute dollar amounts are not production-ready.
- Season model parameters are assumed. Fire occurrence rate and weather amplification factors in the correlated season model (Investigation 03) are assumed, not validated against historical season-level data. These affect absolute tail risk estimates but not the correlation effect itself.
This is a methodology demonstration, not a production model. The goal is to show that fidelity changes the underwriting decision — not to produce rate-ready loss estimates. A production version would require address-level exposure, structure vulnerability curves, and validation against historical claim data.
What the Analysis Supports
Five conclusions drawn from the model results across three fires. Each traces to a specific finding.
01. Don’t use zone-based classification for individual underwriting decisions
Zone-based classification misclassifies 91% of properties and understates portfolio EAL by 2.4x. It may be appropriate for coarse portfolio screening but not for individual policy pricing. Carriers who adopted cat models for rate filings but left zone-based classification in their underwriting workflow have a fidelity mismatch.
02. Run Monte Carlo, not deterministic, for portfolio pricing
The jump from L1 (deterministic, $78M EAL) to L2 (Monte Carlo, $112M EAL) is 44%. Deterministic models give a single fire footprint from observed weather. The Monte Carlo captures the range of plausible outcomes from the same ignition — which is what matters for pricing. The compute cost is low; the pricing error from skipping it is not.
03. Use the correlated season model for reinsurance pricing
Portfolio P99 loss is 43% higher when fire occurrence and severity share weather regimes (L3 vs. L2). Reinsurance treaties are priced to tail events — the 1-in-100-year season is the number that sets treaty attachment points. A model that treats each fire as independent systematically underprices that risk.
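The mechanism behind the fattened tail can be seen in a toy season model. All parameters below are illustrative assumptions, not the study's calibration, and only severity shares the regime here (a simplification of the study's occurrence-plus-severity coupling): the regime factor is drawn per fire in the independent model and once per season in the correlated one, so per-fire loss marginals are identical in both modes.

```python
import numpy as np

def season_losses(n_seasons, correlated, seed=0):
    """Toy season model (illustrative parameters). Each fire's severity is
    scaled by a weather-regime factor, drawn per fire (independent) or
    once per season (correlated)."""
    rng = np.random.default_rng(seed)
    losses = np.empty(n_seasons)
    for s in range(n_seasons):
        season_regime = rng.lognormal(0.0, 0.5)   # one weather regime per season
        total = 0.0
        for _ in range(rng.poisson(5.0)):         # fires per season
            regime = season_regime if correlated else rng.lognormal(0.0, 0.5)
            total += regime * rng.lognormal(15.0, 0.5)  # per-fire base loss ($)
        losses[s] = total
    return losses

ind = season_losses(20_000, correlated=False)
cor = season_losses(20_000, correlated=True)
# Means stay close; the P99 season loss rises when fires share a regime.
```

Because the mean barely moves while the tail fattens, a treaty priced off the independent model's P99 attaches too low, which is the underpricing the recommendation describes.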
04. Draw explicit model boundaries for WUI conflagration
Rothermel achieves 100% sensitivity for the Kincade Fire (wildland spread) and 12% for the Marshall Fire (structure-to-structure WUI conflagration). These are not both “wildfire” — they are different physics. Carriers underwriting both types with the same model are pricing one correctly and the other nearly blind.
05. Focus calibration effort on damage ratios, not fire spread parameters
Sensitivity analysis shows that varying the damage ratio swings EAL by 640–1380%. Wind and humidity matter. The burn probability threshold doesn’t. Most commercial cat model validation focuses on fire spread accuracy (perimeter matching); this study suggests the larger pricing error comes from damage ratio uncertainty, not fire physics.
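A one-at-a-time sweep of the kind described is simple to express. `run_eal` is a hypothetical stand-in for the full fire-to-loss pipeline; the toy model in the usage below exists only to exercise the sweep.

```python
def oat_sensitivity(run_eal, baseline, ranges):
    """One-at-a-time sensitivity: vary each parameter across its range with
    the others held at baseline; report each EAL swing relative to the
    baseline EAL."""
    base = run_eal(**baseline)
    swings = {}
    for name, values in ranges.items():
        eals = [run_eal(**{**baseline, name: v}) for v in values]
        swings[name] = (max(eals) - min(eals)) / base
    return swings
```

Ranking parameters by their relative swing is what separates the damage ratio (dominant) from the burn probability threshold (negligible) in Investigation 05.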
Explore the Data
Test the assumptions behind the insurance analysis.
Sources
No synthetic data. Every input to the fire model, property model, and portfolio model is sourced from public federal or state datasets.
- USGS 3DEP / SRTM — Terrain elevation (100×100 grid, 200m/cell, 3 CA fire domains)
- NLCD 2021 / Landfire — Fuel types (6 categories: grass, shrub, timber, urban, water, firebreak)
- ASOS via Iowa Environmental Mesonet — Hourly weather (temperature, humidity, wind speed/direction, 73–408 hours per fire)
- NIFC ArcGIS — Fire perimeters (GeoJSON polygons for Kincade, Camp Fire, Marshall)
- NASA FIRMS VIIRS — Satellite fire detections (lat/lon, fire radiative power, confidence)
- CA FAIR Plan / CDI 2023 Annual Report — Insurance exposure statistics and loss ratios (statewide aggregate)
- NIST, Colorado DOI — Marshall Fire structure counts and insured loss totals
6 investigations · 3 real California fires (Kincade, Camp, Marshall) · 316 properties · 200 Monte Carlo draws · 4 fidelity levels · validated against FAIR Plan statistics and NIST post-fire investigations · all inputs from public sources.