Studies · Case Study

What Are Insurers Missing About Wildfire?

California just approved catastrophe models for insurance rate-setting. Insurers are either fleeing fire-prone areas or adopting models they don't understand. We asked six questions that underwriters actually face — each answer built on the last.

316 properties across three real fires. The underwriting answer depends on whether you model the parcel or the zone.

3 real California fires · 316 properties · 200 Monte Carlo draws · 6 investigations
The Decision

The Question California Can’t Answer

In 2023, the California Department of Insurance approved the use of catastrophe models for insurance rate-setting — a major policy shift after years of rate suppression that drove carriers out of the state. State Farm, Allstate, and Farmers had already stopped writing new policies in high-risk areas.

But approving cat models raises the real question: at what fidelity does the underwriting decision actually change? Zone-based classification (the baseline) lumps entire geographic areas together. Higher-fidelity models run physics-based fire spread simulations on real terrain with real weather. Both are legal. Both are used. The gap between them isn’t cosmetic — it changes which properties are insurable, at what price, and whether an insurer's portfolio is actually solvent under a bad fire season.

This study measures that gap directly. Three real California fires. Four fidelity levels. The same 316 properties analyzed at each level.

The Models

Four Fidelity Levels, Four Different Answers

The same 316 properties. The only thing that changes is how hard the model looks at the problem.

| Level | Method | EAL (Portfolio) | Key Limitation |
|---|---|---|---|
| L0 — Zone-Based | CAL FIRE FHSZ map zones | $54M | No property differentiation within zone |
| L1 — Deterministic Spread | Single Rothermel CA run, observed weather | $78M | One weather realization; no uncertainty |
| L2 — Monte Carlo | 200 draws, perturbed weather & ignition | $112M | Independent fires; no season correlation |
| L3 — Correlated Season | Fire occurrence + severity share weather regime | $127M | Season parameters assumed, not calibrated |

Rothermel Cellular Automaton

The fire spread model is a cellular automaton (CA) driven by a simplified Rothermel spread rate equation. Each cell burns based on the fire behavior of its neighbors, local fuel type, terrain slope, and wind conditions.

  • Grid resolution: 50 × 50 (100m cells, 5km domain per fire)
  • Timestep: 1 hour (73–408 steps per fire, depending on duration)
  • Fuel types: 6 (grass, shrub, timber, urban, water, firebreak)
  • Stochastic elements: 3 (spotting, crown fire transition, moisture dynamics)

R = R₀(1 + φw + φs)
Simplified Rothermel spread rate. R₀ = base rate from fuel type. φw = wind factor. φs = slope factor.

This is deliberately simplified from the full Rothermel model. We omit fuel moisture extinction, packing ratio, and mineral damping. The tradeoff: faster computation at the cost of fine-grained accuracy. Sensitivity analysis (Investigation 05) quantifies which simplifications matter.
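The simplified spread-rate equation can be sketched in code. The base rates and the functional forms of the wind and slope factors below are illustrative placeholders, not the study's calibrated values:

```python
# Sketch of R = R0 * (1 + phi_w + phi_s). Base rates and the factor
# forms (c_w, c_s) are hypothetical calibrations for illustration only.

BASE_RATE = {  # base spread rate R0, m/hr, by fuel type (placeholder values)
    "grass": 60.0, "shrub": 35.0, "timber": 15.0,
    "urban": 5.0, "water": 0.0, "firebreak": 0.0,
}

def spread_rate(fuel, wind_speed, slope, c_w=0.1, c_s=5.0):
    """Spread rate for one cell.

    wind_speed in m/s, slope as rise/run. Fuel moisture extinction,
    packing ratio, and mineral damping are omitted, matching the
    study's stated simplification.
    """
    r0 = BASE_RATE[fuel]
    phi_w = c_w * wind_speed           # wind factor (assumed linear form)
    phi_s = c_s * max(slope, 0.0) ** 2  # slope factor (assumed quadratic form)
    return r0 * (1.0 + phi_w + phi_s)
```

Non-burnable fuels (water, firebreak) fall out naturally because their base rate is zero, so wind and slope cannot create spread there.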

Monte Carlo Design (L2)

The L2 fidelity level runs 200 Monte Carlo draws per ignition scenario. Each draw perturbs the weather and ignition conditions to sample the range of plausible fire outcomes.

  • MC draws: 200 per ignition scenario, sufficient for stable burn probability estimates
  • Wind speed: ±30%, uniform perturbation around observed hourly values
  • Wind direction: ±20°, uniform perturbation around observed heading
  • Humidity: ±15%, affecting fuel moisture and spread rate
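The perturbation scheme can be sketched as follows; the function names and the clamping of humidity to a valid percentage range are our own choices, not details from the study:

```python
# Sketch of the L2 Monte Carlo design: each draw jitters the observed
# hourly weather within the stated uniform bounds.
import random

def perturb_weather(obs, rng):
    """Return one perturbed copy of an hourly weather record.

    obs: dict with 'wind_speed' (m/s), 'wind_dir' (deg), 'humidity' (%).
    Bounds match the design: +/-30% speed, +/-20 deg, +/-15% humidity.
    """
    return {
        "wind_speed": obs["wind_speed"] * rng.uniform(0.7, 1.3),
        "wind_dir": (obs["wind_dir"] + rng.uniform(-20.0, 20.0)) % 360.0,
        # clamp to a physically valid relative-humidity range (our choice)
        "humidity": min(100.0, max(0.0, obs["humidity"] * rng.uniform(0.85, 1.15))),
    }

def monte_carlo_draws(hourly_obs, n_draws=200, seed=0):
    """Yield n_draws perturbed weather sequences for one ignition scenario."""
    rng = random.Random(seed)
    for _ in range(n_draws):
        yield [perturb_weather(obs, rng) for obs in hourly_obs]
```

Each draw perturbs the entire hourly sequence, so a single draw represents one internally consistent alternative weather history for the fire.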

Property Model

The fire model operates on a 50×50 grid (100m cells). Properties are identified from urban fuel-type cells (Landfire classification). Mean exposure is $889K per property, drawn from a lognormal distribution calibrated to CA FAIR Plan statistics. Fire model results are mapped from the 50×50 fire grid onto the 100×100 property grid.

Exposure values are aggregate calibrations, not address-level. No geocoded property records were used.
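Drawing exposures with the stated mean can be sketched as below; the dispersion parameter sigma is an illustrative assumption, not the study's FAIR Plan calibration:

```python
# Sketch of the exposure model: lognormal property values with a
# $889K arithmetic mean. sigma = 0.6 is an assumed dispersion.
import math
import random

def sample_exposures(n, mean=889_000.0, sigma=0.6, seed=0):
    """Draw n lognormal property values with the given arithmetic mean.

    For a lognormal, mean = exp(mu + sigma^2 / 2), so mu is solved
    from the target mean rather than assumed directly.
    """
    mu = math.log(mean) - sigma ** 2 / 2.0
    rng = random.Random(seed)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]
```

Solving for mu this way keeps the portfolio total consistent with the aggregate calibration no matter what dispersion is assumed.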

Investigations

Six Investigations, One Chain

Each answer enabled the next question. One simulation, six investigations — each building on the last.

Fidelity Lesson

What the Models Taught Us

The Core Finding
Zone-based pricing understates Expected Annual Loss by 2.4x ($54M vs. $127M) and misclassifies 91% of properties into the wrong risk tier.

The fidelity effect is not subtle and it is not fire-specific. Across both the Kincade Fire and Camp Fire — radically different fire outcomes — the same pattern holds:

| Fire | Reclassification Rate | EAL Divergence (L0 vs. L3) | Properties Underpaying |
|---|---|---|---|
| Kincade Fire | 91% | 133% | ~272 of 316 |
| Camp Fire | 88% | 128% | ~265 of 316 |

Zone-Based Pricing Hides Risk

Treating all WUI properties the same masks a 2x difference in expected loss between the most and least exposed. The average rate is wrong for almost every individual property.

The Portfolio Answer Changes

Fidelity matters more at the portfolio level than the property level. L0 understates aggregate exposure by 133%. An insurer priced to L0 is technically insolvent at L3 risk.

Correlation Is Invisible Until It Isn’t

Single-fire models capture the mean. But the 1-in-100 year season is 43% worse when fires share weather. That tail is what drives reinsurance pricing — and it’s invisible at L2.

Know Your Model’s Boundary

Rothermel works for wildland fire approaching WUI. It fails for structure-to-structure ignition. Both answers are useful — but only if you know which fire type you’re pricing.
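The correlation lesson can be illustrated with a toy season model: three fires per season whose losses either share one seasonal weather shock or draw independent shocks. All distributions and numbers here are invented for illustration, not the study's L3 calibration:

```python
# Toy sketch of the L2-vs-L3 effect: shared weather fattens the season
# tail (P99) even when the mean season barely moves.
import random

def season_losses(n_seasons, correlated, seed=0):
    """Simulate total seasonal loss ($M, arbitrary scale) per season."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_seasons):
        regime = rng.gauss(0.0, 1.0)   # seasonal weather severity
        total = 0.0
        for _ in range(3):             # three fires per season
            # correlated: all fires feel the same regime;
            # independent: each fire draws its own shock
            shock = regime if correlated else rng.gauss(0.0, 1.0)
            total += max(0.0, 10.0 + 5.0 * shock)
        totals.append(total)
    return sorted(totals)

def p99(sorted_losses):
    """1-in-100 season loss from a sorted sample."""
    return sorted_losses[int(0.99 * len(sorted_losses))]
```

Both variants produce similar average seasons; only the tail separates them, which is exactly why the effect is invisible to a model that prices to the mean.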

Sensitivity Analysis

What Drives the Answer?

Three inputs matter. One widely-varied parameter has almost no effect.

  • Damage Ratio (Critical): the assumed fraction of property value destroyed when a cell burns. It dominates everything: changing it swings EAL by 640–1380%. This is the parameter commercial cat models spend the most effort calibrating.
  • Humidity (High): the strongest weather driver. Humidity affects fuel moisture and spread rate directly; changing it shifts risk classification by 21 percentage points, enough to move a property between tiers.
  • Wind Speed & Direction (High): drives fire spread direction and rate, captured in the MC perturbation (±30% speed, ±20° direction). Most of the stochastic variance in burn footprint comes from wind variability.
  • Burn Probability Threshold (Negligible): the cutoff at which a property is considered "burned" based on its MC burn probability. Varying this threshold changes EAL by less than 1%. The model is insensitive to where exactly this line is drawn because the burn probability distribution is bimodal: properties are either usually burned or rarely burned, with few in between.
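A toy calculation shows why the threshold is negligible: with a bimodal burn-probability distribution, every reasonable cutoff lands in the empty middle, so the thresholded loss does not move. The portfolio below is invented for illustration:

```python
# Sketch of threshold insensitivity under a bimodal burn-probability
# distribution. Probabilities, exposures, and the 0.6 damage ratio
# are illustrative, not the study's values.

def thresholded_loss(burn_probs, exposures, threshold, damage_ratio=0.6):
    """Total damage over properties whose burn probability exceeds the cutoff."""
    return sum(damage_ratio * e
               for p, e in zip(burn_probs, exposures) if p > threshold)

# Toy bimodal portfolio: properties are either usually burned (~0.9)
# or rarely burned (~0.05), with nothing in between.
probs = [0.9] * 40 + [0.05] * 60
exps = [1.0] * 100
losses = {t: thresholded_loss(probs, exps, t) for t in (0.2, 0.3, 0.4, 0.5)}
# every cutoff between the two modes selects the same 40 properties
```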

The damage ratio is the single most important modeling decision in wildfire insurance. It’s also the least transparent — most commercial cat models treat it as proprietary. This study uses a public calibration from CA FAIR Plan statistics, which is why we state it explicitly and test its sensitivity.
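One way to see why it dominates: in a rate-based formulation, EAL is linear in the damage ratio, so any calibration error passes straight through to the loss estimate. A minimal sketch, with a function name of our own choosing:

```python
# Sketch of a rate-based EAL: a uniform damage ratio scales the whole
# portfolio estimate, so a 2x calibration error means a 2x EAL error.

def expected_annual_loss(burn_probs, exposures, damage_ratio):
    """EAL = damage_ratio * sum over properties of P(burn) * exposure."""
    return damage_ratio * sum(p * e for p, e in zip(burn_probs, exposures))
```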

Validation

How We Checked the Results

Validation is layered: fire model physics validated against observed perimeters, risk classifications validated against actual burn outcomes, portfolio losses calibrated to industry data, and cross-fire consistency confirmed.

  1. Fire perimeter validation. Model fire footprint compared against NIFC perimeters using Intersection-over-Union (IoU), sensitivity (burned cells detected), and specificity (unburned cells correctly identified).
  2. Risk classification validation. Properties that actually burned checked against their risk tier assignment. All four fidelity levels achieve 99–100% sensitivity for wildland fires — every property that burned was classified as high-risk at every fidelity level.
  3. Portfolio loss calibration. Expected Annual Loss estimates compared to CA FAIR Plan loss ratios and CDI reported claim volumes. Order-of-magnitude agreement confirmed. Note: absolute EAL values differ from industry rates by ~10x (see Limitations); the L0-vs-L3 relative comparisons are valid.
  4. Cross-fire consistency. Identical pipeline run on Kincade and Camp Fire. Reclassification rates (88–91%) and EAL divergence (128–133%) are stable across fires with radically different outcomes — different terrain, different weather, different damage extents.
  5. Honest boundary demonstration. Marshall Fire used as a deliberate failure case. The model achieves only 12–14% sensitivity — confirming that Rothermel CA cannot capture structure-to-structure WUI conflagration. This is not a bug; it is the correct answer to a different question.
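The perimeter metrics in step 1 can be computed on burn grids represented as sets of burned cell indices; the function name and representation here are our own:

```python
# Sketch of IoU, sensitivity, and specificity for perimeter validation.

def perimeter_metrics(modeled, observed, n_cells):
    """Compare modeled vs. observed burned-cell sets on a grid of n_cells.

    Returns (IoU, sensitivity, specificity).
    """
    tp = len(modeled & observed)   # burned in both model and observation
    fp = len(modeled - observed)   # modeled burn that did not occur
    fn = len(observed - modeled)   # observed burn the model missed
    tn = n_cells - tp - fp - fn    # correctly unburned cells
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    sens = tp / (tp + fn) if (tp + fn) else 1.0
    spec = tn / (tn + fp) if (tn + fp) else 1.0
    return iou, sens, spec
```

Sensitivity is the metric behind the Marshall Fire boundary result: a model that misses most burned cells scores low on sensitivity even if it rarely predicts false burn.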
Limitations

What This Model Cannot Do

Every model has boundaries. These are ours — stated explicitly so no one mistakes a 5km demonstration for a production catastrophe model.

  1. 5km domain. Each fire covers one neighborhood, not a full portfolio footprint. Scaling to county or state level would require thousands of domains and real address-level exposure data.
  2. No structure ignition model. The fire model tracks wildland spread only. It cannot model ember intrusion, radiant heat from neighboring structures, or the structure-to-structure ignition cascade that drives WUI conflagration. This is why Marshall Fire fails (Investigation 04).
  3. No defensible space data. Defensible space (vegetation clearance around structures) is modeled as a crude urban-neighbor proxy, not from parcel-level surveys or aerial imagery.
  4. 100m resolution. Commercial catastrophe models operate at 1–30m resolution. Our 100m cells cannot distinguish individual structures, driveways, or vegetation patterns within a cell.
  5. Rate-based EAL vs. simulation-based EAL. Our Expected Annual Loss uses conditional burn probabilities from the fire simulation, not actuarial frequency data. The resulting EAL values differ from industry loss rates by approximately 10x. The relative comparisons (L0 vs. L3) are valid; the absolute dollar amounts are not production-ready.
  6. Season model parameters are assumed. Fire occurrence rate and weather amplification factors in the correlated season model (Investigation 03) are assumed, not validated against historical season-level data. These affect absolute tail risk estimates but not the correlation effect itself.

This is a methodology demonstration, not a production model. The goal is to show that fidelity changes the underwriting decision — not to produce rate-ready loss estimates. A production version would require address-level exposure, structure vulnerability curves, and validation against historical claim data.

Recommendations

What the Analysis Supports

Five conclusions drawn from the model results across three fires and four fidelity levels. Each traces to a specific finding.

  • 01

    Don’t use zone-based classification for individual underwriting decisions

    Zone-based classification misclassifies 91% of properties and understates portfolio EAL by 2.4x. It may be appropriate for coarse portfolio screening but not for individual policy pricing. Carriers who adopted cat models for rate filings but left zone-based classification in their underwriting workflow have a fidelity mismatch.

  • 02

    Run Monte Carlo, not deterministic, for portfolio pricing

    The jump from L1 (deterministic, $78M EAL) to L2 (Monte Carlo, $112M EAL) is 44%. Deterministic models give a single fire footprint from observed weather. The Monte Carlo captures the range of plausible outcomes from the same ignition — which is what matters for pricing. The compute cost is low; the pricing error from skipping it is not.

  • 03

    Use the correlated season model for reinsurance pricing

    Portfolio P99 loss is 43% higher when fire occurrence and severity share weather regimes (L3 vs. L2). Reinsurance treaties are priced to tail events — the 1-in-100-year season is the number that sets treaty attachment points. A model that treats each fire as independent systematically underprices that risk.

  • 04

    Draw explicit model boundaries for WUI conflagration

    Rothermel achieves 100% sensitivity for the Kincade Fire (wildland spread) and 12% for the Marshall Fire (structure-to-structure WUI conflagration). These are not both “wildfire” — they are different physics. Carriers underwriting both types with the same model are pricing one correctly and the other nearly blind.

  • 05

    Focus calibration effort on damage ratios, not fire spread parameters

Sensitivity analysis shows the damage ratio swings EAL by 640–1380%. Wind and humidity matter. The burn probability threshold doesn't. Most commercial cat model validation focuses on fire spread accuracy (perimeter matching); this study suggests the larger pricing error comes from damage ratio uncertainty, not fire physics.

References

Sources

No synthetic data. Every input to the fire model, property model, and portfolio model is sourced from public federal or state datasets.

  1. USGS 3DEP / SRTM — Terrain elevation (100×100 grid, 200m/cell, 3 CA fire domains)
  2. NLCD 2021 / Landfire — Fuel types (6 categories: grass, shrub, timber, urban, water, firebreak)
  3. ASOS via Iowa Environmental Mesonet — Hourly weather (temperature, humidity, wind speed/direction, 73–408 hours per fire)
  4. NIFC ArcGIS — Fire perimeters (GeoJSON polygons for Kincade, Camp Fire, Marshall)
  5. NASA FIRMS VIIRS — Satellite fire detections (lat/lon, fire radiative power, confidence)
  6. CA FAIR Plan / CDI 2023 Annual Report — Insurance exposure statistics and loss ratios (statewide aggregate)
  7. NIST, Colorado DOI — Marshall Fire structure counts and insured loss totals

6 investigations · 3 real California fires (Kincade, Camp, Marshall) · 316 properties · 200 Monte Carlo draws · 4 fidelity levels · validated against FAIR Plan statistics and NIST post-fire investigations · all inputs from public sources.