Causal Reasoning: Why did this happen?

Global Risk Intelligence Network

36 interconnected global risks. Bayesian inference reveals the cascades a spreadsheet can't. Click a risk, watch the world shift.

Node Color

Shows current probability (posterior). Green = low risk, amber = moderate, red = high. Brighter red means the inference engine calculates higher likelihood.

Low → High

Node Size

Larger = higher posterior probability. Watch nodes grow when you inject evidence — the size change shows you how much each risk shifted.

Edge Thickness

Thicker arrows = stronger causal influence. A thick edge from Pandemic to Healthcare Overload (0.70) means a pandemic strongly causes overload. Thin edges (0.15) show weak but real connections.

Domain Clusters

Nodes cluster by domain (health, climate, economic, tech, geopolitical, societal). The most interesting cascades are the edges that cross between clusters.

Evidence Rings

Nodes you've clicked get a bright border ring — these are your evidence (things you're assuming to be true). Shift+click for negative evidence (assuming false). The rest of the network updates around your evidence.

Flowing Particles

Colored dots flow along edges from cause to effect. More particles = stronger activation. Watch them surge when you inject evidence — that's belief propagating through the causal chain in real time.

What to Look For

After injecting evidence: which nodes turned red that weren't before? Those are the cascade effects — risks that increased not because you clicked them, but because they're downstream in the causal chain.

Node: click to inject evidence · Edge: click to adjust strength · Shift+click: negate · Scroll: zoom · Drag: pan

How It Actually Works

The same progressive disclosure as the Lab. Start with the intuition, go as deep as you want.

Noisy-OR: The Core Equation

Every node in the network computes its probability using a single equation called noisy-OR. The intuition: each parent can independently "cause" the child. The probability the child fires is one minus the probability that every potential cause fails.

Noisy-OR Probability
P(child = true | parents) = 1 - (1 - leak) × ∏ᵢ (1 - qᵢ × pᵢ), with the product taken over active parents i
leak = base rate (prior probability with no parent influence). qᵢ = edge strength (how strongly parent i causes the child). pᵢ = parent i's current probability.

What each term means

  • (1 - leak) is the probability the child does NOT fire spontaneously. Even with no parents active, events can still happen — pandemics don't need a trigger.
  • (1 - qᵢ × pᵢ) is the probability that parent i fails to cause the child. If the parent is definitely true (pᵢ = 1) and the edge is strong (qᵢ = 0.7), there's only a 30% chance this cause fails.
  • The product says ALL causes must fail independently for the child to stay false. Multiple active parents compound — two 50% causes together give 75% activation, not 100%.

Worked Example: Healthcare Overload

Healthcare Overload has base rate (leak) = 0.10 and two parents with evidence:

  • Pandemic = TRUE (p = 1.0), edge strength q = 0.70
  • AMR Crisis = TRUE (p = 1.0), edge strength q = 0.35
P(Healthcare Overload) = 1 - (1 - 0.10) × (1 - 0.70 × 1.0) × (1 - 0.35 × 1.0)
  = 1 - 0.90 × 0.30 × 0.65
  = 1 - 0.1755
  = 0.8245 (82%)
Without parent influence: 10%. With pandemic alone: 73%. With both: 82%. Multiple causes compound but with diminishing returns.
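The arithmetic above can be checked in a few lines — a minimal sketch of the noisy-OR equation (the function name `noisy_or` is ours, not GRIN's):

```python
# Minimal noisy-OR, checked against the Healthcare Overload worked example.
# leak, q, and p values are the ones given in the text.

def noisy_or(leak, parents):
    """P(child) given a list of (edge_strength q, parent_probability p) pairs."""
    fail = 1.0 - leak                # child does not fire spontaneously
    for q, p in parents:
        fail *= 1.0 - q * p          # this parent fails to cause the child
    return 1.0 - fail                # child fires unless every cause fails

base = noisy_or(0.10, [])                              # no parent influence
pandemic_only = noisy_or(0.10, [(0.70, 1.0)])          # Pandemic alone
both = noisy_or(0.10, [(0.70, 1.0), (0.35, 1.0)])      # Pandemic + AMR Crisis

print(round(base, 4), round(pandemic_only, 4), round(both, 4))
# 0.1 0.73 0.8245
```

The three printed values match the 10% / 73% / 82% progression in the example.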

Why Noisy-OR?

The alternative is a full conditional probability table — for a node with 5 parents, that's 2⁵ = 32 parameter combinations to specify manually. Noisy-OR needs just 5 edge strengths plus one base rate. For 36 nodes with ~100 edges, this reduces thousands of parameters to ~136 interpretable numbers. That's right fidelity applied to the model's own parameterization.

Backward Reasoning

Noisy-OR naturally computes forward: "if the parent is true, how likely is the child?" But real reasoning goes both ways. If we observe a recession, that should raise our belief in its causes (sovereign debt crisis, commodity shock, trade war). GRIN implements this with a single backward pass before forward propagation — evidence on children raises parent beliefs proportionally to edge strength, then forward propagation computes the full cascade.
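A sketch of what a single backward update might look like, using the 0.5-per-hop damping described in the Inference Algorithm section below. The exact update rule and function name here are our illustrative assumptions, not GRIN's actual code:

```python
# Diagnostic (backward) reasoning sketch: observing a child true raises
# belief in each parent in proportion to edge strength. The update rule
# (move part-way from prior toward certainty) is an assumption.

def backward_update(parent_prior, q, damping=0.5):
    """Raise a parent's belief after observing its child to be true."""
    return parent_prior + damping * q * (1.0 - parent_prior)

# Observing Recession=TRUE raises its parent Sovereign Debt (prior 6%)
# along the 0.30 edge quoted in the Polycrisis walkthrough:
print(round(backward_update(0.06, 0.30), 3))   # 0.201
```

The key property is directional asymmetry: the forward noisy-OR pass never touches parents, so without some backward step the network could not do diagnostic reasoning at all.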

Six Domains, 36 Nodes

Each node represents a risk event with an annual base-rate probability. The domains aren't silos — the most important cascades cross domain boundaries. A pandemic (health) causes recession (economic) which causes social instability (societal). The network captures these pathways explicitly.

Public Health

6 nodes: disease, healthcare capacity, drug supply, resistance, mental health, bioterror

pandemic, healthcare_overload, pharma_disruption, amr_crisis, mental_health, bioterrorism

Climate & Environment

6 nodes: weather, agriculture, water, tipping points, displacement, energy transition

extreme_weather, ag_shock, water_stress, tipping_point, climate_migration, energy_transition

Economic & Financial

7 nodes: prices, debt, currency, commodities, housing, rates, growth

inflation, sovereign_debt, fx_crisis, commodity_shock, housing_crisis, rate_shock, recession

Technology & Cyber

6 nodes: chips, cyberattack, AI disruption, tech bifurcation, privacy, AI safety

semiconductor, cyberattack, ai_disruption, tech_decoupling, data_privacy, ai_threats

Geopolitical & Security

6 nodes: great power conflict, regional war, nuclear, trade, energy leverage, alliances

great_power, regional_conflict, nuclear_risk, trade_war, energy_weapon, alliance_frag

Societal & Governance

5 nodes: unrest, trust, migration, democracy, disinformation

social_instability, trust_collapse, migration_crisis, democratic_backslide, misinformation

Key Cross-Domain Pathways

The network's power is in the connections that cross domain boundaries. Here are the most consequential pathways:

  • Health → Economic: Pandemic → Recession (strength 0.40). The COVID pathway: disease shutdown causes economic contraction.
  • Climate → Social: Ag Shock → Social Instability (0.35). Food price spikes cause civil unrest — a pattern seen in both the 2007-08 food crisis and the Arab Spring.
  • Geopolitical → Tech: Great Power Conflict → Semiconductor Crisis (0.40). The Taiwan scenario: military confrontation disrupts global chip supply.
  • Tech → Societal: AI Disruption → Misinformation (0.40). AI-powered disinformation erodes shared reality.
  • Societal → Geopolitical: Social Instability → Democratic Backsliding (0.35) → Alliance Fragmentation (0.20). Internal unrest weakens institutions, which weakens international alliances.
  • Economic → back to Health: Recession → Social Instability (0.40) → Mental Health Crisis. The feedback loops matter.
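Encoded as data, the pathways above form a weighted edge list — a sketch using the node identifiers from the domain tables below (a hand-picked subset, not the full 101-edge graph):

```python
# Cross-domain pathways as (cause, effect, strength) triples. Identifiers
# match the node lists in the domain sections; strengths are from the text.

cross_domain_edges = [
    ("pandemic",             "recession",            0.40),
    ("ag_shock",             "social_instability",   0.35),
    ("great_power",          "semiconductor",        0.40),
    ("ai_disruption",        "misinformation",       0.40),
    ("social_instability",   "democratic_backslide", 0.35),
    ("democratic_backslide", "alliance_frag",        0.20),
    ("recession",            "social_instability",   0.40),
]

# Example query: the strongest pathway in this subset.
strongest = max(cross_domain_edges, key=lambda e: e[2])
print(strongest[0], "->", strongest[1])   # four edges tie at 0.40; max() keeps the first
```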

Why 36 Nodes?

Too few (10-15): Misses cross-domain cascades entirely. A health-only model can't see that a pandemic causes recession causes social instability causes democratic backsliding. That 4-hop cascade is exactly what decision-makers need.

Too many (200+): Every pathway becomes a hairball. You can no longer trace why a given risk shifted. The model might be more "accurate" in some statistical sense, but it's useless for the decision it needs to support: where to invest in resilience.

36 is the right fidelity for cross-domain risk intelligence. Every node is interpretable. Every cascade is traceable. Every edge has a clear causal story. This is ADM Step 2 applied to the model itself.

Step-by-Step: The Polycrisis Cascade

Here's exactly what happens when you click "Polycrisis" — four evidence nodes injected simultaneously, and the inference engine computes the full cascade. Follow along by clicking Polycrisis above.

1

Evidence injection. Four nodes are clamped to P=1.0: Pandemic, Extreme Weather, Great Power Conflict, Sovereign Debt Crisis. These span four different domains.

2

Backward pass. Evidence on these 4 nodes raises their parents' beliefs. Sovereign Debt at P=1.0 raises Rate Shock (edge 0.30 backward) and Recession (edge 0.30 backward). Great Power at P=1.0 raises Nuclear Risk (edge 0.30, backward direction).

Rate Shock: 7% → ~18% (from backward pass alone)

3

First forward pass. The noisy-OR equation fires for every non-evidence node. Direct children update first:

Healthcare Overload: P = 1 - (1-0.10)(1-0.70×1.0) = 73%
Recession: multiple parents active → jumps to ~90%+

4

Second-order effects. Recession (now ~90%) activates its children: Social Instability gets hit from Recession (0.40), from Pandemic (0.30), from Extreme Weather via Ag Shock (0.35), and from Great Power via Regional Conflict (0.15).

Social Instability: 1 - (1-0.12)(1-0.40×0.9)(1-0.30×1.0)(1-0.35×0.6)... → 94%

5

Convergence. The engine iterates until all node changes are less than 0.0001%. Typically converges in 8-12 iterations. The final posteriors reflect both direct evidence and cascading indirect effects through the full causal structure.

What to Look For

  • Cross-domain amplification: Four shocks in four domains produce Social Instability at 94% — far higher than any single scenario. The interaction effects dominate.
  • Non-obvious movers: Commodity Shock jumps to 76% even though no commodity evidence was injected. It's hit from Extreme Weather (0.30), Great Power → Energy Weaponization (0.45) → Commodity Shock (0.55), and Regional Conflict (0.40).
  • The compounding problem: A risk matrix would score these four risks independently. The network reveals that their interaction is more dangerous than their sum. This is why causal structure matters.

Try It Yourself

Go back to the graph and try building the Polycrisis manually — click Pandemic, then Extreme Weather, then Great Power, then Sovereign Debt. Watch the posteriors shift incrementally with each added evidence node. The fourth node creates disproportionate effects because it activates pathways that the other three have already primed.

Inference Algorithm

GRIN uses iterative belief propagation with noisy-OR conditional probability tables. The algorithm has three phases:

1

Initialize. All nodes set to their prior (base rate) probabilities. Evidence nodes clamped to 0 or 1.

2

Backward pass (single). BFS from evidence nodes through parent edges, depth-limited to 2 hops. Raises parent beliefs proportional to edge strength, with exponential damping (0.5^depth). This enables diagnostic reasoning: observing a child raises belief in its causes.

3

Forward iteration. Repeat until convergence (or 25 iterations): for each non-evidence node with parents, recompute P(node) using noisy-OR from current parent beliefs. Re-clamp evidence each pass. Converges when max per-node change < 10⁻⁶.
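Putting the three phases together — a compact, self-contained sketch. Data structures and names are ours, and GRIN's real implementation will differ in details (notably the exact backward update rule):

```python
from collections import deque

def infer(leak, parents, evidence, max_iters=25, tol=1e-6, damping=0.5):
    """leak: {node: base rate}. parents: {node: [(parent, q), ...]}.
    evidence: {node: 0.0 or 1.0}. Returns posterior beliefs per node."""
    # Phase 1: initialize every node to its prior, then clamp evidence.
    belief = dict(leak)
    belief.update(evidence)

    # Phase 2: single backward pass — BFS from evidence nodes through
    # parent edges, depth-limited to 2 hops, damped by 0.5 per hop.
    queue = deque((node, 0) for node in evidence)
    while queue:
        node, depth = queue.popleft()
        if depth == 2:
            continue
        for parent, q in parents.get(node, []):
            if parent in evidence:
                continue
            boost = damping ** (depth + 1) * q * belief[node]
            belief[parent] += boost * (1.0 - belief[parent])
            queue.append((parent, depth + 1))

    # Phase 3: forward iteration with noisy-OR until convergence.
    # Evidence nodes are skipped, which keeps them clamped each pass;
    # alphabetical order makes updates deterministic despite cycles.
    for _ in range(max_iters):
        max_change = 0.0
        for node in sorted(parents):
            if node in evidence:
                continue
            fail = 1.0 - leak[node]
            for parent, q in parents[node]:
                fail *= 1.0 - q * belief[parent]
            new = 1.0 - fail
            max_change = max(max_change, abs(new - belief[node]))
            belief[node] = new
        if max_change < tol:
            break
    return belief

# Two-node smoke test: clamping Pandemic=TRUE should reproduce the 73%
# Healthcare Overload figure from the noisy-OR walkthrough.
leak = {"pandemic": 0.08, "healthcare_overload": 0.10}
parents = {"pandemic": [], "healthcare_overload": [("pandemic", 0.70)]}
posterior = infer(leak, parents, {"pandemic": 1.0})
print(round(posterior["healthcare_overload"], 2))   # 0.73
```

On a cyclic graph the forward loop is a damped fixed-point iteration rather than exact inference, which is why convergence takes several passes instead of one topological sweep.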

Graph Topology

The network contains bidirectional edges (e.g., Trust Collapse ↔ Social Instability, Sovereign Debt ↔ Rate Shock). This means it's not a strict DAG — there are 9 reciprocal edge pairs forming feedback loops. The iterative propagation handles these naturally, converging to a stable fixed point. Iteration order for cyclic nodes is alphabetically deterministic.

Node Inventory

36 nodes · 6 domains · 101 directed edges
Public Health (6): Pandemic 8%, Healthcare Overload 10%, Pharma Disruption 6%, AMR Crisis 5%, Mental Health 15%, Bioterrorism 2%
Climate (6): Extreme Weather 25%, Ag Shock 12%, Water Stress 10%, Tipping Point 4%, Climate Migration 8%, Energy Transition 10%
Economic (7): Inflation 12%, Sovereign Debt 6%, FX Crisis 5%, Commodity Shock 10%, Housing 8%, Rate Shock 7%, Recession 10%
Technology (6): Semiconductor 8%, Cyberattack 12%, AI Disruption 15%, Tech Decoupling 10%, Data/Privacy 10%, AI Threats 6%
Geopolitical (6): Great Power 5%, Regional Conflict 15%, Nuclear Risk 2%, Trade War 12%, Energy Weaponization 8%, Alliance Fragmentation 6%
Societal (5): Social Instability 12%, Trust Collapse 10%, Migration Crisis 10%, Democratic Backsliding 10%, Misinformation 15%
Base rates are illustrative annual probabilities calibrated to rough historical frequency. They are not empirically derived — this is a demonstration of methodology, not a prediction.
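For reference, the inventory above as a lookup table — values copied from the text, identifiers matching the node lists in the domain sections:

```python
# Illustrative annual base rates from the node inventory (not empirical).
priors = {
    # Public Health
    "pandemic": 0.08, "healthcare_overload": 0.10, "pharma_disruption": 0.06,
    "amr_crisis": 0.05, "mental_health": 0.15, "bioterrorism": 0.02,
    # Climate
    "extreme_weather": 0.25, "ag_shock": 0.12, "water_stress": 0.10,
    "tipping_point": 0.04, "climate_migration": 0.08, "energy_transition": 0.10,
    # Economic
    "inflation": 0.12, "sovereign_debt": 0.06, "fx_crisis": 0.05,
    "commodity_shock": 0.10, "housing_crisis": 0.08, "rate_shock": 0.07,
    "recession": 0.10,
    # Technology
    "semiconductor": 0.08, "cyberattack": 0.12, "ai_disruption": 0.15,
    "tech_decoupling": 0.10, "data_privacy": 0.10, "ai_threats": 0.06,
    # Geopolitical
    "great_power": 0.05, "regional_conflict": 0.15, "nuclear_risk": 0.02,
    "trade_war": 0.12, "energy_weapon": 0.08, "alliance_frag": 0.06,
    # Societal
    "social_instability": 0.12, "trust_collapse": 0.10, "migration_crisis": 0.10,
    "democratic_backslide": 0.10, "misinformation": 0.15,
}

print(len(priors))                      # 36
print(max(priors, key=priors.get))      # extreme_weather (highest base rate, 25%)
```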

Edge Strength Calibration

Each edge's strength (q value in the noisy-OR equation) represents "if this parent is definitely true, what's the probability it causes this child — beyond the child's base rate?" Strengths range from 0.15 (weak influence, e.g., Semiconductor → Great Power) to 0.70 (strong influence, e.g., Pandemic → Healthcare Overload).

Calibration is qualitative, informed by historical precedent and domain expertise. Example: Pandemic → Recession is set to 0.40 because pandemics reliably cause economic contraction (COVID, 1918 flu) but the magnitude varies. A higher value (0.70) would overstate certainty; a lower value (0.15) would understate the relationship.
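The sensitivity the paragraph describes is easy to make concrete — plugging the three candidate strengths into the noisy-OR formula with Recession's 10% base rate (the helper name is ours):

```python
# How the calibrated q value moves P(Recession | Pandemic = TRUE).
# Single-parent noisy-OR with Recession's base rate of 0.10.

def child_prob(leak, q, p=1.0):
    return 1.0 - (1.0 - leak) * (1.0 - q * p)

for q in (0.15, 0.40, 0.70):
    print(q, round(child_prob(0.10, q), 3))
# strengths 0.15 / 0.40 / 0.70 give roughly 23.5% / 46% / 73%
```

The chosen 0.40 lands between "weak nudge" and "near-certain trigger", which is the qualitative judgment the text defends.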

What This Model Does NOT Do

  • Time dynamics. This is a snapshot model — "given what I know now, what should I believe?" It does not simulate how risks evolve over time. For trajectory questions, you need dynamic simulation models (agent-based, system dynamics, or reinforcement learning).
  • Interventional reasoning. Setting evidence answers "what if I observe this?" not "what if I intervene on this?" Intervention (do-calculus) requires different machinery. This is belief revision, not causal intervention.
  • Precise quantification. The base rates and edge strengths are illustrative. This demonstrates the methodology — how to build a causal reasoning system — not a production risk model. A production version would require empirical calibration and expert elicitation.
01

How This Works

The network above is a Bayesian network — a directed graph where each node represents an uncertain event and each edge represents a causal relationship. The math underneath is surprisingly simple: when you inject evidence (this event happened, or this event definitely didn't), every other node's probability updates through noisy-OR conditional probability tables.

Here's the intuition: each parent node can independently "cause" its child. The probability that the child fires is one minus the probability that every potential cause fails to activate it. This gives us the right structure — multiple causes compound, but redundancy has diminishing returns.

Why causal direction matters. A correlation matrix would tell you that pandemics and recessions are related. But it can't tell you that a pandemic causes recession (through economic shutdown) while a recession rarely causes a pandemic. When you're making decisions — where to invest in resilience, which risks to monitor — the direction of causation is everything.

Why 36 nodes. Enough to capture cross-domain cascades that simpler models miss entirely. Few enough that every pathway is interpretable — you can trace exactly why a given risk shifted. A 200-node model might be marginally more "accurate," but it would be a black box. That accuracy is wasted if the decision-maker can't act on it.

02

Why This Model and Not Another

Risk assessment has no shortage of methods. But most are wrong for this question — not wrong in general, wrong for "what should I believe now?" This is ADM Step 2: match the model to the question.

Agent-Based Simulation

OVER-ENGINEERED

Simulates individual actors over time. Powerful for trajectory questions, but this is a snapshot question — "given what I know now, what should I believe?" ABMs add complexity without adding insight for this task.

Correlation Matrix

MISSES DIRECTION

Shows which risks move together, but can't distinguish cause from effect. Pandemics cause recessions; recessions rarely cause pandemics. For decision-making, direction is everything.

Risk Matrix

TREATS RISKS AS INDEPENDENT

The 5×5 impact-likelihood grid. Each risk gets a dot. No connections, no cascades, no compounding. It's the model equivalent of plugging your ears and humming.

Bayesian Network

RIGHT FIDELITY

Captures causal direction. Updates beliefs from evidence. Reveals cross-domain cascades. Interpretable at 36 nodes. Answers the actual question: "given this evidence, what should I believe?"

This is the same discipline applied across the Lab: don't ask "what's the best model?" Ask "what's the right model for this question?" There, the question was "what should I do next?" and the answer was reinforcement learning with physics-informed world models. Here, the question is "what should I believe now?" and the answer is Bayesian inference over a causal graph.

03

The Commercial Pattern

The math is the same; the domain changes. Bayesian networks with noisy-OR CPTs solve a wide class of "what should I believe?" problems across industries.

Supply Chain Risk

Map supplier dependencies, geopolitical exposure, and logistics bottlenecks. When a port shuts down, instantly see which product lines are at risk and by how much.

Insurance Underwriting

Model correlated catastrophe risk. When hurricane season intensifies, update beliefs about flood claims, property damage, and business interruption simultaneously.

Predictive Maintenance

Sensor readings as evidence nodes, component failure modes as internal nodes. Diagnose root causes and predict failures before they cascade through the system.

Fraud Detection

Transaction patterns, behavioral signals, and account history as evidence. Update beliefs about fraud probability in real time, with interpretable reasoning chains.

Go Deeper

The Intervention Trap — When correlations lie about what to do. Simpson's Paradox is a classic example of why causal structure is the "physics" of decision models — without it, acting on data can make things worse.

The math is the same. The domain changes.
Let's talk about building a causal reasoning system for your domain.

Get in Touch