The Right Model Isn't What You Think
I've spent twenty-plus years building models that people use to make high-stakes decisions. Missile trajectories. Satellite orbits. Architecture-level trade studies where the wrong answer means the wrong system gets funded for the next decade. Somewhere in those years — through MITRE, through seven years running my own defense consulting firm, through a career that started in math classrooms and ended up in space systems — I kept running into the same failure mode.
Teams would build the most detailed model they could afford, then discover it didn't actually help anyone decide anything.
The Fidelity Gap
Here's the pattern I see everywhere. An organization has a hard problem — grid reliability, contamination risk, disease prediction, infrastructure planning. They hire smart people. Those smart people build an impressive model: high resolution, enormous parameter count, trained on everything available. The model runs. It produces output. And then someone in the room asks the only question that matters:
"So what should we do?"
Silence. Because the model was built to be accurate, not to answer the question that needed answering. The fidelity was wrong — not too low, not too high, just aimed at the wrong thing.
In the defense simulation world, you don't get to brief a general on a kill-chain analysis without explaining exactly which parts of your model you trust, which parts you don't, and what the uncertainty means for the decision. That discipline of letting the question set the bar for the model is what I call Analysis Driven Modeling (ADM). And it applies far beyond defense.
What I've Been Learning
Recently I've been doing something I should have done years ago: aiming ADM at real problems, with real data, across domains I care about. No toy demonstrations or synthetic datasets — real analyses where the answer might surprise me, and where I have to be honest when it does.
The studies on this site are the result. Each one takes a domain (energy, environment, healthcare, and more as they develop) and attacks it from every analytical angle I can think of: multiple models at different fidelity levels, with honest accounting of what works and what doesn't.
The findings have been more interesting than I expected. A few patterns keep showing up:
- The answer changes at every fidelity level. A screening model says one thing. A stochastic Monte Carlo simulation says another. An optimizer says a third. The question is which answer you actually need for your specific decision; the sketch after this list shows the idea in miniature.
- Sometimes the simple model wins. Published domain knowledge — the stuff in textbooks — occasionally outperforms machine learning. When it does, that's the right answer, not a failure of the ML.
- Sometimes more data adds nothing. I've run studies where adding entire new data categories — socioeconomic variables, behavioral data — improved predictions by less than a rounding error. Knowing when to stop adding inputs is as important as knowing which inputs to add.
- The most interesting finding is often "don't use AI here." If the analysis shows that a domain model is sufficient, saying so is more valuable than forcing a machine learning solution where one isn't needed.
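To make that first pattern concrete, here is a minimal sketch in Python. It is a toy, not something pulled from the studies: every number in it (average demand, day-to-day variability, installed capacity, a 5% reliability target) is invented for illustration. It asks one hypothetical question, "is this capacity enough?", at three fidelity levels and gets three different decision-relevant answers.

```python
import random

# All numbers here are invented for illustration: average peak demand of
# 100 MW, 8 MW of day-to-day variability, 105 MW of firm capacity, and a
# 5% shortfall target. None of this comes from the studies on this site.
MEAN_DEMAND_MW = 100.0
DEMAND_STDEV_MW = 8.0
FIRM_CAPACITY_MW = 105.0

# Fidelity level 1: a screening model. Compare capacity to average demand.
screening_margin = FIRM_CAPACITY_MW - MEAN_DEMAND_MW
print(f"Screening: margin of {screening_margin:.1f} MW, so capacity looks sufficient")

# Fidelity level 2: a stochastic Monte Carlo. Sample demand variability and
# estimate how often demand actually exceeds capacity.
random.seed(0)
TRIALS = 100_000
demands = [random.gauss(MEAN_DEMAND_MW, DEMAND_STDEV_MW) for _ in range(TRIALS)]
shortfall_prob = sum(d > FIRM_CAPACITY_MW for d in demands) / TRIALS
print(f"Monte Carlo: demand exceeds capacity in {shortfall_prob:.1%} of trials")

# Fidelity level 3: a crude optimizer. Search for the smallest capacity that
# keeps the shortfall probability under the 5% target.
TARGET_SHORTFALL = 0.05
capacity = FIRM_CAPACITY_MW
while sum(d > capacity for d in demands) / TRIALS > TARGET_SHORTFALL:
    capacity += 0.5
print(f"Optimizer: roughly {capacity:.1f} MW needed for a 5% shortfall target")
```

Three different answers from the same toy system, and none of them is wrong; they answer different questions, and the decision you face determines which one you actually need.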
Why I'm Writing This
This blog is where I'll think out loud about what I'm finding. Not polished case studies — those are in the studies section. This is the messier version: what surprised me, where my assumptions were wrong, what the fidelity question looks like when you actually apply it to energy grids and groundwater contamination and human health.
I'll also write about the methodology itself — how ADM works in practice, the patterns and anti-patterns, the places where it breaks down. And about what it looks like to use AI agents to do work that used to require a full team, while using ADM to keep all of it pointed at the right question.
An Invitation
If you've got a hard problem where the model matters — where getting the fidelity wrong means wasting time, money, or trust — I'd like to hear about it. I'm not selling a platform. I'm a modeler who's spent a career learning that the right model isn't the most accurate one. It's the one that answers the question you actually need to ask.
michael@rightfidelity.ai — I read every message.