In data science and risk management, there’s a lesson that tends to surface early and stick for an entire career: it’s better to be roughly right than precisely wrong.
I first encountered this idea while building risk models, and it remains just as relevant today, perhaps even more so, as artificial intelligence dominates the conversation across industries.
AI’s Moment—and Its Limits
AI has been the topic of the year. Google Trends makes that clear: global search interest in AI has consistently exceeded interest in even major global events, including the coronation of King Charles 👑. And unlike most hype cycles, interest in AI remains near all-time highs today.
Tools like ChatGPT are undeniably powerful. I use them regularly. But it’s important to be honest about what they deliver: approximate correctness.
The question isn’t whether AI is useful—it clearly is. The more important question is whether, and when, it will become precisely right.
Lessons from Credit and Default Modeling
Risk professionals have been here before.
Default prediction models, like most AI systems, are trained on historical, backward-looking data. They identify patterns in what has already occurred and extrapolate them forward. That approach works—until it doesn’t.
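As a toy sketch with made-up numbers: a purely backward-looking model learns the historical default rate and projects it forward, which works in a stable regime and fails the moment the regime changes.

```python
# Hypothetical figures for illustration only, not real portfolio data.
history = [0.021, 0.019, 0.022, 0.020, 0.018]  # annual default rates, stable regime
forecast = sum(history) / len(history)          # extrapolate the historical pattern: ~2%

# A structural shift (say, a recession) breaks the extrapolation:
realized = 0.080
print(f"forecast {forecast:.1%} vs realized {realized:.1%}")
```

The model was "precisely" calibrated to the past and badly wrong about the future, which is exactly the failure mode the quote warns against.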
One of the persistent challenges in both consumer and commercial credit modeling is data quality. Modern datasets are massive, but they’re also noisy. Signal strength is often diluted by inconsistent reporting, structural shifts, behavioral changes, and exogenous shocks.
This is why experienced risk managers and data scientists don’t rely solely on traditional variables. Increasingly, the focus is on alternative data sources, better feature engineering, and more adaptive modeling frameworks that acknowledge uncertainty rather than overfitting precision.
Applying the Same Discipline to AI
AI models face the same constraints. They are only as good as the data they’re trained on and the assumptions embedded in their design. Precision can be seductive—but false precision is dangerous, especially in risk-sensitive environments.
Being “roughly right” often means:
- Accepting probabilistic outputs rather than deterministic answers
- Designing systems that adapt as data changes
- Prioritizing explainability and robustness over cosmetic accuracy
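The first point can be made concrete with a minimal sketch. Assume a scorecard-style PD model with a logistic link (the coefficients below are invented, not fitted): instead of thresholding a score into a hard approve/decline answer, report the probability itself, plus a rough band from perturbing an uncertain parameter.

```python
import math

def pd_estimate(score, coef=-0.08, intercept=2.0):
    """Map a hypothetical credit score to a probability of default
    via a logistic link (illustrative coefficients, not a fitted model)."""
    z = intercept + coef * score
    return 1.0 / (1.0 + math.exp(-z))

score = 40
p = pd_estimate(score)

# Crude sensitivity band: perturb the coefficient to acknowledge
# parameter uncertainty rather than reporting a single point estimate.
band = (pd_estimate(score, coef=-0.09), pd_estimate(score, coef=-0.07))
print(f"PD {p:.1%}, plausible range {min(band):.1%} to {max(band):.1%}")
```

A probability with an honest range is "roughly right"; a confident binary label built on the same shaky coefficients is the kind of precision that misleads.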
This mindset has served credit and risk modeling well for decades, and it applies just as strongly to modern AI tools.
Building What’s Next
I’m excited to be building next-generation probability-of-default (PD) models, AI-driven tools, and financial infrastructure alongside other risk professionals who understand these trade-offs.
When applied thoughtfully, AI doesn’t just improve models—it improves client outcomes. Better risk assessment leads to better decisions, more tailored experiences, and ultimately stronger trust and brand loyalty.
AI doesn’t need to be perfectly right to be valuable.
It just needs to be honest about uncertainty—and designed by people who understand risk.