Today's Jagged AI Is Good Enough to Fix Our Worst Decisions
Here is the thought that keeps nagging at me: even if AI models stopped improving tomorrow, they could still do enormous good in the world. Not because they are brilliant. Because we are so reliably bad.
Daniel Kahneman spent decades documenting how poorly humans make decisions under uncertainty. The research is damning. Parole boards grant favourable rulings far less often just before lunch than just after it. Doctors anchor on the first diagnosis they consider and under-weight contradictory evidence. Recruiters favour candidates who remind them of themselves. These are not edge cases. They are the baseline.
Kahneman's framework splits cognition into System 1 (fast, intuitive, pattern-matching) and System 2 (slow, deliberate, analytical). Most professional decision-making claims to be System 2 but operates on System 1. A tired parole judge running on instinct is not weighing evidence. A GP with seven minutes per appointment is not deliberating. The system forces even well-intentioned professionals into cognitive shortcuts, and Kahneman showed those shortcuts are systematically biased.
Now consider what current AI models can do. They do not get hungry. They do not anchor on the first piece of information. They do not have a bad Monday. They process the same case identically at 9am and at 4pm. They are not perfect, and they carry their own biases inherited from training data and human labelling. But the research is increasingly clear: even biased algorithms are often less biased and less noisy than the humans they assist.
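Kahneman's later work with Sibony and Sunstein (Noise) makes "less biased and less noisy" precise: a judgment's total error decomposes into bias (the average miss) and noise (the scatter across judges or occasions), with MSE = bias² + noise². Here is a toy sketch of that decomposition; the sentencing numbers are made up for illustration, not drawn from any study:

```python
import statistics

def error_profile(judgments, true_value):
    """Decompose mean squared error into bias^2 + noise^2.

    Kahneman's error equation: MSE = bias^2 + noise^2, where bias
    is the average miss and noise is the spread of judgments around
    their own mean (population standard deviation, so the identity
    holds exactly).
    """
    bias = statistics.mean(judgments) - true_value
    noise = statistics.pstdev(judgments)
    mse = statistics.mean([(j - true_value) ** 2 for j in judgments])
    return bias, noise, mse

# Hypothetical sentencing judgments (months) for the same case;
# suppose 24 months is the defensible answer.
human_judges = [30, 18, 36, 22, 40, 16, 33, 25]  # scattered: noisy
model = [27] * 8                                  # biased but consistent

for label, judgments in (("humans", human_judges), ("model", model)):
    bias, noise, mse = error_profile(judgments, true_value=24)
    print(f"{label}: bias={bias:+.1f}, noise={noise:.1f}, MSE={mse:.1f}")
```

On these made-up numbers the model is nearly as biased as the human panel, yet its total error is roughly an order of magnitude lower, purely because it is consistent. Noise is often the bigger half of the problem, and consistency alone fixes it.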
A 2024 study (roughly a hundred LLM-years ago) found that GPT-4's diagnostic accuracy surpassed both individual doctors and doctors using AI as a support tool. Physician use of AI tools rose from 38% to 66% between 2023 and 2024. By January 2026, the FDA had authorised over 1,300 AI-enabled medical devices. In law, frontier models are already winning or tying against human lawyers 70% of the time in head-to-head evaluations.
I wrote previously about the double standard we apply to AI versus humans. We accept roughly 39,000 road deaths a year in the US from human drivers but balk at a single autonomous-vehicle incident. We demand perfection from algorithms while tolerating systematic human failure. Kahneman would recognise this instantly: it is loss aversion and availability bias, applied to technology adoption.
The professions most resistant to AI disruption are often the ones that would benefit most. Law is a cognitive discipline protected by guild rules. Medicine operates under time pressure that forces System 1 thinking on decisions that deserve System 2. Both have vast unmet demand: legal deserts where people cannot afford representation, healthcare systems where seven-minute appointments are the norm.
The Jevons Paradox argument is compelling here: when something becomes cheaper to produce, total consumption of it tends to rise rather than fall. If AI makes legal advice cheaper and faster, demand expands to serve people currently priced out. If AI handles routine diagnostics, doctors can spend time on complex cases that genuinely need human judgement and empathy. The question is not whether AI replaces professionals. It is whether it frees them to do the work that actually requires being human.
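To see why cheaper advice can mean more total legal work rather than less, run the arithmetic on a constant-elasticity demand curve. Everything below is hypothetical: the prices, the elasticity of 1.5, and the scale constant are illustrative, not estimates for any real market:

```python
# A toy Jevons-style calculation with hypothetical numbers.
# Constant-elasticity demand: Q = k * P**(-e). When the price
# elasticity e exceeds 1, a price drop raises both the hours
# consumed and the total spend on the service.

def hours_demanded(price, k=1e9, elasticity=1.5):
    """Hours of legal advice demanded per year at a given hourly price."""
    return k * price ** -elasticity

for price in (300, 150, 50):  # $/hour, falling as AI cuts costs
    hours = hours_demanded(price)
    print(f"${price}/hr -> {hours:,.0f} hours, ${hours * price:,.0f} total spend")
```

With elasticity above 1, halving the price multiplies hours consumed by 2^1.5 ≈ 2.8: the market grows faster than the price falls. That is the Jevons mechanism, and it is why cheaper legal advice plausibly means more lawyering overall, not less.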
The hardest problem remains a psychological one. Kahneman showed that humans are terrible at recognising their own biases. Professionals who have trained for years resist the idea that an algorithm might outperform them on routine decisions. This is not arrogance so much as a deeply human inability to see our own cognitive blind spots.
The practical takeaway is this: don't let perfect be the enemy of good. Current models, with all their jagged edges, can already reduce the noise and bias in high-stakes decisions. The benchmark should not be whether AI is flawless but whether it is better than the status quo. And the status quo, as Kahneman spent a lifetime proving, is worse than most of us want to admit.