McKinsey research found today's technology could, in theory, automate roughly half of the activities people are paid to do, yet fewer than 5 percent of occupations could be automated entirely.
At the same time, AI is already everywhere. Survey after survey finds that the large majority of organizations are experimenting with it somewhere in the business.
If you zoom out from the noise, you get a simple picture:
- A big slice of our work is automatable
- Very little of it is fully automatable
- Almost everyone is experimenting with AI
- Most large transformation programs still miss the mark
For insurance, that should be a clue. The winning pattern is unlikely to be replacing underwriters; it is far more likely to be pairing them with machines.
Three types of underwriting decisions
When I look at underwriting workflows across carriers and MGAs, I see three broad categories of work:
- Mechanical work: This is the copy-paste, the sorting of inboxes, the checking that three mandatory documents are actually present.
- Contextual judgment: Understanding why a contractor's loss pattern changed after a management shift. Knowing how a regional labor market affects frequency. Reading between the lines of a broker's email.
- Ethical and relational decisions: Some calls are not just economic. Declining a long-time partner's account because the risk truly sits outside appetite. Deciding how to respond after a serious claim. Balancing growth and fairness in a new program.
If we try to automate categories two and three away, we are not being bold. We are being reckless.
Lessons from autopilot, streaming and Moneyball
Other industries have been here before us.
Commercial pilots have relied on autopilot systems for decades. On a typical long-haul flight, automation handles a large proportion of the flying time. Yet nobody suggests removing pilots from the cockpit. The point of autopilot is not to replace the pilot. It is to free the pilot's attention for the rare and the unexpected.
Streaming services use recommendation engines to surface the next show you might binge. They do not decide what stories get made in the first place. Human writers still create the worlds that algorithms recommend.
In baseball, the "Moneyball" revolution did not replace scouts with spreadsheets. It changed which questions scouts asked and where they spent their time. Data narrowed the funnel. Humans still made the call.
Underwriting is closer to those examples than to a factory robot. We are dealing with human behavior, regulation, evolving perils and incomplete data. A well-designed AI system here should feel more like a co-pilot or a research assistant than an invisible hand that silently approves or declines your book.
Designing centaur underwriting
So what does that look like in practical terms for our industry?
A few patterns keep showing up in programs that actually work:
- Automate the "setup," not the soul: Use AI to handle intake, deduplication, document classification, data extraction and basic guideline checks.
- Make the model argue its case: Underwriters are rightly skeptical of black boxes. If a system flags an account as high risk or high priority, it should also show why. That can be as simple as "loss ratio trend, OSHA violations and social reviews" highlighted with links back to the source, not just a score on a dial.
- Keep humans in the escalation path: When the model is uncertain, when guidelines conflict or when the decision has reputational impact, the system should route to a human by design. We intuitively accept this in medicine and aviation. Insurance is not so different. There are real people and communities behind these policies.
- Let guidelines evolve like software: In many organizations, underwriting guidelines live in PDFs that get updated once a year. A centaur model treats guidelines almost like code. You can test changes on historical books, see how they would have affected hit ratio and loss ratio, and then push updates into the workflow. Humans still decide the principles. The system handles the execution.
- Measure learning, not just lift: Raw productivity gains are attractive. But a quiet benefit of human-machine collaboration is that your underwriters get better faster. When every decision is paired with model explanations and eventual outcomes, you are effectively running a continuous learning loop for your people as well as your models.
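To make the escalation-path idea concrete, here is a minimal sketch of routing logic. The `Assessment` type, field names and thresholds are illustrative assumptions, not any carrier's real schema; the point is only that "route to a human" is an explicit, designed outcome rather than an afterthought.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    score: float          # model risk score, 0 (benign) to 1 (severe)
    confidence: float     # the model's own confidence in that score
    guideline_hits: list  # names of guideline rules that fired
    reputational: bool    # e.g. a long-standing partner or sensitive class

def route(a: Assessment) -> str:
    """Decide whether a submission can be auto-handled or must go to a human."""
    if a.reputational:
        return "human"        # relational and ethical calls are never automated
    if a.confidence < 0.7:
        return "human"        # an uncertain model escalates by design
    if len(a.guideline_hits) > 1:
        return "human"        # multiple or conflicting rules escalate too
    return "auto-decline" if a.score > 0.9 else "auto-proceed"
```

Note that the machine never quietly declines a sensitive account: every path that touches reputation, uncertainty or conflicting guidelines ends at a person.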
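The "guidelines as code" pattern can also be sketched in a few lines: express a rule as a function, replay it over a historical book, and compare how the old and new versions would have behaved. The payroll threshold and the toy book below are invented for illustration.

```python
def backtest(guideline, historical_book):
    """Replay a guideline function over historical submissions and report the
    share it would have accepted, a crude proxy for hit-ratio impact."""
    accepted = [s for s in historical_book if guideline(s)]
    return len(accepted) / len(historical_book)

# Hypothetical guideline change: raise the payroll ceiling for contractors
old_rule = lambda s: s["payroll"] <= 5_000_000
new_rule = lambda s: s["payroll"] <= 7_500_000

book = [{"payroll": p} for p in (1_000_000, 3_000_000, 6_000_000, 8_000_000)]
# backtest(old_rule, book) -> 0.5, backtest(new_rule, book) -> 0.75
```

In a real program the comparison would cover loss ratio as well as hit ratio, but the workflow is the same: humans decide the principle, the system measures and executes it.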
A quiet mindset shift
The deepest change here is not technical. It is cultural.
It is the shift from asking "Can we automate this job?" to "Which parts of this job are uniquely human, and how do we protect and amplify those?"
It is giving your underwriters permission to stay curious, to question model output, to send work back to the machine when it is doing something humans should not have to do.
It is accepting that progress will feel less like flipping a switch and more like building a long term partnership between people and tools.
If chess, aviation and entertainment are any indication, the future belongs to centaurs. In insurance, that means underwriters who are more informed, less buried in manual work and still fully accountable for the calls that matter.
My hope is that in a few years, the question will not be "When will underwriting be fully automated?" but "How do we build the best human-machine teams for the risks we care about most?"