Predictions are hard, even with lots of data

For those following the US presidential election in November 2016, Nate Silver's FiveThirtyEight site was a pretty reliable resource. Silver had an impressive track record of employing data analytics and data science to predict the outcomes of elections and sporting events. The 72% chance he gave Hillary Clinton of winning was lower than other outlets offered, but reaction to that number in the wake of Donald Trump's victory prompted a follow-up post two months later explaining where the data had gone astray, and how some had erred in their interpretation of his and others' models.

In addition, a recent study found that "Minority Report"-style "precrime" predictions aren't yet delivering as once anticipated. The study, by Julia Dressel and Hany Farid of Dartmouth College, shows that widely used commercial risk-assessment software "is no more accurate or fair than predictions made by people with little or no criminal justice expertise." This algorithmic tool is already being used in some courtroom settings to determine the likelihood of defendants engaging in future criminal acts. Worse yet, the study, which covered more than 7,000 individuals arrested in Broward County, Florida, in 2013 and 2014, found the software's predictions were racially biased. (A summary of the study is also available in the New York Times.)

This development, of course, has implications for the insurance industry, which is increasingly drawing on data analytics algorithms to predict everything from potential weather events to driving habits to fraud to structural viability to customer churn. It calls into question the efficacy of all these tech investments. Are we actually getting more valuable, actionable insight?
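To make that concrete, the sketch below shows, in rough form, the kind of churn model an insurer's analytics team might train. It is illustrative only: the features, the synthetic data, and the choice of logistic regression are assumptions for the example, not a description of any carrier's actual system.

```python
# Illustrative only: a toy churn classifier on synthetic policyholder data.
# Feature names and the data-generating process are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1_000
# Hypothetical features: policy tenure (years), claims filed, last premium increase (%)
X = np.column_stack([
    rng.uniform(0, 20, n),
    rng.poisson(1.0, n),
    rng.uniform(0, 30, n),
])
# Hypothetical label: 1 = the customer left (churned)
logit = -1.0 - 0.10 * X[:, 0] + 0.50 * X[:, 1] + 0.05 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

A model like this can rank policyholders by churn risk, but its predictions are only as good as the data and assumptions behind it.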


Data analytics can deliver many capabilities, and with machine learning now coming into common usage, these systems and algorithms are capable of refining and improving their predictive powers. But there will always be wildcard factors that intervene and throw predictions off. In addition, there is the inherent bias of the developers of algorithms (more often than not white males) that will percolate through the insights delivered.
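One concrete way reviewers look for that kind of bias is to audit a model's error rates group by group. The short sketch below is purely illustrative, using made-up data rather than anything from the Dressel and Farid study; it compares false positive rates across two groups, the same style of check that has been applied to the Broward County risk scores.

```python
# Illustrative only: compare false positive rates across groups for a risk model.
# All arrays below are hypothetical stand-ins for outcomes, model flags and groups.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of truly low-risk cases that the model flagged as high risk."""
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean() if negatives.any() else float("nan")

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])   # actual outcome (1 = reoffended)
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1, 1, 0])   # model's high-risk flag
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = (group == g)
    print(f"Group {g}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.2f}")
```

If the rates diverge sharply between groups, that is a signal for humans in the loop to step in before the model's output drives decisions.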

That’s why the analytics process needs to be a combined effort: human knowledge supported by machine insights. Ultimately, only humans can think strategically, identify opportunities and understand the resources that must be marshaled to act on them.

If anything, the increased reliance on data is creating new categories of professionals. In an article in MIT Sloan Management Review, H. James Wilson, Paul R. Daugherty, and Nicola Morini-Bianzino, all with Accenture, say there will be a need for three classes of professionals who can help AI and analytics systems produce more accurate insights: “trainers,” “explainers,” and “sustainers.” It’s likely that insurance companies are already bringing people with such roles into their organizations.

The trainers, for one, will be tasked with teaching AI systems how they should perform, the authors state. “At one end of the spectrum, trainers help natural-language processors and language translators make fewer errors. At the other end, they teach AI algorithms how to mimic human behaviors,” especially empathy. Imagine having a conversation with Amazon’s Alexa, in which she gave well-thought-out answers instead of canned responses.

The explainers will work closely with management to identify and select the best software for particular tasks. The sustainers will ensure that AI and analytics systems operate ethically and that any unintended consequences are managed.

No matter how much data is available – even if it is oozing out of every corner of the organization – there needs to be a cadre of professionals who make sure AI and analytics systems are on target.
