Actuaries ponder using AI in their work, but worry about the risks

Actuaries have long relied on mathematics and statistics, but as advanced technologies such as artificial intelligence become more widely available, they are navigating how to use them to analyze risk.

Digital Insurance spoke with three actuaries, all members of the Society of Actuaries, about advances in actuarial practice, how those advances have changed their jobs, and the challenges of using predictive models, including explainability.

Hezhong (Mark) Ma, vice president and managing actuary at Reinsurance Group of America, Incorporated (RGA), shared the skills he thinks actuaries need most now.

"Actuaries are analytical problem solvers," Ma said in an emailed response. "Beyond model-building skills, those who can understand the business context and communicate the results back will be highly marketable. The hardest thing to develop is our mindset. We have to learn, unlearn and relearn."

Some actuaries may have to change the way they do their jobs to take advantage of new forms of artificial intelligence, he said.

"Seasoned actuaries, like me, built our careers before AI was widespread," Ma said. "It became easy to self-censor our own creativity due to limitations of data, calculation power, or technologies. What was believed to be impossible might need a revisit. We should continue pushing the frontier, in designing new products, entering new distribution channels, and establishing new processes, to allow ourselves to be surprised."

Ma chairs the Society of Actuaries Research Institute's Actuarial Innovation and Technology Strategic Research Committee. The committee recently sponsored a research project by E&Y, Predictive Analytics and Machine Learning – Practical Applications for Actuarial Modeling.

The report highlights that life and annuity products are traditionally modeled with an approach that takes a great deal of human effort. 

"The most common runtime challenge faced by actuaries is when 'inner-loop' stochastic calculations are performed to calculate reserves or other metrics," the report stated. "Actuaries refer to this situation as 'nested stochastic.' For example, life insurers will perform extensive analysis to evaluate the solvency of the insurance company. This analysis requires the projection of the total asset requirement along with assets and hedge derivatives to evaluate the likelihood of insolvency. The runtime challenge facing actuaries has been exacerbated by the development of increasingly complex insurance products and changes in reporting frameworks. Actuaries can increase computing resources to address these requirements." 

Cost can be a barrier to implementing AI, but cloud computing could make the extra computing resources needed more affordable.
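
To make the report's "nested stochastic" point concrete, the sketch below projects an outer set of real-world scenarios and re-values a simple guarantee with an inner set of scenarios at every projection date. It is a minimal illustration, not the report's methodology: the scenario counts, the lognormal asset assumptions and the guarantee payoff are all invented, and they are kept deliberately small so the example runs quickly.

```python
import numpy as np

# Hypothetical "nested stochastic" projection: an outer loop of real-world
# scenarios, with an inner valuation loop at every projection date to
# estimate the reserve for a simple maturity guarantee.
rng = np.random.default_rng(0)

N_OUTER = 200        # real-world scenarios (kept small for the example)
N_INNER = 200        # valuation scenarios per date (kept small as well)
YEARS = 30           # projection horizon in years
GUARANTEE = 100.0    # guaranteed account value at maturity (invented)

def inner_valuation(account_value, years_left):
    """Estimate the guarantee reserve by simulating paths to maturity."""
    if years_left == 0:
        return max(GUARANTEE - account_value, 0.0)
    # Simple lognormal account-value model -- an assumption for the sketch.
    growth = rng.normal(0.02, 0.15, size=(N_INNER, years_left)).sum(axis=1)
    terminal = account_value * np.exp(growth)
    shortfall = np.maximum(GUARANTEE - terminal, 0.0)
    return shortfall.mean() * np.exp(-0.02 * years_left)  # crude discounting

total_inner_paths = 0
reserves = np.zeros((N_OUTER, YEARS))
for i in range(N_OUTER):
    account = 100.0
    for t in range(YEARS):
        # Evolve the account one year under a real-world assumption.
        account *= np.exp(rng.normal(0.05, 0.18))
        years_left = YEARS - t - 1
        reserves[i, t] = inner_valuation(account, years_left)
        if years_left > 0:
            total_inner_paths += N_INNER

print(f"inner paths simulated: {total_inner_paths:,}")
print(f"average projected reserve: {reserves.mean():.2f}")
```

Even at these deliberately small counts, the inner loop already simulates more than a million paths; realistic scenario sets and monthly time steps push that several orders of magnitude higher, which is the runtime pressure that extra computing resources are meant to absorb.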

Michael Hoyer, principal and director of product development and analytics at Milliman IntelliScript, a global actuarial consulting firm, said that computing power is key to manipulating large amounts of data and that predictive models can uncover more patterns within it.

Hoyer explained that actuaries can, for example, conduct pseudo-mortality studies using prescription and medical data.

"You might find patterns within that data that uncover insights that previously weren't observable outside a real-world environment," Hoyer said. "When you apply these techniques to vast amounts of data, you can learn things that may counter conventional wisdom and find pockets of risk that actually look pretty good from a life insurance perspective, when in past paradigms, you might find that those combinations of risk factors caused discomfort."

Blake Hill, vice president of North America sales at Dacadoo, a data analytics company, said that while actuarial work has always centered on understanding data, what has evolved is that models now have more advanced capabilities and can find patterns within the data.

"In my current company, we have built a machine learning model that stitches together data from a variety of research studies to help understand health risks of an individual, for insurance companies to use as they're gathering data on customers," Hill said. "[Insurers] can use our models to basically evaluate that data in terms of estimates of risks and mortalities and morbidities."

According to Ma, the most challenging issue in applying AI to analysis and decision-making is ensuring the ethical and responsible use of the technology.

"Historically, eliminating biases has been a challenge for the life insurance industry, and the availability of data and advanced algorithms are changing the landscape of self-selection. Many companies realize that changes are due and are working to gain clarity around how to test and mitigate unintended biases," Ma said. "RGA, along with several other companies and consulting firms, have published our own guidelines in ethical use of data and AI. Improving model governance and risk management could be a good start."

For example, transparency of the model and model building process is widely recognized as a good practice. 

"By empowering more people with the under-the-hood details, especially by engaging the subject matter experts in the area where a model is designed to serve, it is easier, though not a guarantee, for companies to catch unintended consequences of using AI," he said.

Hoyer added that explaining model outcomes is doable but can be a challenge.

"You can explain it from a math perspective," he said. "But what's challenging at times is having that math aligned with preconceived biases from historical research, or clinical expertise or underwriting expertise. In our case for the life insurance application, with math, we can justify exactly how the model is working."

But translating the model's logic into terms a human customer can understand can be hard.

"It's really a disconnect between how the model got to the answer and whether the person you're explaining it to accepts that as a reasonable explanation," Hoyer said. "So translating that is the challenge. But I think there's plenty of tools out there to do this and deconstruct how that model got to that prediction. It can inform how to improve the model in the future. If you consistently see something that doesn't align with conventional wisdom, you have to be able to at least come up with an explanation for why that's happening within the model."

Hill added that more complex models raise additional concerns.

"One of the things we do as actuaries is to understand the risk of using these models, whether it's reputation risk, misrepresentation, right through to legal risks that could be a part of that process," Hill said. "Some models can be traced or fully understood. … There are other types, typically like neural networks that are harder to dig into, which creates more risk. That's something actuaries have to balance and make sure that everybody within the company is aware of. … As we've built these more complex models it's harder to explain, but at the end of the day whether it's a business user or the end customer being able to explain is a really important part of this process."

Hoyer said that underwriting driven by predictive models remains the exception; most large insurance companies still rely on traditional underwriting techniques.

"I would still say that's the minority of the way that insurance applications are underwritten today," Hoyer said. "We hope that changes. I think that there's a lot of benefits to it. But I think that there's a misconception in the market, particularly from what we hear about that models are taking over."
