Insurers ponder ethical concerns with AI amid consumer trepidation

Photo: Steve Satterfield, vice president of privacy and public policy at Facebook Inc., speaks via videoconference during a Senate Judiciary Subcommittee hearing in Washington, D.C., on Tuesday, Sept. 21, 2021. The hearing was titled “Big Data, Big Questions: Implications for Competition and Consumers.” Photographer: Ting Shen/Bloomberg

In its report “Insurance 2030 – The Impact of AI on the Future of Insurance,” McKinsey & Co. asserts that by the end of this decade, insurance underwriting will not look the way it has for the past two centuries. Applicants have traditionally waited hours, days, or even weeks, depending on the complexity of the product, for a human underwriter to consult actuarial tables and other sources. Now the process will likely take just a few seconds, driven by machine learning and deep learning models that draw on internal and external data, as artificial intelligence solutions reach areas previously limited to rules-based engines.

Of course, insurers and insurtechs are already using AI across lines of business, in areas like claims processing and distribution. However, challenges loom as the technology becomes increasingly involved in modeling and pricing. A major concern for consumers and regulators is the potential for structural biases to be built into AI. Can machines be adjusted to reflect equitable practices, or are they doomed to internalize – and build on – the conscious or unconscious biases of their programmers?

Insurers have spent the past several years ramping up diversity and equity initiatives not just within their workplaces, but in their products and services as well.  

For example, the American Property Casualty Insurance Association says in its “Commitment to Social Equity” that “The events of 2020 triggered renewed dialogue about social justice and racial equity. Now America faces a ‘hinge of history’ moment with an imperative to work together to create a more inclusive, cohesive society.” In addition to workforce reforms the association started in 2015, it says it’s also looking at “how the industry can strengthen partnerships with community leaders to enhance outreach to minority and underserved consumers, and to address cost drivers that impact insurance costs.”

Those statements are echoed throughout the insurance industry, and they have left digital leaders searching for a middle ground as they work to implement technologies like AI in their workflows.

“The vast amounts of data and ever expanding computing power is accelerating the use of AI within the insurance industry. And while this tool can greatly aid businesses across the sector, it also raises new challenges to be addressed, including consumer privacy and safeguards to protect against unintended discrimination that may be built into algorithms,” Jon Godfread, North Dakota Insurance Commissioner and chair of the NAIC’s AI Working Group, said in an August 2020 statement announcing principles for the use of AI in insurance. The guidelines call on insurers to be fair and ethical and to respect the rule of law in implementing trustworthy solutions.

In conversations with Digital Insurance, experts across insurtech reflected on the contours of the debates around corporate governance and ethics, and how those interact with AI initiatives. Some say that “bias” is an inaccurate term for the problem: AI engines for insurance underwriting have to make value judgments in order to price accurately. What they can’t do is make those judgments based on immutable characteristics like race, says Eric Sibony, chief product and science officer and co-founder of Shift Technology, an insurtech that built an AI fraud detection system.

“We need the algorithms to be biased, otherwise it would mean everything is the same. The algorithm is discriminating, [which] is a form of bias. What we don’t want is a bias related to personal characteristics,” Sibony says.
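
In practice, the distinction Sibony draws can be checked after the fact: train a model without protected attributes, then compare its outputs across those same attributes to see whether other features are acting as proxies for them. The sketch below is a minimal, hypothetical illustration of that audit step; it is not Shift Technology’s system, and every field name and number in it is invented.

```python
# Hypothetical proxy-bias audit: the model was trained only on risk
# features, never on the protected attribute. We compare outcomes across
# that attribute after scoring. All data here is invented.
import pandas as pd

scored = pd.DataFrame({
    "protected_group": ["A", "A", "A", "B", "B", "B"],
    "quoted_premium":  [820, 790, 845, 910, 960, 905],
})

# Average quoted premium per group the model never saw during training.
by_group = scored.groupby("protected_group")["quoted_premium"].mean()
disparity = by_group.max() / by_group.min()

print(by_group)
print(f"Premium disparity ratio: {disparity:.2f}")
# A ratio well above 1.0 suggests risk features may be acting as proxies
# for the protected attribute, warranting deeper review of the model.
```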

Anthony Habayeb, CEO and co-founder of Monitaur, an AI governance software platform, says that while “intelligence” is in the name, AI is in danger of being overly anthropomorphized – that is, treated as a conscious human whose behavior cannot be changed. It’s not too late to alter the trajectory of its implementation, he says.

“Bias is a human problem, the context is we need to recognize that AI is another form of a system and a system is a product of people, process and tech,” says Habayeb. “AI cannot be the problem. The idea of ethical principles in AI should be an extension of [corporate ethics].”

Amaresh Tripathy, senior vice president and global supply analytics leader at Genpact, an IT and business services firm, says there is a philosophical layer to establishing guidelines and having conversations about ethics.

“There are a few places where those conversations are being forced. Banking for instance, you see a lot of it happening because of regulations,” says Tripathy, adding that healthcare and other financial services industries are also having ethical conversations. “Beyond that, in other industries, they’re at a level where people are learning about it rather than doing it.”

Tripathy suggests starting with questions such as: What is fairness? What is equity? What is the responsibility within that? What is the role of the organization or company in society?

“I think it goes back to the values of companies and it’s a reflection on the vision and mission statement,” says Tripathy. “Who is the owner of ethical AI in an organization? Raise that question.”

There is a point where diversity and equity concerns in AI development coincide with similar efforts elsewhere in the insurance industry. At a time when insurers are looking to recruit the next generation of digital staff, Habayeb says that having a diverse group of programmers will be essential to preventing unconscious biases from creeping into algorithms.

“Tech and software isn’t the most diverse ecosystem,” Habayeb adds. “I’m a white male that is building a software company, there is a privilege… Are we walking the walk? It is not easy, I don’t always know if I’m doing as well as I can but I want to build a company that has a positive impact and we’re honest about the values.”

How it’s working
Lemonade, an AI-focused insurtech, has put some of these principles into practice. The company has engaged Tulsee Doshi, head of product for responsible AI at Google, as its AI ethics and fairness advisor.

Doshi tells Digital Insurance in an email that it is most critical for insurtechs to understand the history and social context of insurance as it connects to systemic discrimination.

“Insurance has been a critical part of economic infrastructure for centuries, and it is based on other layers of critical infrastructure–housing, transportation, etc. that have historically worked differently for and marginalized certain communities,” Doshi said. “Building this understanding is critical to considering and addressing it when building and designing products.”

Doshi said that she partnered with Lemonade because the company is being intentional about responsible AI, and that there are conversations about when to use AI, how to measure and improve fairness, and how to ensure humans are included. Those conversations come in the context of a company that faced a class action lawsuit for allegedly violating Illinois’ biometric privacy law, after it tweeted last year about how its AI analyzes customers’ videos for fraud. Lemonade recently settled the suit for $4 million, after swiftly deleting the Twitter posts, which it termed “awful.”

The insurtech also recently released a podcast, Benevolent Bots, which focuses on ethical AI and is hosted by Doshi and Lemonade CEO Daniel Schreiber. Schreiber said in an email that ideally, rather than contributing to bias, AI can help solve some current-day bias concerns related to proxies for immutable characteristics, such as credit scores.

“Some feel that more data will only exacerbate a problem; however, in insurance I believe the opposite is true,” Schreiber said, adding that the company has been advocating for the use of Uniform Loss Ratio – where instead of pooling premiums, big data and AI are used to charge a person an individualized rate based on their specific risk.
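
To make the arithmetic concrete: a loss ratio is expected losses divided by premium, so a uniform loss ratio implies that each policyholder’s premium is their own model-estimated expected loss divided by one shared target ratio. The sketch below is a hypothetical illustration with invented numbers, not Lemonade’s actual pricing model.

```python
# Hypothetical uniform loss ratio pricing: every policyholder's premium is
# set so expected_loss / premium equals the same target for everyone.
TARGET_LOSS_RATIO = 0.70  # assumed target: $0.70 of expected loss per premium dollar

# Model-estimated expected annual losses (invented figures).
expected_losses = {"alice": 350.0, "bob": 560.0, "carol": 140.0}

premiums = {name: loss / TARGET_LOSS_RATIO for name, loss in expected_losses.items()}
for name, premium in premiums.items():
    print(f"{name}: expected loss ${expected_losses[name]:.0f} -> premium ${premium:.2f}")

# Under pooled pricing all three would pay the same ~$500 (the average
# expected loss of $350 divided by the 0.70 target); here each pays in
# proportion to their own estimated risk instead.
```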

Schreiber suggests that the first step to these conversations is for insurers to establish company values.

“Data can help immensely speed up processes, but in certain instances should still be viewed through a lens of human values that a company is aligned on,” he said.

In addition, Munich Re recently announced CertAI, a new AI validation service that provides proof of an AI system’s trustworthiness.

Dr. Oliver Maghun, Munich Re senior project manager of artificial intelligence and co-founder of CertAI, said CertAI assesses trustworthy AI along six dimensions: robustness, transparency, security and safety, fairness, autonomy and control, and privacy.

“A trustworthy AI system is developed, deployed, operated and monitored in a way that [at] any time the relevant trustworthy dimensions are fulfilled,” Maghun said.

Privacy and cybersecurity concerns are both potential challenges to further AI implementation within the industry, but insurers are moving forward with the technology with those concerns in mind. 

“There is no replacement for humans in the loop,” Lemonade’s Doshi concludes. “Evaluating fairness in insurance is particularly complex because insurance is in the business of predicting risk–that risk may or may not come to bear, and so there isn’t common ground truth. As a result, it is important to evaluate algorithms in insurance in multiple different ways, across time.”
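
One way to read that advice in operational terms is as ongoing monitoring: recompute more than one fairness measure for each reporting period rather than running a single one-time check. The following sketch is a hypothetical illustration of such a loop with invented data; it is not Lemonade’s monitoring stack.

```python
# Hypothetical fairness monitoring: two metrics, recomputed per quarter.
# All decision data below is invented.
import pandas as pd

decisions = pd.DataFrame({
    "quarter":  ["2022Q1"] * 4 + ["2022Q2"] * 4,
    "group":    ["A", "A", "B", "B"] * 2,
    "approved": [1, 1, 1, 0, 1, 0, 1, 1],
})

for quarter, window in decisions.groupby("quarter"):
    rates = window.groupby("group")["approved"].mean()
    parity_gap = rates.max() - rates.min()      # demographic parity gap
    impact_ratio = rates.min() / rates.max()    # disparate impact ratio
    print(f"{quarter}: parity gap={parity_gap:.2f}, impact ratio={impact_ratio:.2f}")
# Tracking both metrics over time surfaces drift that a single
# point-in-time evaluation with one metric would miss.
```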
