What preventive measures can break data bias barriers?

Data has always been vital to insurance. It sits at the core of the product, informing both underwriting and claims. Today, with technologies like artificial intelligence and machine learning, we can capture and analyze vast amounts of data to improve the accuracy of our work. But buried underneath and interlaced into all this data is bias, an under-discussed part of insurance that is embedded into programs and data sets at their conception.

These ingrained biases and prejudices aren't intentional decisions; they're unconscious. Either way, they can have wide-reaching effects, and new technologies exacerbate them. AI and ML can find patterns in data, but when they're trained on biased data, they compound the problem and perpetuate a self-confirming data bias cycle.
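As a loose illustration of that cycle, here is a minimal sketch, using synthetic data and scikit-learn (not any insurer's actual model or data), of how a classifier trained on historically skewed approval decisions reproduces the skew for new applicants with identical risk profiles:

```python
# A minimal sketch with synthetic data: historical approvals were partly driven by
# group membership, and a model trained on them learns that pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)      # 0 / 1: a stand-in for a protected attribute
risk = rng.normal(size=n)          # true, group-independent risk signal

# Historical decisions favored group 0 beyond what risk alone would justify.
approved = (risk + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

model = LogisticRegression().fit(np.column_stack([risk, group]), approved)

new_risk = rng.normal(size=2000)   # new applicants with the same risk distribution
for g in (0, 1):
    X_new = np.column_stack([new_risk, np.full(2000, g)])
    print(f"predicted approval rate, group {g}: {model.predict(X_new).mean():.2f}")
# Identical risk profiles, different approval rates: the historical bias is learned.
```

Even though the underlying risk is identical across groups, the learned model carries the historical disparity forward, and its decisions become tomorrow's training data.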

The changing modes of acquiring data also contribute to the issue. Insurance companies aren't just pulling historical data; they're building profiles from social media platforms (which often have their own data problems) and other alternative sources of information. These new data acquisition methods have a disproportionate negative impact on customers along lines of race, gender, sexual orientation, and more.

Certain factors are important for determining insurance eligibility. However, not all of them are needed for every product, and some may even be superfluous. For example, why should race or gender matter for a disability claim? No race or gender is more prone to accidents or disability. These are the kinds of questions we need to ask to root out data bias and begin to create an equitable insurance industry.
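To make that concrete, the hypothetical sketch below (the field names are invented, not drawn from any real claims system) strips protected and superfluous attributes from a disability-claim record before any decision logic sees it:

```python
# Hypothetical field names; the point is an explicit allowlist of what a
# disability decision actually needs, with protected attributes excluded.
PROTECTED_FIELDS = {"race", "gender", "sexual_orientation"}
RELEVANT_FIELDS = {"occupation_class", "injury_type", "physician_report", "policy_tenure"}

def select_claim_features(raw_record: dict) -> dict:
    """Keep only the fields needed to adjudicate the claim."""
    return {
        key: value
        for key, value in raw_record.items()
        if key in RELEVANT_FIELDS and key not in PROTECTED_FIELDS
    }

claim = {
    "race": "…",
    "gender": "…",
    "occupation_class": "2",
    "injury_type": "fracture",
    "policy_tenure": 7,
}
print(select_claim_features(claim))  # only occupation_class, injury_type, policy_tenure survive
```

Dropping the column is only a first step: proxies such as ZIP code or purchasing history can encode the same information, which is why the review processes described below still matter.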

As widespread as data bias is, it's not discussed enough in the industry, and insurance professionals are left wondering what they can do to mitigate it. 

Data does not collect itself; humans collect it. This means that from the get-go, data is exposed to a certain level of bias, and addressing the issue requires a systematic look at how insurers choose, source, and analyze data.

Data collection
There are many channels (some unregulated) willing to sell data, but that doesn't mean the data is needed for insurance. Insurers should collect only the data they need. When unnecessary and sensitive information is shared, it can lead to unjust decisions in product creation and underwriting. Insurers also need to be intentional about data collection, for example by setting up a data privacy and anti-discrimination task force to review how, where, and what they collect.
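As a sketch of what "collect only what you need" can look like in practice (the product name, field names, and error type here are all hypothetical), an intake pipeline can refuse to store anything the task force hasn't approved:

```python
# Hypothetical allowlist maintained by the data privacy / anti-discrimination task force.
APPROVED_FIELDS_BY_PRODUCT = {
    "disability": {"date_of_birth", "occupation_class", "medical_history", "income"},
}

class UnapprovedFieldError(ValueError):
    """Raised when a channel tries to submit data the insurer has not approved."""

def ingest(product: str, submitted: dict) -> dict:
    approved = APPROVED_FIELDS_BY_PRODUCT[product]
    extra = set(submitted) - approved
    if extra:
        # Fail loudly instead of silently storing the data, so the review process sees it.
        raise UnapprovedFieldError(f"{product}: fields not approved for collection: {sorted(extra)}")
    return {key: submitted[key] for key in approved if key in submitted}
```

The hard failure is deliberate: unexpected fields from a data broker or social media feed surface as an incident to review rather than quietly entering the data set.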

Review processes
Another part of mitigating data bias is establishing an organizational review process dedicated to the issue. This can be done by an external or an internal team. A trusted third-party vendor that reviews your processes and procedures can identify where the organization is falling short in its data collection strategies and analysis. Alternatively, an internal, dedicated data bias identification task force composed of a diverse group of people from across the organization can find pitfalls and spot where data bias may be creeping into any aspect of the insurance business.
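Whichever route an insurer takes, the review needs concrete checks to run. One simple example, sketched below with invented column names, is comparing approval rates across demographic groups and flagging gaps above an agreed threshold (a demographic-parity style check):

```python
from collections import defaultdict

def approval_rate_gap(decisions, group_key="group", outcome_key="approved"):
    """Return per-group approval rates and the gap between best- and worst-treated groups."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for row in decisions:
        totals[row[group_key]] += 1
        approvals[row[group_key]] += int(row[outcome_key])
    rates = {g: approvals[g] / totals[g] for g in totals}
    return rates, max(rates.values()) - min(rates.values())

# Toy records, not real outcomes.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]
rates, gap = approval_rate_gap(decisions)
print(rates, gap)  # {'A': 1.0, 'B': 0.5} 0.5 -> a gap large enough to investigate
```

A gap by itself doesn't prove discrimination, but it tells the task force or vendor where to look, which is exactly what a review process is for.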

Inclusive and diverse environments 
It is vital that insurance companies create an environment of inclusivity, diversity and equity. Only with these three ingredients will the data we seek, collect, and treat carry less bias going forward. The industry overall has improved on DEI by actively including diverse perspectives in leadership and key decision-making positions. For insurance companies that do not yet have diverse organizations, hiring consultants and data scientists becomes vital to tackling inequality in insurance. These are the determining factors for insurance companies to put a halt to discriminatory practices in data.

Digital transformation
Though social attitudes and many workplaces have evolved to be more inclusive, with a more diverse workforce, the processes and tools used to analyze data and make decisions have not evolved as quickly. As a result, insurers are working with outdated systems that don't reflect changing norms and aren't equipped to address data bias. As insurers undergo digital transformations, it is vital that they keep data bias in mind. It is not enough to simply update their systems; they must actively seek out ways to improve their processes to mitigate data bias and ensure that people are insured based on fair and nondiscriminatory criteria.

Conclusion
Everyone deserves to have protection through insurance, but the way the industry currently approaches data prevents that from happening. While there have been some productive conversations around data bias, many aspects of data in insurance still need to be rethought, recast, and challenged to create an insurance landscape that enables everyone to get the coverage they need.
