Is regulation coming for insurance analytics?

(Bloomberg Opinion) -- A backlash against big tech has sent lawmakers all over the world scrambling for ways to restrain the influence of computers over daily life. Now, Congressional Democrats are offering up an Algorithmic Accountability Act of 2019, an expansive and ambitious new take on how to regulate automated decision-making. Whether or not it becomes law, it’s a necessary effort to reassert human control as opaque algorithms take over bureaucratic processes.

Algorithms are being used everywhere: in credit decisions, mortgages and insurance rates, and in deciding who gets a job, which kids get into college and how long criminal defendants go to prison, to name a few proliferating examples. Messy, complicated human decisions are being made, typically without an explanation or a chance to appeal, by artificial intelligence systems. They provide efficiency, profitability, and, often, a sense of scientific precision and authority.

Senator John Thune, a Republican from South Dakota and chairman of the Senate Commerce Committee, right, greets Keith Enright, chief privacy officer with Google Inc., left, during a hearing on consumer data privacy in Washington, D.C., U.S., on Wednesday, Sept. 26, 2018. Facing growing pressure to protect their customers' privacy, some of the biggest technology companies told Congress that they favor new federal consumer safeguards but diverged on some of the details. Photographer: Andrew Harrer/Bloomberg

The problem is that this authority has been bestowed too hastily. Algorithms are increasingly found to be making mistakes. Whether it’s a sexist hiring algorithm developed by Amazon, conspiracy theories promoted by the Google search engine or an IBM facial-recognition program that didn’t work nearly as well on black women as on white men, we’ve seen that large companies that pride themselves on their technical prowess are having trouble navigating this terrain.

And if that’s what we know about, imagine what we don’t. Most of the critically important algorithms in use have not been opened up for scrutiny, in large part because of laws protecting intellectual property.

The Democratic bill, introduced in the Senate and House of Representatives last week, would give the Federal Trade Commission power to require big companies to keep track of their algorithms, audit them for fairness and accuracy, and document those procedures for the agency. It would apply only to companies with at least $50 million in annual revenue, and intellectual property claims would not exempt an algorithm from review, although it appears companies would have leeway over whether to make the audits publicly available.

The idea is that obvious mistakes, or indeed subtle detours around existing anti-discrimination law, should be caught before they’re embedded in computer programs for deployment. (To its credit, Amazon.com Inc. didn’t use its sexist hiring algorithm.) Instead of assuming the best, in other words, companies would be required to provide evidence to the FTC that they follow relevant laws against discrimination. The inquiries would be done via third-party auditors. (Disclosure: I run an algorithmic auditing company.) Companies would need to provide evidence that new algorithms are fair and accurate before being allowed to use them. This would be a huge step forward for accountability, and it is far from the case today.
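To make "auditing for fairness" concrete, here is a minimal sketch of one common check an auditor might run: comparing a model's selection rates across groups under the "four-fifths" disparate-impact rule. The bill does not prescribe any particular metric, and the function names, data and threshold below are illustrative assumptions, not a description of how any company or regulator actually audits.

```python
# Illustrative sketch only: one of many possible fairness checks an auditor
# might run. The data and the 0.8 threshold below are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, e.g. ("A", True)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.
    Values below ~0.8 are often treated as a red flag (the "four-fifths rule")."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical hiring-model outputs: (applicant group, was recommended)
    outcomes = (
        [("A", True)] * 60 + [("A", False)] * 40
        + [("B", True)] * 30 + [("B", False)] * 70
    )
    ratio = disparate_impact_ratio(outcomes)
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50 -> flag
```

A real audit would go far beyond a single ratio, but even this simple check illustrates the kind of evidence of fairness and accuracy the bill contemplates companies producing before deployment.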

Whatever the legislative fate of the Democratic bill, it’s an indication of what is to come. Evidence is mounting for the idea that algorithms should be subjected to public policy tests, and political will is gaining momentum. Even Facebook chief executive Mark Zuckerberg is asking for federal regulation. The alternative is to let black-box algorithms control people’s lives and subvert the popular will.
