Accenture says new tool will reduce bias in AI

(Bloomberg) -- Consulting firm Accenture has a new tool to help businesses detect and eliminate gender, racial and ethnic bias in artificial intelligence software.

Companies and governments are increasingly turning to machine-learning algorithms to help make critical decisions, including who to hire, who gets insurance or a mortgage, who receives government benefits and even whether to grant a prisoner parole.

One of the arguments for using such software is that, if correctly designed and trained, it can potentially make decisions free from the prejudices that often impact human choices.

But in a number of well-publicized examples, algorithms have been found to discriminate against minorities and women. For instance, an algorithm many U.S. cities and states used to help make bail decisions was twice as likely to falsely label black prisoners as being at high risk of re-offending as it was white prisoners, according to a 2016 investigation by ProPublica.

Such cases have raised awareness about the dangers of biased algorithms, but companies have struggled to respond. “Our clients are telling us they are not equipped to think about the economic, social and political outcomes of their algorithms and are coming to us for help with checks and balances,” said Rumman Chowdhury, a data scientist who leads an area of Accenture’s business called Responsible AI.

So Accenture developed a software tool that does three things. First, it lets users define the data fields they consider sensitive -- such as race, gender or age -- and then see the extent to which these factors are correlated with other data fields. Race, for example, might be highly correlated with a person’s postcode, so to de-bias an algorithm it wouldn’t be enough to simply avoid considering race; postcode would also have to be de-biased.
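As a rough illustration of that kind of check (this is not Accenture’s code; the data, field names and use of scikit-learn are invented for the example), mutual information can quantify how much a seemingly neutral field such as postcode reveals about a sensitive one such as race:

```python
# Hypothetical sketch only -- not Accenture's tool. It scores how strongly each
# field in a small, made-up dataset is associated with a sensitive attribute.
import pandas as pd
from sklearn.metrics import normalized_mutual_info_score

df = pd.DataFrame({
    "race":     ["A", "A", "B", "B", "A", "B", "A", "B"],
    "postcode": ["10115", "10115", "20095", "20095", "10115", "20095", "10115", "20095"],
    "income":   ["high", "low", "low", "high", "high", "low", "high", "low"],
})

sensitive = "race"
for column in df.columns.drop(sensitive):
    # Encode categories as integers; a score of 0 means independent, 1 means
    # the sensitive attribute is fully determined by the other field.
    score = normalized_mutual_info_score(
        pd.factorize(df[sensitive])[0],
        pd.factorize(df[column])[0],
    )
    print(f"{sensitive} vs {column}: normalized mutual information = {score:.2f}")
```

In this toy data, postcode perfectly predicts race, so it would score 1.0 and flag itself as a proxy that also needs attention.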


Chowdhury, who showcased the tool publicly for the first time Tuesday at an AI conference in London, said the tool uses a statistical technique called mutual information to detect and help eliminate these hidden dependencies in algorithms. The product also provides a visualization that lets developers see how the overall accuracy of their model is affected by this de-coupling of dependencies between variables.
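A minimal way to picture that trade-off (again, a hedged sketch with synthetic data, not the product itself) is to train the same model with and without a proxy feature and compare the resulting accuracy:

```python
# Hypothetical sketch of the accuracy comparison described above. A classifier
# is trained twice on synthetic data: once with a proxy feature correlated with
# a sensitive group, and once with that proxy removed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
postcode = rng.integers(0, 2, n)            # proxy correlated with a sensitive group
other = rng.normal(size=n)                  # legitimate predictor
y = (0.8 * postcode + other + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_full = np.column_stack([postcode, other])
X_debiased = other.reshape(-1, 1)           # drop the proxy feature entirely

for name, X in [("with proxy", X_full), ("proxy removed", X_debiased)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    acc = accuracy_score(y_te, LogisticRegression().fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: accuracy = {acc:.3f}")
```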

Finally, Accenture’s method assesses an algorithm’s fairness in terms of “predictive parity” -- whether the false negative and false positive rates are the same for men and women, for instance. And again, the tool shows developers what happens to their model’s overall accuracy as they equalize predictive parity among sub-groups.
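In code, that check amounts to computing error rates separately for each group. The sketch below (with made-up labels and predictions, not Accenture’s implementation) shows the idea:

```python
# Hedged illustration of a predictive-parity check: compare false positive and
# false negative rates across groups. Labels and predictions are invented.
import numpy as np

def error_rates(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fpr = fp / max(np.sum(y_true == 0), 1)
    fnr = fn / max(np.sum(y_true == 1), 1)
    return fpr, fnr

# Hypothetical true outcomes and model predictions, split by group.
groups = {
    "men":   ([1, 0, 1, 0, 0, 1], [1, 0, 0, 1, 0, 1]),
    "women": ([1, 1, 0, 0, 1, 0], [0, 1, 0, 1, 1, 0]),
}
for group, (y_true, y_pred) in groups.items():
    fpr, fnr = error_rates(y_true, y_pred)
    print(f"{group}: FPR = {fpr:.2f}, FNR = {fnr:.2f}")
```

Large gaps between the groups’ rates would indicate the model fails the parity test the article describes.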

“People seem to want a push-button solution that will somehow magically solve fairness,” Chowdhury said, adding that such expectations are unrealistic. She said the value of Accenture’s tool is that it visually demonstrates there is often a tradeoff between an algorithm’s overall accuracy and its fairness.

She said, however, that while creating a fairer algorithm sometimes reduces its overall accuracy, that isn’t always the case. In Accenture’s demonstration, which drew on German credit score data widely used by academic researchers investigating algorithmic fairness, improving predictive parity actually improved the model’s accuracy. And Chowdhury pointed to academic research showing that in many cases much fairer outcomes can be achieved with only a small decline in overall effectiveness.

Bloomberg News