How human biases can skew artificial intelligence tools

Imagine a scenario where you need your car’s onboard navigation system to place an emergency call, but it won’t. Or arriving extra early for every international flight because airport security scanners never recognize your face.

For many people—especially people of color and women—these scenarios can be a frustrating reality. That’s because the AI that’s supposed to make life easier for us all isn’t trained on data diverse enough to work for everyone. This is a big problem, but one that can be fixed.

Personal dignity, travel safety, and job hunting are just some of the aspects of living that can be improved with algorithms, provided the technology learns to recognize and properly classify a full range of voices and faces. However, New York University’s AI Now Institute reported in April that a lack of diversity in the AI industry contributes to biased tools that inflict real-world harms. These biases can undermine the benefits that AI offers in so many areas of modern life.

How AI-powered biometrics can be biased

Multiple studies have found that facial and voice recognition algorithms tend to be more accurate for men than women. Facial recognition programs also have trouble correctly identifying transgender and nonbinary people. Image-recognition algorithms are often more accurate for people with lighter skin than people with darker skin. These discrepancies can create problems for users, ranging from inconvenient to potentially life-threatening.

For example, Georgia Institute of Technology researchers found that self-driving car safety technology doesn’t recognize pedestrians with dark skin as well as it spots white pedestrians. Why? The study authors say a training dataset dominated by light-skinned people, combined with too little weight given to the darker-skinned people it does include, effectively teaches object-detection models to be better at recognizing lighter-skinned pedestrians. Self-driving car developers need to correct this disparity so their cars reliably recognize darker-skinned people as pedestrians and stop for them.
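
The fix the study authors point to is essentially a rebalancing problem. As a rough illustration of one common approach, the sketch below computes per-example weights that are inversely proportional to how often each group appears, so that underrepresented examples count for more during training. The group labels and counts here are made up for illustration, not taken from the study.

```python
from collections import Counter

# Hypothetical skin-tone group labels for each pedestrian example in a training set.
# In practice these would come from annotations such as Fitzpatrick skin-type ratings.
group_labels = ["lighter"] * 3500 + ["darker"] * 500

counts = Counter(group_labels)
total = len(group_labels)

# Weight each example inversely to its group's frequency, so underrepresented
# groups contribute proportionally more to the training loss.
sample_weights = [total / (len(counts) * counts[g]) for g in group_labels]

print(counts)             # Counter({'lighter': 3500, 'darker': 500})
print(sample_weights[0])  # weight for a 'lighter' example: ~0.57
print(sample_weights[-1]) # weight for a 'darker' example: 4.0
```

Common libraries support the same idea through class- or sample-weight arguments, though reweighting is only a partial fix; collecting more representative data remains the stronger remedy.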

In airports, the Department of Homeland Security is testing facial-recognition biometrics to keep international air travelers safe. But this may result in more time-consuming and invasive screenings for flyers whose faces aren’t recognized properly by the AI. Some facial-recognition technology has trouble correctly identifying people of color and women—especially women with darker skin.

Like the biased pedestrian-recognition AI, these facial recognition algorithms were trained with datasets that skewed white and male. And as with pedestrian recognition, facial recognition algorithms need to learn from datasets that contain a fair mix of skin tones and genders.
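
One simple way to approximate that fair mix is stratified sampling: group the training examples by attribute and draw equally from each group. The sketch below uses invented image records and group names to show the idea; it is not any vendor’s actual pipeline.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical labeled face records: (image_id, skin_tone, gender),
# deliberately skewed toward lighter-skinned, male examples.
records = [(f"img_{i:05d}",
            random.choices(["lighter", "darker"], weights=[4, 1])[0],
            random.choices(["female", "male"], weights=[1, 3])[0])
           for i in range(10000)]

# Bucket records by (skin_tone, gender) combination.
buckets = defaultdict(list)
for image_id, tone, gender in records:
    buckets[(tone, gender)].append(image_id)

# Downsample every bucket to the size of the smallest one so the
# training set holds an equal number of examples per group.
per_group = min(len(ids) for ids in buckets.values())
balanced = [img for ids in buckets.values() for img in random.sample(ids, per_group)]

print({group: len(ids) for group, ids in buckets.items()})
print("balanced training set size:", len(balanced))
```

Downsampling throws data away, so in practice teams more often collect or augment extra examples for the underrepresented groups, but the balancing goal is the same.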

Voice recognition technology is supposed to make lots of everyday tasks easier, like dictation, internet searches, and navigation while driving. However, since at least 2002, researchers and the media have documented cases of voice recognition working significantly worse for women than for men, largely because the algorithms are trained on lower-pitched, typically masculine voices. The problem hasn’t yet been solved: a writer for the Guardian described in April the repeated difficulties her mother had telling her Volvo to make a phone call, until she deliberately lowered the pitch of her voice to sound more like a man.

University of Washington graduate student and Ada Lovelace Fellow Os Keyes has raised concerns that facial recognition systems could harm trans and nonbinary people in multiple ways. Security systems that scan faces to let residents into their apartment complex or monitor public restrooms could misidentify people who don’t clearly fit into one gender category. That, Keyes argues, could lead to more encounters with law enforcement, and raise the risk of arrest or injury.

It’s tempting to think that because algorithms are data-driven, they will generate impartial, fair results. But algorithms learn by identifying patterns in real-world datasets, and those datasets often contain patterns of bias—unconscious or otherwise. The challenge for AI developers is to find or build teaching datasets that don’t reinforce biases, so that the technology moves all of society forward.
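
A practical first step is to measure what patterns a dataset already contains before training on it. The sketch below uses invented hiring records to check whether a historical outcome is distributed unevenly across groups, exactly the kind of skew a model trained on that history would faithfully reproduce.

```python
from collections import defaultdict

# Invented historical hiring records: (gender, was_hired).
history = ([("male", True)] * 300 + [("male", False)] * 700
           + [("female", True)] * 120 + [("female", False)] * 880)

# Positive-outcome rate for each group in the raw data.
totals, positives = defaultdict(int), defaultdict(int)
for gender, hired in history:
    totals[gender] += 1
    positives[gender] += hired

for gender in totals:
    rate = positives[gender] / totals[gender]
    print(f"{gender}: hired {rate:.0%} of the time in the historical data")

# A model trained to imitate this history would learn the same 30% vs. 12% skew.
```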

Solving the AI bias problem

Using more inclusive datasets for AI learning can help create less biased tools. One startup, Atipica, is working to create an inclusive HR algorithm based on a broad dataset that pulls from resume and government data to accurately reflect workforce demographics.

Other steps are needed, too. Right now, the AI industry includes very few women, people of color, and LGBTQ people. A more diverse AI workforce would bring more perspectives and life experiences to AI projects. Transparency and bias-testing of AI systems can identify problems before products go to market and while they’re in use.
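
Bias testing typically starts with disaggregated evaluation: scoring a system separately for each demographic group instead of reporting a single overall number. The results below are hypothetical, but they show how an aggregate figure can hide a large gap between groups.

```python
from collections import defaultdict

# Hypothetical face-recognition test results: (demographic_group, prediction_correct).
results = ([("lighter-skinned men", True)] * 970 + [("lighter-skinned men", False)] * 30
           + [("darker-skinned women", True)] * 650 + [("darker-skinned women", False)] * 350)

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

overall = sum(correct.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.1%}")  # 81.0% -- looks acceptable in aggregate
for group in totals:
    # 97.0% vs. 65.0% -- the gap the audit exists to surface
    print(f"{group}: {correct[group] / totals[group]:.1%}")
```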

NYU’s AI Now Institute recommends bringing in experts in a range of fields beyond AI and data science to offer a broader perspective on how AI works in real-world situations. Some industry watchers say regulation may be part of the solution. In Washington, some senators have proposed legislation that would require “algorithmic accountability” for companies that develop AI tools.

AI has the potential to make our lives safer and easier, if we train the systems to be inclusive and use them wisely. By taking steps now to eliminate biases in AI, we can make sure that advances in this technology move everyone forward.
