Understanding the key role of ethics in artificial intelligence

This week, the Artificial Intelligence conference in New York drew thousands of attendees to learn more about the many elements and practices that shape the popular field. That included the topic of ethics in AI and why organizations need to be aware of AI's potential harms as well as its benefits.

Information Management spoke with Sheldon Fernandez, chief executive officer at DarwinAI, about how organizations can put ethics best practices into play with their AI efforts. Fernandez spoke at the Artificial Intelligence conference on the topic of “Ethical AI: Separating Fact from Fad.”

Information Management: Your session at the Artificial Intelligence conference in New York was about “Ethical AI: Separating Fact from Fad.” What exactly do you mean when you talk about ethical artificial intelligence?

Sheldon Fernandez: Ethical Artificial Intelligence (AI) refers to the effort to ensure that AI systems behave in a way that is morally acceptable by human standards.

The key questions, of course, are what these standards entail and how one goes about implementing them in an AI system. Given the proliferation of AI, both are active areas of research amongst academics, industry experts and policy-makers.

IM: What are the facts versus the fad here?

Fernandez: It has become faddish to talk about the importance of ethical AI and the need for oversight, transparency, guidelines, diversity, etc., at an abstract, high level. This is not a bad thing, but it often assumes that such ‘talk’ is tantamount to addressing the challenges of ethical AI.


The facts, however, are much more complex. For example, guidelines themselves are often ineffective (a recent study showed the ACM’s code of ethics had little effect on the decision-making process of engineers). Moreover, even if we agree on how an AI system should behave (not trivial), implementing specific behavior in the context of the complex machinery that underpins AI is extremely challenging.

IM: When you talk about artificial intelligence systems behaving ethically, how do they do that?

Fernandez: That, as they say, is the $64,000 question.

For example, with many modern AI techniques, the system’s behavior is a reflection of the data the system is trained against and the human labelers who annotate that data. Such systems are often described as ‘black boxes’ because it is not clear how they use this data to reach particular conclusions, and these ambiguities make it difficult to determine how or why the system behaves the way it does.

In this context, ensuring that a system behaves ethically is quite challenging as its behavior is not predicated on simple rules, but is rather the emergent byproduct of numerous surrounding factors. Put another way, AI systems learn by looking at data from millions of examples. It is difficult to predict how they’ll behave in new scenarios outside these examples.
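To make that point concrete, consider a minimal, hypothetical sketch (synthetic data; not an example from Fernandez's talk): a model fit on examples drawn from a narrow range can look reasonable on inputs like its training data, yet behave very differently on new scenarios outside that range.

```python
# Minimal sketch: a model trained on a narrow range of examples can behave
# unpredictably on inputs outside that range. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# The "true" relationship is quadratic, but training only sees inputs in [0, 10].
X_train = rng.uniform(0, 10, size=(200, 1))
y_train = X_train[:, 0] ** 2 + rng.normal(0, 1.0, size=200)

model = LinearRegression().fit(X_train, y_train)

# In-range, the linear fit is a passable approximation...
print(model.predict([[5.0]])[0])    # roughly 33, vs. a true value of 25

# ...but far outside the training range it fails badly.
print(model.predict([[100.0]])[0])  # roughly 980, vs. a true value of 10,000
```

The model's behavior here is purely an artifact of the examples it saw; nothing in it encodes the rule it appears to follow, which is the crux of the unpredictability Fernandez describes.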

IM: How does an organization best determine what is ethical behavior in the first place, to instill that into AI programs?

Fernandez: This is where committees and guidelines are crucial, as they take the responsibility of prescribing behavior away from the arbitrary quirks of a single engineer and place it in the hands of a collaborative group that typically consists of policy-makers, industry experts, philosophers, ethicists and engineers.

In this way, ethical guidelines are deliberatively determined through collaboration (though implementing them robustly is, again, another story).

IM: Why is ethics in artificial intelligence software important?

Fernandez: Ethics in AI is extremely important given the proliferation of AI systems in consequential areas of our lives: college admissions, financial decision-making systems, and the news we consume on Facebook and other media sites.

Moreover, there are areas where AI decision-making can literally be the difference between life and death: autonomous vehicles, lethal weapons, health care diagnosis. In such cases it is paramount that AI adheres to the ethical standards we set forth for it.

IM: What would be an example of “unethical AI?”

Fernandez: Clear examples would be racist or sexist behavior.

In 2016, for example, the COMPAS recidivism algorithm received negative press when it was discovered that the software, used to predict which criminal defendants were likely to reoffend, was biased against African Americans. Because the system was trained on historical data, it simply mirrored prejudices in the judicial system.

In another recent and widely publicized example, a recruiting tool created by Amazon began favoring male candidates over female candidates as a result of the historical data it was fed.
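Both failures share the same underlying mechanism, which can be reproduced in a few lines. The following hypothetical Python sketch (synthetic data; not Amazon's or COMPAS's actual system) trains a classifier on historically biased decisions and shows that it learns the bias directly from the labels:

```python
# Hypothetical sketch of bias inherited from historical training data.
# Synthetic data only; this is not any real company's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

experience = rng.uniform(0, 10, n)   # a legitimate qualification signal
is_female = rng.integers(0, 2, n)    # a protected attribute

# Historical decisions rewarded experience but also penalized female
# candidates -- the prejudice lives in the labels, not in the code.
hired = experience + rng.normal(0, 1.0, n) - 2.0 * is_female > 5.0

X = np.column_stack([experience, is_female])
model = LogisticRegression().fit(X, hired)

print(model.coef_[0][0])  # positive: experience helps, as expected
print(model.coef_[0][1])  # strongly negative: the model learned the bias
```

Simply dropping the protected attribute is not a cure, either: in realistic data, other features can act as proxies for it, which is reportedly what happened with the Amazon tool, where résumés containing the word “women's” were penalized.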

IM: Are there certain industries or types of companies where ethical AI is more important, and if so, why or how so?

Fernandez: It is important in all industries, of course, but is especially acute in scenarios involving the general welfare of human beings: autonomous vehicles, medical diagnosis, weaponry, law enforcement, etc.

IM: What do you see in the near future for ethical AI in terms of its adoption and evolution?

Fernandez: The ‘faddish’ talk around the importance of ethical AI will continue, but it will slowly be coupled with the need for practical guidelines and best practices in specific verticals including autonomous vehicles, health care and financial services.
