The Ethics of Artificial Intelligence: What Are “Human” Values?

David Dumas | AI, Strategy and Planning


What do the ethics of artificial intelligence look like? The answer to that question may forecast the next generation of AI algorithms, according to John Thornhill of The Financial Times.

That is the question Thornhill drives at in the FT piece. How do the ethics of artificial intelligence take shape in a world where “human values” are subject to cultural and social differences? What Americans consider ethical or “of value” may not be the same across international waters.

Ethical Boundaries?

During a recent event, Zeng Yi of the Chinese Academy of Sciences in Beijing questioned the need for (and perhaps the wisdom of) a global set of principles for ethical AI:

“They should not compete with each other, but complete each other to provide a global landscape for AI,” he said. He even asked whether the attempt to “humanise AI” made sense, given that some Chinese researchers consider humans to be “the worst animals in the world”. Could robots not operate to a higher standard of ethics than humans? This talk of de-anthropocentrism, as it has been called, alarmed the western participants in the seminar, who argued it was a false and dangerous promise.

– John Thornhill, The Financial Times: “Formulating values for AI is hard when humans do not agree”

Indeed, cultural standards of “ethics” and “morality” vary across international boundaries. Western cultures have a decidedly individualist streak compared with China’s more collectivist orientation.

Still, concepts such as “human rights” may seem universal. It is perhaps too easy to forget, however, that this is not always the case. What, then, will AI algorithms look like in, say, North Korea? And how will they differ from the ones employed in “friendlier” states?

We’ve written previously about the need for ethical standards with regard to artificial intelligence. The question, however, is who gets to create the ethics of artificial intelligence, and by what standards?

China has adopted what Mr Lee (author of “AI Superpowers”) calls a “techno-utilitarian” approach, emphasising the greatest good for the greatest number rather than a moral imperative to protect individual rights.

– John Thornhill, The Financial Times: “Formulating values for AI is hard when humans do not agree”

For Star Trek fans, this may sound quite Spock-ian: “The needs of the many outweigh the needs of the few, or the one.” However, what does that look like exactly?

What Is The Solution?

Arguments for processes, principles, or procedures in favor of the so-called “greater good” have led to questionable, if not disastrous, outcomes. Mass surveillance and privacy violations in the name of security come to mind.

A one-size-fits-all solution for the ethics of artificial intelligence seems unlikely. Given cultural differences, it’s hard to imagine a universally agreed-upon set of rules neatly charting our path forward. In fact, it might even hinder innovation in certain areas. However, as Thornhill argues, an ethical framework governing certain areas might be a good place to start.

Warfare and robotics are the two areas Thornhill offers as starting points. Surely, even given cultural differences, we can create a set of guidelines for the application of AI in warfare. The Geneva Conventions created a set of rules for wartime; we can add to them.

What guidelines would be prudent to adopt? Perhaps a modified version of the Geneva Conventions? The Hippocratic Oath? It may sound silly, but these are workable frameworks.

If you are looking for a more widely adopted framework, the 1948 UN Universal Declaration of Human Rights may be more suitable. Of the then 58 members of the United Nations, 48 voted in favor of the declaration and none voted against it.

Surely one could call into question how effective the UN declaration has been; after all, it hasn’t prevented violations of those rights in some places. What it does provide, however, is a framework, one that holds good actors to a standard of behavior and attaches consequences to deviation.

Whose Values Are More Important?

Cultural values will always differ. To that end, the ethics of artificial intelligence may always be subjective to a certain degree. It may therefore be wise not to attempt a one-size-fits-all solution at all.

Different nations and even localities within those nations can create additional guidelines. However, a framework under which we all agree to operate may serve as an effective foundation for those guidelines.

Given the barrier to entry of artificial intelligence, widespread abuse is not yet likely or common. A truly egregious violation of the public trust (or worse) has yet to come to fruition. The ability to “do AI” on a scale large enough to cause such harm is not widely held; for now, only governments and large corporations have it.

Still, the operative word there is “yet.” And while AI is still very much in its infancy, we would be best served by establishing the framework now for the AI-driven world we’re building.