Can AI Make Ethical Decisions?

Tyler Cook is a postdoctoral scholar in the School of Public Policy, where he researches machine ethics — the study of ethical decision-making by artificial intelligence (AI).

He works with engineers and computer scientists at Georgia Tech to think through the ethical implications of emerging technologies used in applications such as AI-assisted therapy, predictive policing, hiring algorithms, and autonomous warfare. Cook also teaches a class on AI ethics, where he poses three questions to his students:

  1. Is it possible to design AI with ethical decision-making capabilities?

  2. What is the best way to train the systems?

  3. If AI can make ethical decisions, what might be the consequences?

“I do think there could be a lot of benefits from having these kinds of systems around if we can accomplish it,” Cook said. “But is it risky? Yeah, it’s risky. It’s very risky.”

 

1. Is Ethical Decision-Making Possible in AI? (We Are Here)

"What would it take for an AI to engage in reasoning and decision-making in ethically controversial and, for now at least, distinctly human contexts?" Cook asks.

Does AI need empathy to make ethical decisions? What would it take for an AI to have empathy? Does it need to be conscious? How would we know if it was conscious? Can it be programmed to simulate consciousness?

In a simulated consciousness, is pattern recognition of past ethical judgments enough to reason through new cases? If AI systems forever lack consciousness — and the intuition and emotion that come with it — can they ever be trusted to reason competently?

And while we’re here, what’s really the value of emotions, anyway? Are they the building blocks of our moral systems, as some philosophers believe, or do they mislead our ethical reasoning?

“If you can’t already tell, there are a lot of different issues that emerge when thinking about ethical decision-making in AI,” Cook said. 

 

2. How Should We Train Ethical AI?

One option is a bottom-up approach: train the AI system on data and simulated experiences and hope that, through those simulations, it develops a moral compass similar to ours, Cook said. Another option is a top-down approach, where specific rules and constraints are coded directly into the system. Other AI literature suggests using a hybrid of the two, he explained.
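
In code, the hybrid Cook mentions might look like a learned model wrapped in hand-coded guardrails. The sketch below is purely illustrative and not drawn from Cook’s research or any real system: `learned_policy` is a hypothetical stand-in for a bottom-up model trained on simulated experience, while `violates_rules` plays the top-down role of directly coded constraints.

```python
# Illustrative sketch of a hybrid architecture: a learned (bottom-up)
# policy proposes an action, and hand-coded (top-down) rules can veto it.
# All names and thresholds here are hypothetical.

from dataclasses import dataclass


@dataclass
class Action:
    description: str
    harm_score: float  # estimated by the learned model, 0.0 to 1.0


# Top-down layer: explicit constraints the designers code in directly.
FORBIDDEN_TOPICS = {"weapon design", "self-harm instructions"}
MAX_ACCEPTABLE_HARM = 0.2


def violates_rules(action: Action) -> bool:
    """Return True if a hard-coded constraint forbids the action."""
    if any(topic in action.description for topic in FORBIDDEN_TOPICS):
        return True
    return action.harm_score > MAX_ACCEPTABLE_HARM


# Bottom-up layer: stand-in for a model trained on simulated experience.
def learned_policy(prompt: str) -> Action:
    # A real system would score harm with a trained model; this stub
    # only shows where that learned judgment would plug in.
    return Action(description=prompt, harm_score=0.05)


def decide(prompt: str) -> str:
    action = learned_policy(prompt)  # bottom-up proposal
    if violates_rules(action):       # top-down veto
        return "Refused: action violates a coded constraint."
    return f"Proceeding with: {action.description}"


if __name__ == "__main__":
    print(decide("recommend a therapy exercise"))  # allowed
    print(decide("explain weapon design"))         # refused by rule
```

The design choice the sketch highlights is where ethical judgment lives: the bottom-up layer generalizes from experience, while the top-down layer stays auditable and fixed, which is why much of the literature Cook cites treats the two as complementary rather than competing.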

Cook’s current research explores a common assumption he sees in AI ethics literature: that developing an ethical AI means making it as much like us as possible.

“There’s this idea that if we can just replicate the virtuous moral agent in our minds, we’re done. We’ve got an ethical AI,” Cook said. “This research challenges that assumption.”

Although AI shares certain capacities with us, like attention, information processing, and memory, “they’re already quite different in how they utilize those capacities, how advanced those capacities are, and how different they are in their systems,” he said. “And that’s something we should consider in terms of how that could affect their ethical decision-making.”

“If we assume we should design AI to engage in ethical decision-making in exactly the same way as humans, we’re going to miss out on some benefits. Maybe there’s more that we can do with these systems, and we need to consider those possibilities.”

 

3. What Might Happen Next?

If we trust AI with ethical decision-making, what might be the consequences? Cook sees benefits and risks in a future where that’s possible.

“First and foremost, we should be making sure that advanced AI systems are safe,” Cook said. “And if safety is an ethical issue, then making our AI systems ethical will also make them safe.”

For example, chatbots or large language models like ChatGPT won’t help someone design a bomb if the AI system knows that’s a bad idea, Cook said. “So, I think that is an obvious benefit.”

Cook is also interested in the possibility of AI helping us progress in our ethical thinking and uncover inconsistencies in our moral reasoning. He said training AI in this scenario can be compared to raising children: we teach children right and wrong, but eventually, they form their own moral compass. With AI, it could be the same way.

“There’s a danger there with AI systems forming ethical beliefs that we might not necessarily agree with. But there’s also a potential benefit, right? Maybe they could show us the way in a certain sense.”

The most significant risk Cook sees in trusting AI with ethically fraught decisions is our human tendency toward automation bias: putting too much trust in automated systems that may not deserve it.

“I worry about putting our trust in these systems when we shouldn’t be doing so,” Cook said. “Then we trust them with increasingly high-stakes scenarios, and they just give the wrong answers. But we don’t realize it due to our automation bias.”

Cook also worries about the concept of deskilling. Just as students might miss out on developing writing skills by outsourcing their college papers to ChatGPT, he said, the same could happen with our ethical decision-making and critical thinking capabilities.

“And if we’re not thinking about ethics anymore, we’re missing out on a pretty big part of being human,” he said.

So, do the benefits outweigh the risks?

“I’m not at all convinced that we can’t have reliable, ethical decision-making artificial agents, but I think we should move slowly and see how the systems do as they evolve,” Cook said.

“It’s not one of these things where you can say, ‘Did the AI system give the correct output?’ like with a mathematical problem, where you can check it for accuracy. With ethical outcomes, people are going to disagree.” 

 

AI Ethics at Georgia Tech

Ethics is a branch of philosophy, so what’s it doing at a technology institute like Georgia Tech?

Asking these questions around emerging technologies, especially ones as broadly impactful as AI, is critical, Cook said. Of course, most engineers, computer scientists, and researchers want to create safe, fair, and beneficial technologies, but that’s easier said than done — especially when ethics is outside their field of expertise.

At Georgia Tech, Cook said his ability to bridge the philosophical field of ethics with the real-world impacts of AI technology has been welcomed and valued.

“Philosophy is usually a pretty solitary endeavor, so I feel lucky to be able to work with people with so many interdisciplinary viewpoints, especially in AI,” he said. “Because AI is all hands on deck.”