5 AI Ethics Concerns the Experts Are Debating

Just as social media exploded onto the scene in the 2010s, artificial intelligence (AI) is having its moment. Decision-making algorithms have gone from science labs and sci-fi movies to everyday use in our homes — recommending movies, summarizing documents, and more. The technology brings significant benefits but also raises serious ethical concerns.

“AI systems are value-laden because they’re human creations,” says Justin Biddle, the director of Georgia Tech’s Ethics, Technology, and Human Interaction Center (ETHICx) and an associate professor in Georgia Tech’s School of Public Policy, where he teaches a course on AI ethics and policy. He specializes in the ethics of emerging technology and collaborates with scientists and engineers at Georgia Tech to design ethical AI systems.

“Humans generate, design, develop, distribute, and monitor AI systems. Human decisions are impactful throughout the AI development life cycle, and those decisions, reflecting the developers’ values, impact the performance of AI systems in a significant way,” he says.

For example, Biddle explains, AI systems can recommend whether someone is admitted to a university, hired for a job, or approved for a loan, and police departments use the technology to decide how to distribute officers and other resources.

“These are incredibly consequential decisions that have, in some cases, lifelong impacts on people,” Biddle says. “So the stakes are very high.”

Biddle shared five of the most pressing AI ethics concerns the experts are debating today and the first steps we can take to address them.

1. AI and injustice

“How can we create AI systems that are just and equitable? Many AI systems are machine learning systems that are trained on data. If the data reflect historical biases or injustices, then the systems trained on these data will do the same. There are significant concerns that AI systems will have disparate impacts on historically marginalized groups because of the data they’re trained on.

“For example, Amazon developed, implemented, and subsequently abandoned a hiring algorithm because they found that it was biased against women. The data they used to train the algorithm consisted of resumes submitted to Amazon, which were disproportionately from men. The problem of biased training data leading to biased AI systems is one of the most pressing AI ethics concerns.”
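
Biddle’s Amazon example points at a general mechanism: a model trained on historically skewed outcomes will reproduce that skew, even when the sensitive attribute itself is withheld, because other features can act as proxies for it. The sketch below is a minimal, hypothetical illustration of that mechanism — it is not Amazon’s system, and the data, features, and selection-rate ratio are all invented for the example:

```python
# Hypothetical sketch: train a classifier on synthetic hiring data with a
# historical bias against one group, then measure the resulting skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)                 # legitimate qualification signal
group = rng.integers(0, 2, size=n)         # 0 = group A, 1 = group B

# Historical labels are biased: group B was hired less often at equal skill.
p_hire = 1 / (1 + np.exp(-(skill - 1.0 * group)))
hired = rng.random(n) < p_hire

# A proxy feature correlated with group (say, a keyword on the resume) lets
# the model absorb the bias even though `group` is excluded from the inputs.
proxy = group + rng.normal(scale=0.3, size=n)

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# Selection-rate ratio, a common disparate-impact measure: a value well
# below 1.0 means the model recommends group B less often than group A.
pred = model.predict(X)
rate_a, rate_b = pred[group == 0].mean(), pred[group == 1].mean()
print(f"selection rate A: {rate_a:.2f}  B: {rate_b:.2f}  ratio: {rate_b / rate_a:.2f}")
```

On this synthetic data the model recommends group B at a noticeably lower rate than group A even though `group` never appears as an input, because the proxy feature carries the bias for it.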

2. AI and human freedom and autonomy

“How can we develop and implement AI systems that promote human freedom and autonomy rather than impede them? AI can be used to influence human behavior, sometimes in ways that are imperceptible and ethically problematic.

“AI systems have been used to influence voter behavior, as we saw in the Cambridge Analytica scandal and in the proliferation of deepfakes in political campaigning. AI systems can be used to influence workers to work longer hours, for example, by nudging rideshare drivers to drive longer. Additionally, the data used to train AI systems are sometimes collected without informed consent, raising concerns about privacy and data protection.

“AI systems should promote the freedom and autonomy of all human beings, not just those who have the power to develop and profit from AI. How to achieve this goal is a challenging problem.”

3. AI and labor disruption

“How can we develop and deploy AI systems that create opportunities for meaningful work and well-paying jobs rather than contributing to technological unemployment? And how can we facilitate equitable access to those jobs?  

“There is no question that AI is disrupting labor markets, impacting available jobs and required skills. It is also clear that AI is disrupting many different kinds of work — it is not just jobs involving routine skills that are being affected, but also creative work. What is less clear is what the nature of these disruptions will be, who will benefit, and who will be harmed.  

“Given that so many workers will be affected — in many different sectors and at many different levels — it is crucial that processes for developing and deploying AI systems are inclusive and involve the participation of those likely to be affected. How this should be done remains an important and open question.”  

4. AI and explainability

“Given that many AI systems make decisions or recommendations that can be life-changing for those impacted, it is important that those decisions be explainable, interpretable, and easily communicated to those affected. There are challenging technical questions surrounding the creation of explainable AI — and there are also challenging societal questions surrounding the extent to which we should allow unexplainable AI systems to affect people’s lives.

“The European Union’s General Data Protection Regulation (GDPR) has attempted to curb the impacts of unexplainable AI. For example, if someone is denied a loan based on an AI recommendation, the regulation gives them recourse to an explanation of why their application was rejected. Whether a regulation like this should be adopted in the United States is an important societal question, and addressing it well requires expertise from relevant technical and social scientific fields, as well as from affected publics.”
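
One concrete, if simple, form of explainability suggested by the loan example is the “reason code”: for a linear scoring model, each feature’s contribution to the score can be read off directly, and the most negative contributions become the stated reasons for a denial. The sketch below is hypothetical — the features, weights, and applicant values are invented for illustration, and real credit models are rarely this transparent:

```python
# Hypothetical sketch: "reason codes" for a loan denial from a linear model.
import numpy as np

features = ["income", "debt_ratio", "late_payments", "credit_age_years"]
weights = np.array([0.8, -1.2, -1.5, 0.5])   # invented model coefficients
bias = 0.2

# One applicant, with features standardized so 0.0 is the average applicant.
applicant = np.array([-0.4, 1.1, 2.0, -0.6])

score = weights @ applicant + bias
approved = score > 0

# Each feature's contribution to the score relative to an average applicant;
# the most negative contributions become the stated reasons for denial.
contributions = weights * applicant
reasons = sorted(zip(features, contributions), key=lambda kv: kv[1])[:2]

print(f"score = {score:.2f} -> {'approved' if approved else 'denied'}")
for name, value in reasons:
    print(f"  reason: {name} (contribution {value:+.2f})")
```

For non-linear models, producing faithful explanations like these is substantially harder, which is part of why both the technical and the regulatory questions remain open.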

5. AI and existential risk 

“The possibility of superintelligent AI is the subject of much discussion — in films, fiction, popular media, and academia. Some prominent AI developers have recently raised concerns about this, even suggesting that artificial general intelligence could lead to the extinction of humanity. Whether these fears are realistic — and whether we should be focusing on them over other concerns — is hotly debated.  

“Many think that these fears are more science fiction than reality and that a focus on them will distract from the ways in which AI systems are actually putting people at risk today. In particular, a focus on such fears could distract from how AI systems are currently exacerbating existing inequalities.  

“Others — perhaps impressed with the achievements of large language models such as ChatGPT — believe that artificial general intelligence is on the horizon and that existential risks need to be taken seriously now. For them, there are real and pressing concerns about superintelligence and AI becoming smarter and more powerful than we are.” 

What can we do to address these concerns?

In Biddle’s class, students discuss all of these concerns and more. AI is so pervasive now that understanding it is a form of general literacy, he says, and he hopes his students will help shape much-needed AI policy in coming years, both in the United States and abroad. 

“We need to be making decisions about what kinds of policies we want at the federal level,” Biddle says. 

“The European Union has moved quickly, implementing the GDPR, which regulates data processing, and it looks like they will soon pass the EU AI Act. This is a Europe-wide regulation that covers AI in general and sets requirements for AI developers and others. At the moment, there’s very little policy in the United States that covers AI in general or data processing in general.

“The Biden administration has published the Blueprint for an AI Bill of Rights, which I think is promising. It specifies people’s rights with respect to AI, and it advances discussions about how the principles it articulates could be operationalized and put into practice. But these are all voluntary recommendations and guidelines; they’re not mandatory. We need to be having discussions about what kind of mandatory regulations should be put into place.

“It is important that, whatever policy initiatives we adopt, we promote public participation and democratic governance of AI. Ensuring the ethical and responsible design of AI systems doesn’t only require technical expertise — it also involves questions of societal governance and stakeholder participation. Because the impact of AI will be so disruptive and wide-ranging, it is crucial that those who are significantly impacted have the ability to participate in AI creation, deployment, and monitoring.”