Noura Howell Receives NSF CAREER Award to Study Emotion AI

Move over, ChatGPT. Emotion AI, which infers people’s emotions from their facial expressions, is increasingly in the spotlight for its potential applications — as well as its problems and pitfalls. 

Georgia Tech’s Noura Howell is researching how we might reimagine the future of emotion AI, thanks to a prestigious Faculty Early Career Development (CAREER) award from the National Science Foundation.

The NSF CAREER award is a highly competitive five-year grant that helps promising scholars, those with the potential to serve as role models in research and education, establish a basis for a lifetime of leadership in their field. The total award amount is $656,402.

“This award is a great honor, and it represents an amazing opportunity to expand the field,” says Howell. “It also reflects the urgency of foregrounding a broader range of people’s perspectives on a technology that could have both positive and negative impacts on our future.”

Imagining Possible Futures — and Potential Problems

Where tools such as ChatGPT and other large language models are designed to interpret the meaning and sentiment behind words, emotion AI aims to read the emotion in human faces and voices.

The technology is already being used in call centers, mental health applications, education, and other industries, and its use is expected to grow rapidly in the coming years.

“This industry is projected to reach about $43 billion in a few years,” says Howell, an assistant professor in the School of Literature, Media, and Communication. “But the futures that emotion AI companies are proposing are not inevitable. People can have an active voice in this.”

Howell’s research will encourage and explore that active voice. She will partner with the community nonprofit Decatur Makers to host workshops promoting critical AI literacy, in which diverse teens and young adults will consider emotion AI’s possible benefits and harms.

“Imagining alternative future ways of living with technology invites discussion about those futures and how we might enact change in the present,” says Howell.

Howell says that with the proper ethical guidelines and privacy protections in place, emotion AI has the potential to support well-being and mental health. However, some uses also raise privacy and ethical concerns.

“Facial expressions are performative and emotions are complex,” she says. “If your face reveals irritation during a meeting with your boss, who has the right to that information? What if your irritability isn’t related to work at all, but stems from something else?”

Emotion AI has also been used in schools outside the United States to try to determine which students are paying attention.

“It’s often used to evaluate people in these really reductive ways, relying on a limited idea of what would be a ‘good’ facial expression in this context,” Howell says.

“And, for example, if the AI says students are inattentive, do you blame the students for not paying attention, or the teacher for not being engaging?”

Bias Concerns

Howell says that, like generative AI, emotion AI is trained on data sets, often composed of photographs of people’s faces labeled with corresponding emotions.

“Looking at a series of images is pretty different from being engaged in a conversation, which is one problem,” Howell says. “Another is that emotional algorithms first rely on finding a face in an image.”

Finding faces is sometimes hard for AI, and detection accuracy varies greatly depending on race and gender. That variation can introduce bias into emotion AI, Howell says.
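That two-stage pipeline (first find a face, then label its emotion) can be made concrete with a minimal Python sketch. It assumes OpenCV’s stock Haar-cascade face detector, and classify_emotion is a hypothetical stub standing in for a model trained on emotion-labeled photos; the sketch illustrates the general approach Howell describes, not any particular product’s implementation.

```python
# Minimal sketch of a two-stage emotion AI pipeline:
# Stage 1 detects faces; Stage 2 labels each detected face.
import cv2


def classify_emotion(face_pixels):
    # Hypothetical placeholder: a real system would run a classifier
    # trained on emotion-labeled face photos and return a label
    # such as "happy" or "neutral".
    return "unknown"


def label_emotions(image_path):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    # Stage 1: face detection. If this step misses a face -- and detection
    # accuracy is known to vary across demographic groups -- the system
    # never produces an emotion label at all, or labels the wrong region.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Stage 2: emotion classification on each detected face region.
    return [classify_emotion(gray[y:y + h, x:x + w]) for (x, y, w, h) in faces]
```

Because every downstream label depends on Stage 1 succeeding, any demographic skew in face detection flows directly into the emotion predictions, which is the bias pathway Howell points to.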

Even if all such errors could be eliminated, Howell says, the process still assumes that everyone expresses emotions in a consistent way.

“There are huge cultural differences in where, how, and in what ways people express emotions,” says Howell. “For example, how often, and in what situations, is it appropriate to give a really big smile, or when is that coming on too strong?”

More About Howell’s Project

Howell hopes that her work will help to foreground more diverse voices — and not just those of computer scientists, researchers, and scholars — in the conversation about our future with emotion AI.

“We all have emotions; we all have social interactions every day. So we all have very legitimate expertise to bring to the table here. We can all imagine where emotion AI could go in the future, and how we might protect against potential harms.” 

The School of Literature, Media, and Communication is a unit of the Ivan Allen College of Liberal Arts.