What does responsible AI look like in day-to-day use? Researchers in Georgia Tech’s School of Literature, Media, and Communication created an eight-question checklist to help AI users evaluate how they use the technology.
“We want to make abstract ideas like responsible AI and AI literacy more accessible,” said Shi Ding, a Digital Media Ph.D. student. “Whether it’s teaching, software engineering, or healthcare, our goal is to develop a framework that can guide how AI should be responsibly used in different domains.”
Bridging a Gap
Ding and Brian Magerko, a Regents’ Professor of Digital Media, were inspired to create this checklist when they noticed that AI products are often built by highly technical developers but not always used by them, leaving gaps between design intent and real-world use.
For example, AI can be a helpful tool in classrooms. But if it directly answers questions rather than guiding students through critical thinking to solve a problem on their own, then it may not support learning. So how can users — a K-12 teacher, for example, or a parent helping their child with homework — determine that?
“There are tons of AI tools out there and no guidelines to select which will best support your needs,” said Ding. “Our framework can provide a starting point.”
The Digital Media Perspective
Ding said digital media researchers have the right skill set for such challenges because they look beyond the technology.
“We are more balanced. We notice technical things, but we also notice the biases of the technical systems and the dynamic human aspect,” she said.
To help a wider range of people use AI responsibly, Ding and Magerko based their questions not only on research on AI assistance, but also on work in human-centered AI, human-computer interaction, and the learning sciences.
How to Use AI Responsibly
The researchers’ framework proposes that responsible AI must be evaluated in four areas: transparency, fairness and inclusivity, safety and data privacy, and pedagogy, agency, and human value. By asking two questions in each category, users can determine whether an AI tool is a good fit for their work.
- Transparency — Does the AI explain its decisions? Are sources provided?
- Fairness and Inclusivity — Does it support diverse users? Has it been tested for bias or adversarial prompts that can bypass safety guardrails?
- Safety and Data Privacy — Is data use (collection, storage, sharing) clear? Was a safety or privacy audit conducted?
- Pedagogy, Agency, and Human Value — Does it foster creativity, learning transfer, and critical thinking? Is feedback aligned with context (e.g., curriculum goals)?
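As a rough illustration only (not part of the published TEACH-RAI toolkit), the eight questions can be recorded in a short script that flags any category where a tool falls short. The category names and question wording below paraphrase the list above; the scoring rule of flagging any “no” answer is an assumption added for this sketch.

```python
# Illustrative sketch: record yes/no answers to the eight checklist
# questions and flag categories that may need a closer look.
# The wording paraphrases the article's list; the "flag any no"
# rule is an assumption, not part of the published toolkit.

CHECKLIST = {
    "Transparency": [
        "Does the AI explain its decisions?",
        "Are sources provided?",
    ],
    "Fairness and Inclusivity": [
        "Does it support diverse users?",
        "Has it been tested for bias or adversarial prompts?",
    ],
    "Safety and Data Privacy": [
        "Is data use (collection, storage, sharing) clear?",
        "Was a safety or privacy audit conducted?",
    ],
    "Pedagogy, Agency, and Human Value": [
        "Does it foster creativity, learning transfer, and critical thinking?",
        "Is feedback aligned with context (e.g., curriculum goals)?",
    ],
}


def review(answers: dict[str, list[bool]]) -> list[str]:
    """Return the categories where at least one question was answered 'no'."""
    return [
        category
        for category, questions in CHECKLIST.items()
        if not all(answers.get(category, [False] * len(questions)))
    ]


if __name__ == "__main__":
    # Hypothetical example: a tutoring tool that cites sources but has
    # not been tested for adversarial prompts or audited for privacy.
    answers = {
        "Transparency": [True, True],
        "Fairness and Inclusivity": [True, False],
        "Safety and Data Privacy": [True, False],
        "Pedagogy, Agency, and Human Value": [True, True],
    }
    for category in review(answers):
        print(f"Take a closer look at: {category}")
```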
“For example, if an AI tool doesn’t provide references with its answers, can you trust it to be correct without that transparency?” Ding asked. “AI can provide benefits, but it can also cause harm. And being aware of the benefits and harms is part of AI literacy. We hope people not only use AI, but also critically reflect on how it shapes their decisions.”
"Bridging Responsible AI and AI Literacy: The TEACH-RAI Framework and Toolkit for Education, Design, and Research" was published in Proceedings of the 57th ACM Technical Symposium on Computer Science Education Vol.2. It is available at: https://dl.acm.org/doi/epdf/10.1145/3770761.3777259.

Shi Ding is a Digital Media Ph.D. student in the School of Literature, Media, and Communication. Her research focuses on human-centered generative AI systems in educational contexts and responsible AI design.