Assistant Professor @purduepolsci & Co-Director of the Governance & Responsible AI Lab (GRAIL). Studying #AI policy and #AIEthics. Secretary for the @IEEE 7010 standard.
For researchers and educators working on AI literacy:
www.sciencedirect.com/science/art...
With Arne Bewersdorff and Marie Hornberger. Thanks to Google Research for funding a portion of this work.
@purduepolsci.bsky.social @GRAILcenter.bsky.social
Scalable assessment tools are essential for evaluating education programs, informing policy decisions, and ensuring citizens can navigate an AI-driven world.
AILIT-S makes systematic evaluation feasible.
✔️ Program evaluation
✔️ Group comparisons
✔️ Trend analysis
✔️ Large-scale research
❌ Avoid for individual diagnostics
The speed enables broader participation and better population-level insights.
• ~5-minute completion time (vs. 12+ minutes for the full version)
• 91% congruence with the comprehensive assessment
• Strong performance for group-level analysis
Trade-off: slightly lower individual reliability (α = 0.61 vs 0.74)
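For readers less familiar with the reliability numbers: Cronbach's α summarizes how consistently a set of items measures one construct. A minimal Python sketch of the standard formula, run on made-up data rather than the study's responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up example: 200 respondents, 10 items driven by one shared ability plus noise
rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))
responses = ability + rng.normal(size=(200, 10))
print(f"alpha = {cronbach_alpha(responses):.2f}")  # higher alpha = more internally consistent
```

All else equal, dropping items lowers α, which is why a 10-item short form trades some individual-level reliability for speed.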
• What is AI?
• What can AI do?
• How does AI work?
• How do people perceive AI?
• How should AI be used?
Special emphasis on technical understanding—the foundation of true AI literacy.
Our solution distills a robust 28-item instrument into 10 key questions, validated with 1,465 university students across the US, Germany, and the UK.
We developed AILIT-S, a 10-item test that retains 91% congruence with the longer assessment while being practical for real-world use.
Published in Computers in Human Behavior: Artificial Humans
www.sciencedirect.com/science/art...
www.sciencedirect.com/science/art...
@purduepolsci.bsky.social @GRAILcenter.bsky.social
By focusing on interdisciplinary teaching, hands-on learning & better assessments, we can prepare the next generation to build AI systems that serve humanity responsibly.
✅ Develop tools measuring behavioral impact of ethics education
✅ Integrate ethics across all levels (K-12 to university)
✅ Fund initiatives prioritizing formative assessments
✅ Align assessments with real-world skills
• Keeping up with AI's rapid evolution
• Teaching abstract concepts to diverse audiences
• Shortage of trained educators
• Misalignment between teaching goals & assessment methods
Summative assessment (grades) dominates, but formative feedback, the kind that drives growth, is rare.
Most impactful methods are hands-on:
• Case studies
• Group projects
• Gaming & storytelling
These engage students in real-world ethical dilemmas, making abstract principles tangible.
These programs tackle societal issues: bias, fairness, privacy, social justice. Higher ed leads with comprehensive curricula, but K-12 efforts are still catching up.
The results? A field full of promise but grappling with fundamental challenges in what to teach, how to teach it, and whether students are actually learning.
Bias, misinformation, and privacy risks are just the beginning. How do we teach future engineers, policymakers & citizens to navigate these complexities?
Our systematic review of the literature reveals that AI ethics education has exploded since 2018 but faces critical growing pains.
www.sciencedirect.com/science/art... 🧵
For policy practitioners and educators working on AI literacy—curious what you're seeing?
www.sciencedirect.com/science/art...
#AIGovernance #ResponsibleAI #AILiteracy
@purduepolsci.bsky.social @GRAILcenter.bsky.social
🚀 Advocates: Encourage critical thinking about ethical AI
🤔 Critics: Demystify AI, make it relevant to non-technical fields
⚖️ Observers: Use hands-on experiences to spark engagement
Access to AI education varies widely—Critics report the least exposure 📈
🚀 AI Advocates (48%): Tech-savvy, confident, excited
🤔 Cautious Critics (21%): Skeptical, low confidence, minimal use
⚖️ Pragmatic Observers (31%): Neutral attitudes, moderate interest
✅ Using AI tools (like ChatGPT) boosts interest
✅ Positive attitudes predict higher engagement
✅ Interest acts as the bridge connecting attitudes, literacy, and confidence
Our validated path model:
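The full specification is in the paper's figure, so purely as illustration: a recursive path model of this kind can be estimated as a chain of regressions. A rough Python sketch with hypothetical variable names (attitudes, literacy, confidence, interest, engagement), not the study's actual model:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data; the column names are illustrative, not the paper's variables.
df = pd.read_csv("survey.csv")  # columns: attitudes, literacy, confidence, interest, engagement

# Interest as the bridge: attitudes, literacy, and confidence predict interest...
m_interest = smf.ols("interest ~ attitudes + literacy + confidence", data=df).fit()

# ...and interest, in turn, predicts engagement with AI education.
m_engagement = smf.ols("engagement ~ interest", data=df).fit()

print(m_interest.params)    # path coefficients into interest
print(m_engagement.params)  # path coefficient from interest to engagement
```

Dedicated SEM tooling would estimate all paths jointly and report fit indices, but the mediation logic is the same.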