Daniel S. Schiff
@dschiff.bsky.social

Assist. Professor @purduepolsci & Co-Director of Governance & Responsible AI Lab (GRAIL). Studying #AI policy and #AIEthics. Secretary for @IEEE 7010 standard.

[Chart excerpt: Mathematics 19%, Computer science 13%]

7/7 Curious what you think—does this match what you're seeing in AI education assessment?

For researchers and educators working on AI literacy:

www.sciencedirect.com/science/art...
Development and validation of a short AI literacy test (AILIT-S) for university students

6/7 🔬 Next steps: Validation beyond Western university samples, workplace applications, and cross-cultural AI literacy research.

With Arne Bewersdorff and Marie Hornberger. Thanks to Google Research for funding a portion of this work.

@purduepolsci.bsky.social @GRAILcenter.bsky.social

5/7 🌍 Why this matters for AI governance:
Scalable assessment tools are essential for evaluating education programs, informing policy decisions, and ensuring citizens can navigate an AI-driven world.

AILIT-S makes systematic evaluation feasible.

4/7 🎯 Best use cases:
✔️ Program evaluation
✔️ Group comparisons
✔️ Trend analysis
✔️ Large-scale research

❌ Avoid for individual diagnostics

The speed enables broader participation and better population-level insights.

3/7 ✅ Results show AILIT-S delivers:
• ~5 minutes completion time (vs 12+ for full version)
• 91% congruence with comprehensive assessment
• Strong performance for group-level analysis

Trade-off: slightly lower individual reliability (α = 0.61 vs 0.74)
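
(Context, not from the paper: Cronbach's α is the standard internal-consistency index, and it generally drops as a test gets shorter, which is one reason the short form is pitched at group-level rather than individual use. The general formula:)

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_{\text{total}}^{2}}\right)
\]

where k is the number of items, σ_i² the variance of item i, and σ_total² the variance of students' total scores.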

2/7 📊 AILIT-S covers 5 core themes:
• What is AI?
• What can AI do?
• How does AI work?
• How do people perceive AI?
• How should AI be used?

Special emphasis on technical understanding—the foundation of true AI literacy.

1/7 ⚡ The challenge: Existing AI literacy tests take 12+ minutes, making them impractical for large-scale assessment.

Our solution distills a robust 28-item instrument into 10 key questions—validated with 1,465 university students across the US, Germany, and UK.
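
(Illustration only: the thread doesn't describe the authors' item-selection procedure. A common heuristic for building a short form is to keep the items with the highest corrected item-total correlations; a minimal sketch with hypothetical data:)

```python
import numpy as np
import pandas as pd

# Hypothetical item-response data: 1,000 students x 28 dichotomously scored items.
rng = np.random.default_rng(0)
responses = pd.DataFrame(
    rng.integers(0, 2, size=(1000, 28)),
    columns=[f"item_{i + 1}" for i in range(28)],
)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of all *other* items."""
    totals = items.sum(axis=1)
    return pd.Series({c: items[c].corr(totals - items[c]) for c in items.columns})

# Candidate short form: the 10 items most strongly related to the rest of the test.
# (Real short-form construction would also balance content across the five themes.)
short_form_items = corrected_item_total(responses).nlargest(10).index.tolist()
print(short_form_items)
```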

How do you measure AI literacy in under 5 minutes? 🧵

We developed AILIT-S, a 10-item test that retains 91% congruence with the longer assessment while being practical for real-world use.

Published in Computers in Human Behavior: Artificial Humans

www.sciencedirect.com/science/art...

Published in Computers and Education: Artificial Intelligence with my brilliant collaborators & PhD students Lucas Wiese and Indira Patil.

www.sciencedirect.com/science/art...

@purduepolsci.bsky.social @GRAILcenter.bsky.social
AI ethics education: A systematic literature review

🌟 AI ethics education has grown rapidly but is still finding its footing.

By focusing on interdisciplinary teaching, hands-on learning & better assessments, we can prepare the next generation to build AI systems that serve humanity responsibly.

🛠️ What needs to happen:

✅ Develop tools measuring behavioral impact of ethics education
✅ Integrate ethics across all levels (K-12 to university)
✅ Fund initiatives prioritizing formative assessments
✅ Align assessments with real-world skills

🚧 Major challenges we identified:
• Keeping up with AI's rapid evolution
• Teaching abstract concepts to diverse audiences
• Shortage of trained educators
• Misalignment between teaching goals & assessment methods

❌ The assessment gap: Programs aim to develop ethical reasoning & communication skills, but few measure if students are actually learning.

Summative assessments dominate (grades), but formative feedback—the kind that drives growth—is rare.

🎓 Pedagogy that works? Forget boring lectures.

Most impactful methods are hands-on:
• Case studies
• Group projects
• Gaming & storytelling

These engage students in real-world ethical dilemmas, making abstract principles tangible.

🔑 Key finding: The best programs go beyond "rules for algorithms."

They tackle societal issues—bias, fairness, privacy, social justice. Higher-ed leads with comprehensive curricula, but K-12 efforts are still catching up.

📚 We analyzed content, pedagogy & assessment practices across AI ethics education (2018-2023).

The results? A field full of promise but grappling with fundamental challenges in what to teach, how to teach it, and whether students are actually learning.

🌐 AI is everywhere—your workplace, social feeds, doctor's office. With this power comes ethical responsibility.

Bias, misinformation, privacy risks are just the beginning. How do we teach future engineers, policymakers & citizens to navigate these complexities?

🚨 AI is reshaping the world—but how do we prepare the next generation to wield it ethically?

Our systematic literature review reveals that AI ethics education has exploded since 2018 but faces critical growing pains.

www.sciencedirect.com/science/art... 🧵
AI ethics education: A systematic literature review

7/7 How can educators better engage "Cautious Critics" and "Pragmatic Observers"?

For policy practitioners and educators working on AI literacy—curious what you're seeing?

www.sciencedirect.com/science/art...

#AIGovernance #ResponsibleAI #AILiteracy
AI advocates and cautious critics: How AI attitudes, AI interest, use of AI, and AI literacy build university students' AI self-efficacy

6/7 Results suggest AI literacy isn't just about knowledge. It's about fostering interest, building confidence, and earning trust. Without addressing these factors, we risk leaving entire student groups behind.

@purduepolsci.bsky.social @GRAILcenter.bsky.social

5/7 Implications: AI programs need tailored approaches:

🚀 Advocates: Encourage critical thinking about ethical AI
🤔 Critics: Demystify AI, make it relevant to non-technical fields
⚖️ Observers: Use hands-on experiences to spark engagement

4/7 Demographics matter: AI Advocates are mostly male STEM students, while Cautious Critics are overrepresented in humanities and predominantly female.

Access to AI education varies widely—Critics report the least exposure 📈

3/7 Using clustering techniques, we identified 3 student profiles (minimal sketch below):

🚀 AI Advocates (48%): Tech-savvy, confident, excited
🤔 Cautious Critics (21%): Skeptical, low confidence, minimal use
⚖️ Pragmatic Observers (31%): Neutral attitudes, moderate interest
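
(The thread doesn't name the specific clustering method, so this is a minimal sketch assuming k-means on standardized scores; the column names and data are hypothetical placeholders.)

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical survey data: one row per student, one column per construct
# (in the real study these would come from the validated scales).
rng = np.random.default_rng(0)
features = ["attitudes", "interest", "ai_use", "ai_literacy", "self_efficacy"]
df = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)

X = StandardScaler().fit_transform(df[features])   # put all scales on comparable footing
df["profile"] = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(X)

# Profiles get their labels (Advocates / Critics / Observers) from inspecting
# each cluster's mean scores, not from the algorithm itself.
print(df.groupby("profile")[features].mean())
```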

2/7 Key findings suggest:

✅ Using AI tools (like ChatGPT) boosts interest
✅ Positive attitudes predict higher engagement
✅ Interest acts as the bridge connecting attitudes, literacy, and confidence

Our validated path model:
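
(Sketch only: one way an observed-variable path model like this can be specified in Python with the semopy package. The variable names and paths below are illustrative placeholders, not the paper's exact specification.)

```python
import numpy as np
import pandas as pd
from semopy import Model

# Hypothetical data standing in for the survey constructs.
rng = np.random.default_rng(0)
cols = ["attitudes", "ai_literacy", "ai_use", "interest", "self_efficacy"]
df = pd.DataFrame(rng.normal(size=(500, len(cols))), columns=cols)

# Illustrative paths: attitudes, literacy, and use feed interest,
# which in turn (alongside attitudes and literacy) predicts self-efficacy.
desc = """
interest ~ attitudes + ai_literacy + ai_use
self_efficacy ~ interest + attitudes + ai_literacy
"""

model = Model(desc)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors, p-values
```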