Matt Groh
@mattgroh.bsky.social
Assistant professor at Northwestern Kellogg | human AI collaboration | computational social science | affective computing
On my way to @ic2s2.bsky.social in Norrköping!! Super excited to share this year’s projects in the HAIC lab revealing how (M)LLMs can offer insights into human behavior & cognition

More at human-ai-collaboration-lab.kellogg.northwestern.edu/ic2s2

See you there!

#IC2S2
July 21, 2025 at 8:25 AM
This taxonomy offers a shared language (see our how-to guide on arXiv for many examples) to help people better communicate what looks or feels off.

It's also a framework that can generalize to multimedia.

Consider this: what do you notice at the 16s mark about her legs?
April 25, 2025 at 3:15 PM
Based on generating thousands of images, reviewing the AI-generated image and digital forensics literatures (along with social media and journalistic commentary), and analyzing 30k+ participant comments, we propose a taxonomy for characterizing diffusion model artifacts in images
April 25, 2025 at 3:15 PM
Scene complexity, artifact types, display time, and human curation of AI-generated images all play significant roles in how accurately people distinguish real and AI-generated images.
April 25, 2025 at 3:15 PM
We examine photorealism in generative AI by measuring people's accuracy at distinguishing 450 AI-generated and 150 real images

Photorealism varies from image to image and person to person

83% of AI-generated images are identified as AI better than random chance would predict
April 25, 2025 at 3:15 PM
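The "better than random chance" claim above boils down to a per-image comparison against a 50% guessing baseline. A minimal sketch of how such a check could look (a hypothetical illustration with made-up numbers, not the paper's actual analysis) is a one-sided binomial test:

```python
from math import comb

def binom_p_greater(k, n, p=0.5):
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p).

    Under the null, each participant guesses "AI" vs. "real"
    with probability p = 0.5 (random chance).
    """
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: one AI-generated image is shown to 100 participants,
# and 62 of them label it "AI". Is that detectably better than chance?
p_value = binom_p_greater(62, 100)
print(p_value < 0.05)  # a small p-value suggests the image is spotted above chance
```

An analysis like this, repeated per image, is one way a statistic such as "83% of AI-generated images are identified better than chance" could be computed; the actual test and thresholds used in the paper may differ.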
💡New paper at #CHI2025 💡

A large-scale experiment with 750k observations addressing:

(1) How photorealistic are today's AI-generated images?

(2) What features of images influence people's ability to distinguish real/fake?

(3) How should we categorize artifacts?
April 25, 2025 at 3:15 PM
📣 📣 Postdoc Opportunity at Northwestern

Dashun Wang and I are seeking a creative, technical, interdisciplinary researcher for a joint postdoc fellowship between our labs.

If you're passionate about Human-AI Collaboration and Science of Science, this may be for you! 🚀

Please share widely!
April 2, 2025 at 1:00 PM
V2 of the Human and Machine Intelligence course 😊🤖🧠 is in the books!

So many fantastic discussions as we witnessed the frontier of AI shift even further into hyperdrive✨

Props to students for all the hard work and big thanks to teaching assistants and guest speakers 🙏
March 20, 2025 at 12:53 AM
What is perception? What do we really see when we look at the world?

And, why does the amodal completion illusion lead us to see a super long reindeer in the image on the right?

This week @chazfirestone.bsky.social joined the NU CogSci seminar series to address these fundamental questions
March 7, 2025 at 4:01 PM
2024 marks the official launch of the Human-AI Collaboration Lab, so I wrote a one-page letter to introduce the lab, share highlights, and begin a lab tradition: an easy-to-digest annual letter, shared with friends and colleagues, reflecting on the year and what we're working on.
December 31, 2024 at 6:54 PM
Long before "deepfakes" and AI-generated media created anxiety about whether we can trust images' authenticity, @stewartbrand.bsky.social speculated on the "End of Photography as Evidence"

Re-reading this before joining a panel of lawyers to speak on deepfakes

go.activecalendar.com/FordhamUnive...
November 22, 2024 at 2:15 AM
The gap between thoughtful people spotting AI-generated poetry in a couple of seconds and the study's participants failing to do so reveals classic problems inherent to Imitation Game research:

Lack of domain expertise & lack of knowledge of AI's capabilities and limitations -> falling for & even preferring simulacra
November 22, 2024 at 1:31 AM
I'm recruiting a PhD student to join the Human-AI Collaboration Lab at Kellogg, NU CS, and @nicoatnu.bsky.social

If you're excited about computational social science, LLMs, digital experiments, real-world problem solving, this could be a great fit

Please reshare!

Deets 👇
November 19, 2024 at 9:41 PM
Why does ChatGPT outperform physicians + ChatGPT in this clinical vignette study?

User error seems to be the culprit

If users don't know how to interact with the technology (however easy it may seem), then the experiment misses out on what would happen if participants had basic knowledge of LLMs
November 18, 2024 at 3:36 PM