Neil Kirk
@drspeakmind.bsky.social
Reader in Cognitive (and MIND) Psychology with an interest in all things dialect and voice-y. Other interests include: the X-Men, AI, the gym and doomscrolling. 🏳️‍🌈
Reposted by Neil Kirk
As synthetic voices become indistinguishable from real ones, DIGIT spoke with psychologist Dr Neil Kirk about how our instinct to trust our own accents could make way for deepfakes targeting not just individuals, but entire regions.

www.digit.fyi/psychology-b...
#deepfakes #AIvoices #AyeRobot
Too Authentic to be Synthetic: The Psychology Behind AI Voice Scams
As AI voices become eerily realistic, new research shows that the instinct to trust our own accents could make us more vulnerable to deepfakes
www.digit.fyi
October 8, 2025 at 10:30 AM
Here's a feature on my recent AI Voice work - really honoured to have been asked to talk about this! www.digit.fyi/psychology-b...
October 8, 2025 at 10:34 AM
Very grateful to @the-sipr.bsky.social for funding this important work. 11/11
July 21, 2025 at 3:02 PM
💡 Why it matters: This could have real-world implications for designing public awareness campaigns and scam prevention messages. 10/11
July 21, 2025 at 3:02 PM
🏠 Take-Home Message: Simply telling people that AI voices can speak with a Scottish accent/dialect was far more effective than warning them to be vigilant. 9/11
July 21, 2025 at 3:02 PM
However, an explicit vigilance-based nudge warning about the dangers of AI voices and urging listeners “if in doubt, think AI” had no effect, unless paired with the capability message about AI’s linguistic abilities. 8/11
July 21, 2025 at 3:02 PM
A positively framed nudge highlighting AI’s capability to reproduce underrepresented accents and dialects significantly reduced this bias – in other words, changing their MINDSET made them more vigilant towards AI voices using these varieties. 7/11
July 21, 2025 at 3:02 PM
In this manuscript, I investigate whether simple informational nudges can shift these assumptions and reduce the bias for responding “Human”. Across two experiments, participants categorised voices as either Human or AI. 6/11
July 21, 2025 at 3:02 PM
Yet that assumption could be putting some language communities at greater risk of AI voice-based deception if they believe a voice speaking that way must be a real person. 5/11
July 21, 2025 at 3:02 PM
In my new paper, I introduce the concept of MINDSET: Minority, Indigenous, Non-standard, and Dialect-Shaped Expectations of Technology. It reflects the idea that people assume AI can’t convincingly reproduce underrepresented ways of speaking. 4/11
July 21, 2025 at 3:02 PM
I also suspect this is not unique to Scotland, but part of a global pattern affecting communities whose voices have historically been excluded from these systems. 3/11
July 21, 2025 at 3:02 PM
My previous work showed that listeners were more likely to believe an AI voice was a real human when it spoke in a local dialect. I think this happens because we’re not used to speech technology understanding these varieties - never mind speaking them! 2/11
July 21, 2025 at 3:02 PM
🧠 Is your mindset making you more vulnerable to AI voice-based deception?

I've got a new pre-print out: osf.io/preprints/ps...

Vigilance towards AI voices can be nudged through a change in MINDSET

Thread below 👇

#AI #Voice #Cybersecurity #Fraud #Psychology 1/11
OSF
osf.io
July 21, 2025 at 3:02 PM
New achievement unlocked - delivering the oration for Brian Cox. I didn’t faint, nor did he use any of his famous catchphrases on me. Success!
July 12, 2025 at 5:08 PM
Made another wee video about one of our recently published papers…
July 1, 2025 at 3:56 PM
… or take Monday off. (Yes that was me).
June 23, 2025 at 6:52 PM
I’m on TikTok and Instagram @ drspeakmind - please give me a follow and see how well you can spot AI voices! 🗣️
June 18, 2025 at 12:44 PM
“Enjoying” some time off work #ToughMudder
June 9, 2025 at 2:58 PM
Reposted by Neil Kirk
🎓 Fully funded PhD at Abertay University & SIPR: Enabling People to Identify Deepfakes. £20,780/year, fees paid, starts Oct 2025. Use eye-tracking to research deepfake detection. Apply by 30 June 👉 www.abertay.ac.uk/about/workin... #PhD #Cybersecurity #Deepfakes
June 5, 2025 at 6:43 PM
It’s always lovely to receive a “to me, love from me, well done on finishing your marking” present.
June 5, 2025 at 10:15 AM
Reposted by Neil Kirk
🚨New publication... Children's films contain inaccurate gender stereotypes, and these correspond to children's (and adults') implicit and explicit gendered associations.

onlinelibrary.wiley.com/doi/10.1111/...

Read thread for more details...
Mr Predator and Mrs Prey: Gender Stereotypes in Children's Films Correlate With Explicit and Implicit Gender Stereotyping
Children acquire gender stereotypes at a young age and these subsequently influence cognition and behavior. Stereotypes may be learned through a child's direct observation of gender differences as we....
onlinelibrary.wiley.com
May 29, 2025 at 2:12 PM
Reposted by Neil Kirk
This might make some language communities more vulnerable to AI voice-based deception. Luckily I’ve been given some funding to investigate this further, so watch this space!
April 18, 2025 at 6:00 PM
Do they factor self-indulgent puntastic titles into REF scores? I sure hope so! 😄
April 18, 2025 at 6:06 PM