Steph Allan
@eolasinntinn.bsky.social
930 followers 470 following 320 posts
Psychosis Postdoctoral Researcher and Occasional Educator at the University of Glasgow. Currently working on developing support for fears of recurrence. Likes cats, hates injustice. Here to learn. She / they. https://psychosisresearch.info/
Currently on the "justify everything" stage of registered report writing. Justifying "Why Scotland". Why indeed.
Have given myself a challenge of learning how to do a registered report - by doing!

Does anyone have an example of a stage 1 registered report that I could look at, please?
I feel the government want folk with serious mental illness to die. Speaking with my 'serious mental illness patient' hat on. See also whatever is happening with PIP.
Taking away eligibility for the Covid vaccine from almost anyone under 75, including carers, healthcare workers and most clinically vulnerable people, is a national scandal.

www.bbc.com/news/article...
Pharmacies facing angry patients over Covid jab confusion
Up to half of patients coming to some pharmacies are being turned away because they are not eligible.
Thanks for sharing my LinkedIn post. Appreciate the working-class solidarity, and, as someone doing mental health research in Glasgow, I'm excited to read it and share with colleagues when it's ready.
Reposted by Steph Allan
The thing to know about Trump's autism and paracetamol claims is that the cruelty is the point. They want women in pain and disabled people stigmatised. And they enjoy showing they can lie outrageously without accountability. I worry that debating the science is to play their game, by their rules.
Reposted by Steph Allan
In our study, led by the amazing Taylor Burns, we found that masking of autistic traits may leave autistic people vulnerable to identity distress, which in turn affects mental health. This means constantly adapting to neurotypical norms may make it hard for autistic people to hold on to who they are.
New research involving @drmbothapsych.bsky.social in our @durhampsych.bsky.social finds that identity distress (where someone has difficulty forming a cohesive sense of identity) is at the heart of higher rates of poor mental health experienced by autistic people. Find out more 👉 bit.ly/3KMtz88
Reposted by Steph Allan
Calling all UK Qualitative researchers working with Health-related Trials! We’d love your insights in a survey exploring your experience of the pace and timing of the qualitative research, as well as your experience of working with the trial team. 🔗 uofg.qualtrics.com/jfe/form/SV_...
Reposted by Steph Allan
On Saturday 20th September, Byres Hub will be open for tours and the chance to sit for a live portrait by Project Ability artists.

You can visit the exhibition area and go up to our room with a view to look out across St Mungo Square, the University's Western Campus and the south of Glasgow.
Images of 4 portraits from "I Am Here" exhibition
Reposted by Steph Allan
Research opportunity (UK only)! Understanding experiences of re-accessing secondary mental health services following discharge from Early Intervention in Psychosis (EIP) services.

We want to hear from you.

tinyurl.com/y2ku4e4y
Reposted by Steph Allan
ChatGPT was only pretending to write a book?
Wow, it really is human.
This is fascinating: www.reddit.com/r/OpenAI/s/I...

Someone “worked on a book with ChatGPT” for weeks and then sought help on Reddit when they couldn’t download the file. Redditors helped them realize ChatGPT had just been roleplaying/lying and there was no file/book…
Reposted by Steph Allan
“correlations with human output mean little to substantiate claims of human-likeness, especially when the input to the AI models tested is the output of human cognition in the first place”

A truly incredible piece, so many amazing quotes. Well worth the read and a much needed counter.
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why, in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation, and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles
Reposted by Steph Allan
A summary of links around the study: "Exploring Participant-Generated Examples of Social Change".

Short video on the findings: www.youtube.com/watch?v=ID5m...
University press release: www.uwe.ac.uk/news/new-soc...
Full article (open access - free for anyone to download): doi.org/10.1002/jcop...
YouTube video by Miles Thompson
Reposted by Steph Allan
samuel mehr @mehr.nz · Aug 27
There's a lawsuit about AI stealing your work. It's the same lawyers taking on Elsevier et al. in a separate case.

Academics:
1. Check if your work is in LibGen at www.theatlantic.com/technology/a...

2. If so, let the lawyers know at www.lieffcabraser.com/anthropic-au...
There are tons of graphic novels, academic papers, film and TV scripts, & prose novels/nonfiction on the LibGen list Anthropic used.

As settlement approaches, make it easy for the class action lawyers to contact you! Here’s how

Part 1: is your work in Libgen?

www.theatlantic.com/technology/a...
Search LibGen, the Pirated-Books Database That Meta Used to Train AI
Millions of books and scientific papers are captured in the collection’s current iteration.
I'm too old for the TikTok.
Can confirm. Knew you offline too!
Grateful to be with the perinatal team right now. Dread to think what it'd be like otherwise…