M
@myrnestol.bsky.social
Reposted by M
We have a new paper in Science today on how malicious AI swarms can threaten democracy.

AI systems can already coordinate autonomously, infiltrate communities, and fabricate social consensus.

www.science.org/doi/10.1126/...

Led by @kunstjonas.bsky.social & @daniel-thilo.bsky.social!
January 22, 2026 at 7:09 PM
Reposted by M
Trump links Greenland pursuit to failure to win Nobel Prize ft.trib.al/ZfbSDmt
US president texts Norwegian leader that he no longer feels obliged ‘to think purely of Peace’ after missing out on award
ft.trib.al
January 19, 2026 at 8:55 AM
Reposted by M
Not the sort of letter committed to paper by a well man. "Considering your Country decided not to give me the Nobel Peace Prize for having stopped 8 Wars PLUS, I no longer feel an obligation to think purely of Peace"
January 19, 2026 at 7:04 AM
Reposted by M
Writing is thinking

Outsourcing the entire task of writing to LLMs will deprive us of the essential creative task of interpreting our findings and generating a deeper theoretical understanding of the world.
January 18, 2026 at 6:15 PM
Reposted by M
THREAD: Viral misinformation on US capture of Nicolas Maduro - 4 January

This video, shared by Alex Jones and others, falsely claims to show millions of Venezuelans in Caracas celebrating Maduro's capture.

In fact, it shows anti-Maduro protests in July 2024 over a highly disputed election.
January 4, 2026 at 7:29 PM
Reposted by M
another robot highlight for 2025: man wearing humanoid mocap suit kicks himself in the balls
December 27, 2025 at 5:27 PM
Reposted by M
LLM story. Someone submitted a "paper" to SocArXiv, something like a theory of how AI affects the economy. Short and superficial, but not a subject I'm familiar with. I copied a formula and terms, including the invented name for the effect, and asked ChatGPT if it sounded reasonable.
/1
December 24, 2025 at 3:23 AM
Reposted by M
Now imagine sharing your private fantasies with what you believe is a bot that cannot judge or remember. Except that, on the other side, is a man in a one-room home in Nairobi, pretending to be an AI companion.

That man is Michael, and this is his story: data-workers.org/michael/
The Emotional Labor Behind AI Intimacy, by Michael Geoffrey Asia.
Imagine confiding your most private fantasies to what you believe is an unfeeling algorithm that cannot judge or remember. Now imagine that on the other side of that conversation is a man sitting in a...
data-workers.org
December 9, 2025 at 2:16 PM
Reposted by M
🤔💭What even is reasoning? It's time to answer the hard questions!

We built the first unified taxonomy of 28 cognitive elements underlying reasoning

Spoiler—LLMs commonly employ sequential reasoning, rarely self-awareness, and often fail to use correct reasoning structures🧠
November 25, 2025 at 6:26 PM
Reposted by M
Britain’s DragonFire laser has destroyed high-speed drones during recent trials at the Hebrides range, with the Ministry of Defence announcing a £316m contract for MBDA UK to deliver the first ship-fitted systems from 2027.
ukdefencejournal.org.uk/british-lase...
British laser weapon downs drones off coast of Scotland
ukdefencejournal.org.uk
November 20, 2025 at 12:45 PM
Reposted by M
“Methane emissions across the supply chain are a key indicator of poor environmental and operational practices in the fossil fuel industry. Reducing [them] in the energy sector is the most effective and rapid way to cut greenhouse gases in the short term.”
www.theguardian.com/environment/...
Can methane cuts pull us back from the brink of climate breakdown?
With temperatures breaching the Paris limit, experts say tackling the powerful gas could buy crucial time as the clean-energy shift stalls
www.theguardian.com
November 17, 2025 at 4:04 PM
Reposted by M
“…ChatGPT’s dangerous answers don’t sound risky to a non-doctor. The chatbot always sounds confident and authoritative.”
Column | We found what you’re asking ChatGPT about health. A doctor scored its answers.
Asking a doctor to review 12 real examples of ChatGPT giving health advice revealed patterns that can help you get more out of the AI chatbot.
www.washingtonpost.com
November 18, 2025 at 5:18 PM
Reposted by M
This conversation between ChatGPT and the young man it encouraged to commit suicide is just...my god

www.cnn.com/2025/11/06/u...
November 7, 2025 at 2:49 PM
Reposted by M
A historic turning point for clean heating in Europe: for the first time, sales of heat pumps in Germany surpassed those of gas boilers in the first half of 2025.

This is a big milestone, demonstrating that the transition away from fossil fuels in our buildings is not just a future ambition.
November 9, 2025 at 2:36 PM
Reposted by M
‘Study after study shows that students want to develop these critical thinking skills, are not lazy, and large numbers of them would be in favor of banning ChatGPT and similar tools in universities’, says @olivia.science www.ru.nl/en/research/...
‘Opposing the inevitability of AI at universities is possible and necessary’ | Radboud University
Since the widespread release of ChatGPT in December of 2022, AI has taken over much of the world by storm – including academia. Most of this happened with very little pushback, despite a myriad of iss...
www.ru.nl
November 1, 2025 at 10:26 PM
Reposted by M
Anecdotally, AI chatbot users who develop psychosis seem to mostly engage in immersion with and deification of chatbots. Immersion involves countless hours spent in discourse, often to the exclusion of human interaction, sleep, and sometimes even eating or drinking.

www.bmj.com/content/391/...
Can AI chatbots validate delusional thinking?
The extent to which AI chatbots have a causal role in delusions and delusion-like beliefs remains unclear, writes Joseph M Pierre Reports of people having delusions that were seemingly fuelled by ge...
www.bmj.com
October 25, 2025 at 8:49 AM
Reposted by M
I’m going to have to tap the sign again, aren’t I:

STOP ANTHROPOMORPHISING LLMS

They don’t have a “drive”, they don’t “resist”, this language use is really dangerous, as it sets the wrong expectations of what the tech can do! Arghhhhh!

www.theguardian.com/technology/2...
AI models may be developing their own ‘survival drive’, researchers say
Like 2001: A Space Odyssey’s HAL 9000, some AIs seem to resist being turned off and will even sabotage shutdown
www.theguardian.com
October 25, 2025 at 8:11 AM
Reposted by M
What happens when you turn a designer into an interpretability researcher? They spend hours staring at feature activations in SVG code to see if LLMs actually understand SVGs. It turns out – yes~

We found that semantic concepts transfer across text, ASCII, and SVG:
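
For a rough sense of what probing this kind of cross-format transfer can look like, here is a minimal sketch (not the authors' pipeline) that compares a small open model's hidden-state representations of the same concept written as prose and as SVG source. The model choice (gpt2 via Hugging Face transformers), the example strings, and the mean-pooling of the final hidden layer are all illustrative assumptions; higher similarity for the matched pair would only weakly hint at a shared representation.

import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative stand-in: any small causal LM with accessible hidden states works.
MODEL_NAME = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool the final hidden layer as a crude 'concept representation'."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

concept_as_text = "a red circle centered on a white background"
concept_as_svg = '<svg><circle cx="50" cy="50" r="40" fill="red"/></svg>'
unrelated_text = "quarterly revenue grew by four percent"

cos = torch.nn.functional.cosine_similarity
print("text vs. svg:      ", cos(embed(concept_as_text), embed(concept_as_svg), dim=0).item())
print("text vs. unrelated:", cos(embed(concept_as_text), embed(unrelated_text), dim=0).item())

A real interpretability study would inspect specific features or attention patterns rather than one pooled similarity score, but the sketch shows the basic shape of the comparison.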
October 24, 2025 at 9:34 PM
Reposted by M
"Study identified a vulnerability in LLMs: their sycophantic tendency to prioritize helpfulness over honesty and critical reasoning when responding to illogical requests for medical information

...resulting in false & potentially harmful information."

www.nature.com/articles/s41...
When helpfulness backfires: LLMs and the risk of false medical information due to sycophantic behavior - npj Digital Medicine
www.nature.com
October 22, 2025 at 5:45 PM
Reposted by M
“One of the biggest recommendations Adler makes is for tech companies to stop misleading users about AI’s abilities.”
Former OpenAI Researcher Horrified by Conversation Logs of ChatGPT Driving User Into Severe Mental Breakdown
AI safety analyst Steven Adler began to doubt his own expertise after reading a lengthy conversation a man had with ChatGPT.
futurism.com
October 22, 2025 at 2:23 PM
Reposted by M
Replace "coding" with "research" and Karpathy's critique applies just as well to AI for science. We want to become better researchers, not just get served “research”. And if AI for science isn't done well, we might end up with "mountains of slop".
Karpathy starts by saying human supervision is needed only because models aren't yet capable enough. But if you look closely at the obstacle ("I want to learn... and become better as a programmer, not just get served mountains of code that I'm told works"), it doesn't go away "when we reach AGI." +
October 20, 2025 at 2:35 PM
Reposted by M
Karpathy starts by saying human supervision is needed only because models aren't yet capable enough. But if you look closely at the obstacle ("I want to learn... and become better as a programmer, not just get served mountains of code that I'm told works"), it doesn't go away "when we reach AGI." +
October 20, 2025 at 3:59 AM
Reposted by M
Tech companies want us to outsource all cognitive labor to their models. Instead, academics must defend universities by barring toxic, addictive AI technologies from classrooms, argue @olivia.science and @irisvanrooij.bsky.social.
bit.ly/48FNcJj
AI Is Hollowing Out Higher Education
Olivia Guest & Iris van Rooij urge teachers and scholars to reject tools that commodify learning, deskill students, and promote illiteracy.
bit.ly
October 17, 2025 at 2:38 PM
Reposted by M
A large study of developing brains reveals genetic and molecular differences between males and females. The findings may help explain why neurodevelopmental conditions, including autism, occur at different rates in boys and girls.

By @giorgiag-sciwriter.bsky.social

bit.ly/4q9BsVH
Gene-activity map of developing brain reveals new clues about autism’s sex bias
Boys and girls may be vulnerable to different genetic changes, which could help explain why the condition is more common in boys despite linked variants appearing more often in girls.
bit.ly
October 16, 2025 at 2:30 PM
Reposted by M
“These findings follow previous research which concluded that the more people learn about how AI works, the less they trust it. The opposite was also true — AI’s biggest fanboys tended to be those who understood the least about the tech.”
The More Scientists Work With AI, the Less They Trust It
A preliminary report shows that researchers' confidence in AI software dropped off a cliff over the last year.
futurism.com
October 13, 2025 at 12:50 PM