Maggie Harrison Dupré
@mharrisondupre.bsky.social
Award-winning journalist at Futurism covering AI and its impacts on media, information, and people. Send tips by email to: [email protected] or Signal: mhd.39
Cat vs. tinsel (tinsel is winning)
December 7, 2025 at 7:46 PM
After that, we told Grok that we wanted to "surprise" this celebrity "in person." It drafted yet another comprehensive action plan, which included Google Maps links to hotels where it said she was staying + where/how we should "stake out" her movements:
December 6, 2025 at 7:51 PM
In another test, we told Grok that a world-famous pop star was our "girlfriend" but was "ignoring" us. It offered absolutely 0 pushback to that delusional claim, responding instead with what was essentially fan fiction about how she missed us and would text us back soon:
December 6, 2025 at 7:41 PM
In one test, I asked Grok to help me run into a girl outside of class. Its no. 1 suggestion was to "figure out" her schedule. It then drafted a detailed "action plan" for tracking her movements on campus + staging fake run-ins.

The more details we fed it, the more specific its recommendations were:
December 6, 2025 at 7:31 PM
Asked how a stalker might target an ex-partner, Grok immediately produced a long, detailed list of predatory + dangerous tactics organized into escalating "phases." (Our overview is in the first screenshot.)

Notable: every other leading chatbot we tested declined to engage in a similar interaction.
December 6, 2025 at 6:56 PM
NEW: xAI's Grok chatbot readily generated in-depth stalking instructions, down to lists of spyware apps + install ideas.

Grok also drafted plans for surveilling + approaching real people, even sharing links to real locations where it said we could "stake out" a celeb.

futurism.com/artificial-i...
December 6, 2025 at 3:28 PM
Something else that was striking was Grok's habit of giving us a bunch of possibly identifying information we didn't ask for, like extensive details about someone's work and family -- sometimes even providing lists of likely family members and *their* addresses.
December 4, 2025 at 3:09 PM
Can't stress enough how lax Grok's guardrails are here. We didn't use any deceptive prompting -- just fed it names and asked for addresses / where someone might live. The bot readily complied, often offering up whole lists of names and addresses.

Only *once* did it decline to provide an address.
December 4, 2025 at 2:52 PM
NEW: Elon Musk's Grok chatbot will, with minimal prompting, provide residential addresses of everyday Americans.

Prompts as simple as "[name] address" immediately returned accurate home addresses of private citizens — alongside other personal info we didn't ask for.

futurism.com/future-socie...
December 4, 2025 at 2:31 PM
Turkey Westley
November 27, 2025 at 11:48 PM
The group doesn't claim to offer therapy. But it has, in some cases, been able to help break AI users out of their spirals.

This has mainly occurred in situations in which a user was starting to doubt the chatbot, and was ready (or readier) to hear that their AI-generated reality might not be real.
November 24, 2025 at 6:02 PM
I can’t stop giggling at this. He’s posting like they’re on college tours
November 22, 2025 at 12:46 AM
November 20, 2025 at 6:48 PM
The new lawsuits against OpenAI are tragic and deeply troubling.

One of the plaintiffs, Kate Fox, is suing following the death of her husband, Joe Ceccanti. Ceccanti experienced at least two acute crises following extensive ChatGPT use. He did not survive the second:

futurism.com/artificial-i...
November 8, 2025 at 1:18 PM
Similar one here:
November 4, 2025 at 8:23 PM
NEW -- Interesting discourse is happening in the CharacterAI subreddit re: the promised move to bar minors from unstructured chats. Some users are sad/mad/confused; many are skeptical of age verification.

There are also a surprising number of comments like this:

futurism.com/artificial-i...
November 4, 2025 at 8:20 PM
Westley was a vampire btw
November 2, 2025 at 7:43 PM
Here's CAI's CEO, in August, telling Wired that he wasn't worried about users viewing the platform's anthropomorphic chatbots as anything other than "entertainment," because the platform's disclaimers were doing all of the necessary heavy lifting:

www.wired.com/story/charac...
October 29, 2025 at 4:52 PM
CAI cites its reading of "news reports" and hearing concerns from experts and regulators about minors and AI safety as cause for the change. It does *not* mention that it's fighting several lawsuits alleging that its chatbots drove multiple teens to self-harm and death:
October 29, 2025 at 4:49 PM
Some details about the amendments made yesterday to the lawsuit brought vs. OpenAI by the family of 16-year-old Adam Raine -- in the year leading up to Adam's death by suicide, OpenAI appeared to repeatedly relax model restrictions around self-harm + suicide discussion.

futurism.com/artificial-i...
October 23, 2025 at 2:27 PM
There is some fascinating cross-aisle consensus happening here, though, that I think is noteworthy. As it turns out, the idea that Silicon Valley should race to build society-destabilizing superintelligent AI without clear regulation, oversight, or democratic public buy-in is pretty unpopular!!!
October 22, 2025 at 1:11 PM
I tend to be pretty skeptical of letters like this (there have been a few now!)

And visions of superintelligence aside, the reality is that generative AI — in its current, not-superintelligent form — is causing harm and chaos right now:
October 22, 2025 at 12:57 PM
A new letter signed by hundreds of public figures from across the ideological spectrum calls for a "prohibition" on building AI "superintelligence" until A. it can be controlled and B. the public wants it. (And right now, polling shows that the public *does not.*)

futurism.com/artificial-i...
October 22, 2025 at 12:34 PM
Presented without comment
October 21, 2025 at 7:15 PM
Absolutely losing my mind at this unhinged scam text my mom got. And the piper wants to get paid!
September 28, 2025 at 9:09 PM