Meredith Whittaker
@meredithmeredith.bsky.social
President of Signal, Chief Advisor to AI Now Institute
LOL all morning picturing some finance quant slouched at his desk prompting DALL-E, like, "make me a picture of pikachu having sex with the Apple logo" or whatever tf those guys use these things for, then throwing up his hands like "WHY WON'T THIS FIND ALPHA?!"

www.bloomberg.com/news/article...
October 16, 2025 at 12:39 PM
New speakers. I'm officially a dad.
September 23, 2025 at 8:42 AM
📣 NEW -- In The Economist, discussing the privacy perils of AI agents and what AI companies and operating systems need to do--NOW--to protect Signal and much else!

www.economist.com/by-invitatio...
September 9, 2025 at 11:44 AM
New Sunday Times profile in which I succeed, like a fencer in a 2hr marathon match, in fending off Qs abt my personal life & consistently turning focus back to my work & ideas.

(Contra the interviewer's claim, many ppl do know me! They're called friends, & you know who you are ❤️)

archive.is/BtZD0
August 11, 2025 at 3:46 PM
July 25, 2025 at 2:52 PM
Just saying...
June 30, 2025 at 1:09 PM
'Meredith,' some guys ask, 'why won't you shove AI into Signal?'

Because we love privacy, and we love you, and this shit is predictable and unacceptable. Use Signal ❤️
June 19, 2025 at 7:59 AM
Use Signal. We promise, no AI clutter, and no surveillance ads, whatever the rest of the industry does. <3
June 16, 2025 at 3:30 PM
Tomorrow, Thurs May 29, Nuits sonores, Lyon, France!

I'm coming to dance, I'm coming to party, I'm coming to eat, but first I'm sitting down to talk about tech, privacy, Signal, and what it takes to make a world worth living in <3
May 28, 2025 at 9:43 AM
Shot shot, shot missed
March 1, 2025 at 12:33 AM
February 19, 2025 at 4:42 PM
Nick Cage, hi, call me 🫶

variety.com/2025/film/ne...
February 6, 2025 at 2:24 PM
Thinking about this classic recently, which is available free here:

monoskop.org/images/d/df/...
February 4, 2025 at 9:19 AM
New Year's resolutions ✨
January 2, 2025 at 7:02 PM
🥧💙
November 28, 2024 at 2:44 PM
This gets more and more interesting the deeper in I go...
November 23, 2024 at 4:07 PM
Anyone spending the brief liminal window between Christmas and Gregorian new year in Hamburg, with the hackers, at CCC?

I am! And I'll be presenting new research and a new talk ☺️
halfnarp.events.ccc.de
November 18, 2024 at 8:28 AM
Opened a years old doc called "data" and found a poem
November 17, 2024 at 1:44 PM
New Paper! w/@HeidyKhlaaf + @sarahbmyers. We put the narrative on AI risks & nat'l security under a microscope, finding that the focus on hypothetical AI bioweapons is warping policy and ignoring the real & serious harms of current AI use in surveillance, targeting, etc. 1/
October 22, 2024 at 3:50 PM
Difficult to take Big Tech's concerns about "AI disinfo" seriously when...

...behold, Google Gemini!

(The non-answer re Palestine would need to be intentionally built into the system, as would the "double checked" cert-via-highlighting of the text that de-maps Palestine.)
March 4, 2024 at 8:59 PM
If I wanted court drama I'd read Stendhal, who understood how power works and spent hundreds of pages illuminating characters whose desire for it blinded them to this reality.

Or, it was Microsoft all along...
November 20, 2023 at 6:48 PM
Someone shared a writing contract where the outlet says it can detect AI-generated writing. Ofc there's no reliable way to do this.

Bespoke AI guardrails amounting to a parent telling a kid "I can always tell when you lie," hoping credulity & fear lead them never to test it.
September 9, 2023 at 12:49 PM
📢NEW PAPER!

Where @davidthewid, @sarahbmyers & I unpack what Open Source AI even is.

We find that the terms ‘open’ & ‘open source’ are often more marketing than technical descriptor, and that even the most 'open' systems don't alone democratize AI 1/

papers.ssrn.com/sol3/papers.cf…
August 17, 2023 at 8:09 PM
Ordinary harms (like replicating and naturalizing structures of marginalization that entrench historical inequality) can be outweighed by ordinary benefits (like tech guys getting rich). What Hinton is worried about is existential risk (ghost stories).
May 3, 2023 at 12:55 AM
This is such a powerful example. And TBH one of the best ways to "regulate" AI: organized workers demanding dignified and safe working conditions, rejecting the idea that AI-enabled degradation of work is inevitable!
May 2, 2023 at 1:12 PM