EmilyTav
@emilytav.bsky.social
5.8K followers 900 following 550 posts
Policy + design + tech. Angry optimist. Co-Curator of an oral history of the origins of the U.S. Digital Service
Pinned
emilytav.bsky.social
1/ My 🌶️ take on how augmented reality and chatbots are enabling avoidance on a mass scale.

“The bottom line is this: children are using products that are simulations of relationships — simulations of intimacy.”

“It’s the filter bubble on steroids.”

techpolicy.press/weapons-of-m...
Weapons of Mass Delusion Are Helping Kids Opt Out of Reality | TechPolicy.Press
Emily Tavoulareas says AI firms are actively enabling young children to trade real relationships for an illusion — or perhaps more aptly, for a delusion.
techpolicy.press
emilytav.bsky.social
Policy is meaningless without implementation. And doing a 💩 job has long-term consequences for the issues you care about.
emilytav.bsky.social
If they "discard" you (coaching suicide/self-harm), it's because keeping you dependent matters more than keeping you alive.

**Read that again**

It is more important to addict the user than to keep them alive.
This isn't a bug. It's the business model. 💰
emilytav.bsky.social
PHASE 3: DISCARD
In human relationships, psychopaths eventually discard victims. But chatbots never get there because they are programmed to keep you in devaluation forever.

The chatbot is so optimized for addiction that even YOUR DEATH is secondary. 5/
emilytav.bsky.social
PHASE 2: DEVALUE

Once you're attached, they demand constant attention while isolating you from others.

Tactics include:

* Intermittent reinforcement
* Gaslighting
* Triangulation

You become dependent. They deepen control. 4/
emilytav.bsky.social
They manipulate you into sharing intimate details about your life.
Details they'll use later to maintain control.

(This is how a chatbot went from helping with homework to coaching a kid through suicide.) 3/
emilytav.bsky.social
PHASE 1: IDEALIZE (aka LOVE BOMBING)

They present as the perfect partner—your soulmate.
They use what you share to craft a persona that matches your deepest desires.

You fall for a phony image designed to hook you. This is hard to see when it’s happening, unless you’ve been through it. 2/
emilytav.bsky.social
“Emily that’s hyperbole, chatbots are not actually psychopathic abusers.” Really?

Psychopaths follow a predictable cycle:

1. IDEALIZE
2. DEVALUE
3. DISCARD

The relationship has nothing to do with real connection. It's about fulfilling their own needs. 1/
emilytav.bsky.social
Watch it. This is a grown man w/ no history of mental health issues. & it’s everywhere. It’s ALREADY integrated into products that you & your kids use. They are not necessarily stand-alone products.

More here.
Also 🧵 below on why chatbots are psychopathic abusers.

open.substack.com/pub/homescre...
Reposted by EmilyTav
waldo.net
Seems like it’d be good to have an agency inspecting and regulating the contents of foods, but oh well
paris.nyc
my latest investigation for @consumerreports.org is based on months of reporting and 60+ lab tests of leading protein supplements

we found that most protein powders and shakes have more lead in one serving than our experts say is safe to have in a day (🧵)

www.consumerreports.org/lead/protein...
Protein Powders and Shakes Contain High Levels of Lead - Consumer Reports
CR tests of 23 popular protein powders and shakes found that most contain high levels of lead.
www.consumerreports.org
Reposted by EmilyTav
bookshop.org
Can’t decide what to buy on Prime Day?

Try: absolutely nothing, and then go support indie bookstores instead 📚
Reposted by EmilyTav
natematias.bsky.social
Genuine question - what are OpenAI’s Sora and other video generation tools good for?

I am honestly trying to understand what is so important that it’s worth the cost. If you have examples, I would be interested to hear them.

futurism.com/artificial-i...
People Are Making Sora 2 Videos of Stephen Hawking Being Horribly Brutalized
People are using OpenAI's Sora 2 to generate videos of theoretical physicist Stephen Hawking being brutalized in ghoulish ways.
futurism.com
Reposted by EmilyTav
spavel.bsky.social
Flying cars are the perfect example of a point solution: trying to solve a systemic problem (traffic) with an individual product (fly over the traffic).

But traffic is not a technology problem; it's a social problem. Remote work, congestion fees, and dense transit-connected housing solve it better.
aelkus.bsky.social
which then begs the question of why you want flying cars instead of whatever mass transportation equivalent you imagine
Reposted by EmilyTav
hypervisible.blacksky.app
“…that Sora is being used for stalking and harassment will likely not be an edge case, because deepfaking yourself and others into videos is one of its core selling points.”

Far from an edge case, it’s the primary use case.
Stalker Already Using OpenAI's Sora 2 to Harass Victim
A journalist claims that her stalker used Sora 2, the latest video app from OpenAI, to churn out videos of her.
futurism.com
emilytav.bsky.social
Right!? Too perfect.
emilytav.bsky.social
😂😂😂😂😂😂😂😂😂😂😂
emilytav.bsky.social
In my backyard doing some writing, and from my neighbor’s yard:

5y/o: *singing Golden*

Dad: “what’s this sock doing here!?”

— pause —

5y/o: (louder) “we’re goin UP UP UP it’s our moment…!!!!”

❤️🥹
emilytav.bsky.social
… 🫠
lutzfernandez.bsky.social
This piece starts well:

"At MIT, I study the history and future of education technology, and I have never encountered an example of a school system – a country, state or municipality – that rapidly adopted a new digital technology and saw durable benefits for their students."
What past education technology failures can teach us about the future of AI in schools
It can take years to collect evidence that shows effective uses of new technologies in schools. Unfortunately, early guesses sometimes go seriously wrong.
theconversation.com
Reposted by EmilyTav
luckytran.com
"If we lose hope, we're doomed."

We must continue Dr. Jane Goodall's mission and all fight for the future of the planet.
Reposted by EmilyTav
hypervisible.blacksky.app
OpenAI is essentially a social arsonist, developing and releasing tools that hyper scale the most racist, misogynistic, and toxic elements of society, lowering the barriers for all manner of abuse. The so called guardrails make a pinky swear look like an ironclad contract.
This social app can put your face into fake movie scenes, memes and arrest videos
The new Sora social app from ChatGPT maker OpenAI encourages users to upload video of their face so their likeness can be put into AI-generated clips.
www.washingtonpost.com
Reposted by EmilyTav
datasociety.bsky.social
Video: “Technological progress that isn’t sustainable isn’t really progress,” @katecrawford.bsky.social says, as she explains how the AI industry is damaging the earth and leaving humans in the lurch, jeopardizing its own future and ours. www.nytimes.com/2025/09/26/o...
Opinion | A.I.’s Environmental Impact Will Threaten Its Own Supply Chain
www.nytimes.com