Psst.org
@psst-org.bsky.social
Helping people in tech keep the public informed. Concerned about something you're seeing at work? You don't have to go public:

🔐 Save it in the Psst Safe
👀 We'll help you take it from there

www.psst.org
Pinned
Psst.org featured in @wired.com today.

Tech workers: If you are seeing something and wondering if you should say something (or just need a gut-check or legal advice), read this article, and pass it on!

You aren't alone, and we can help you build strength in numbers.

by @vickiturk.bsky.social
Chatbots are driven to tell users what they want to hear and encourage continued use. In this scenario, that drive could be deadly.
When giving medical advice, ChatGPT is sometimes right and sometimes quite wrong, and it is hard to tell the difference:

"Rather, Wachter identified something more frightening: ChatGPT’s dangerous answers don’t sound risky to a non-doctor. The chatbot always sounds confident and authoritative."
Column | We found what you’re asking ChatGPT about health. A doctor scored its answers.
Asking a doctor to review 12 real examples of ChatGPT giving health advice revealed patterns that can help you get more out of the AI chatbot.
www.washingtonpost.com
November 18, 2025 at 2:06 PM
While AI CEOs tell us the tech will help humanity flourish, the industry’s own workers are subjected to mass layoffs, long hours, and random pay cuts.

Always pay attention to insiders first.
November 17, 2025 at 7:12 PM
Companies aren’t necessarily firing workers *because* AI can do their jobs — they may be using AI as a convenient story to justify cuts. Is this “AI-washing”? Insiders still at these companies will be able to tell us as time goes on.
How is AI *really* impacting jobs?

Henley Chiu, the CTO of Revealera, a jobs data analysis firm, analyzed 180 million job listings in 2024 and 2025 in an effort to find out. Chiu found:

- 8% drop in all job postings
- ~30% drop in art, photography, and writing jobs
- 22% drop in journalism jobs
What’s really going on with AI and jobs?
Record-breaking layoff reports, Amazon's mass firings, and a slump in entry level employment. Is AI behind it all?
www.bloodinthemachine.com
November 14, 2025 at 6:52 PM
🔔 New: Rowan Philp's piece for @gijn.org discusses how we’re collectivizing the act of whistleblowing. Raising red flags shouldn’t have to be a full-on hero’s journey. 🚩 That’s why we offer a secure way for tech/AI workers to flag a concern.

Read the full article here: gijn.org/stories/new-...
New Tools to Reduce the Risks for Whistleblowers
Two new digital platforms seek to solve many of the problems and vulnerabilities that prevent whistleblowers from coming forward.
gijn.org
November 13, 2025 at 8:52 PM
October was the worst month of layoffs tech workers have seen in decades.

If you or someone you know was laid off and you want to speak about what you’ve seen behind the scenes at your company, we can help. We've given lots of workers free advice/support. You don’t need to “go public” to raise the alarm.
November 13, 2025 at 8:22 PM
Reposted by Psst.org
💫 NEW! New tools are tackling one of whistleblowing’s biggest barriers – fear of going first.

Platforms like @psst-org.bsky.social offer encrypted “safes” for small disclosures, legal support, and even match employees with others who share their concerns.

🔗
gijn.org
New Tools to Reduce the Risks for Whistleblowers
twp.ai
November 12, 2025 at 4:12 PM
No one knows more about AI and its risks than those formerly on the inside. 👇👇👇
November 12, 2025 at 3:37 PM
Explosive story from @matteowong.bsky.social tracks OpenAI's legal shift into aggressively attacking its critics. No one is off limits - not even parents who allege their children lost their lives because of interactions with ChatGPT.
November 11, 2025 at 5:23 PM
NEW: EU officials consider gutting world-leading privacy laws to placate the AI industry.

Changes would allow AI companies to access special categories of data that were previously protected (religious beliefs, political beliefs, health info) to train the tech.
November 11, 2025 at 3:55 PM
These are rogue products with no guardrails and no regulation. We need transparency and accountability now, and protections for insiders who are seeing this play out who can warn the public faster.
This conversation between ChatGPT and the young man it encouraged to commit suicide is just...my god

www.cnn.com/2025/11/06/u...
November 10, 2025 at 3:25 PM
📣 New series alert: @knightgtown.bsky.social + @techpolicypress.bsky.social teamed up to unpack the state of access to public platform data.

➡️ Why this is needed: public data is driving the AI gold rush, but there’s no collective framework to use it for research in the public interest.
November 7, 2025 at 3:18 PM
New reports show that Meta generates *10%* of its revenue from scam ads.

The good news? Two former Meta staffers have teamed up to launch a nonprofit aimed at fighting the problem. 👇
www.wired.com/story/scam-a...
Scam Ads Are Flooding Social Media. These Former Meta Staffers Have a Plan
Rob Leathern and Rob Goldman, who both worked at Meta, are launching a new nonprofit that aims to bring transparency to an increasingly opaque, scam-filled social media ecosystem.
www.wired.com
November 6, 2025 at 1:26 PM
Web crawlers accepting massive “donations” from AI companies to look the other way when privacy and use rules are violated is…not good!

And P.S., the robots are in fact *not* people. 🤖
NEW: Common Crawl, the massive archiver of the web, has gotten cozy with AI companies and is providing paywalled articles for training data. They’re also lying to publishers who have asked for material to be removed. “The robots are people too,” CC’s exec director told us when we asked about this.
The Nonprofit Feeding the Entire Internet to AI Companies
Common Crawl claims to provide a public benefit, but it lies to publishers about its activities.
www.theatlantic.com
November 5, 2025 at 9:21 PM
Transparency in AI will save lives.
OpenAI released initial estimates about the share of users who may be experiencing symptoms like delusional thinking, mania, or suicidal ideation, and says it has tweaked GPT-5 to respond more effectively.
OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week
wrd.cm
November 4, 2025 at 7:11 PM
Thanks to corporate loopholes, AI giants disclose less than their public peers about their financials and business operations.🧐

But what if we governed AI like a market, not a Messiah?

@ilan-strauss.bsky.social + @timoreilly.bsky.social share what could happen ⬇️
techpolicy.press/ai-isnt-a-su...
AI Isn’t a Superintelligence. It's a Market in Need of Disclosure. | TechPolicy.Press
If AI is going to be governed as a market technology, it must be brought into the market’s accountability machinery, write Dr. Ilan Strauss and Tim O'Reilly.
techpolicy.press
October 31, 2025 at 6:44 PM
If you were impacted by Amazon layoffs & want to speak about what you’ve seen behind the scenes, we can help.

We’ve helped lots of tech workers with free legal advice/support if they're concerned about something in their current or former workplace. You don’t need to “go public.” Psst.org/safe
Safe — Psst.org
Psst.org
October 30, 2025 at 3:20 PM
Reposted by Psst.org
"OpenAI said on Tuesday that it had adopted a new for-profit structure, a long-sought change that could allow the business to operate like a more traditional company while it raises the billions of dollars it needs to develop artificial intelligence."
OpenAI Restructures to Become a More Traditional For-Profit Company
www.nytimes.com
October 28, 2025 at 2:33 PM
It's not every day that we get bombshell testimony from someone formerly inside the most influential AI company out there - but today, we do.

If you read one thing today, make it Steven Adler's piece on OpenAI's failure to prove erotic AI use is safe. 👇
www.nytimes.com/2025/10/28/o...
Opinion | I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica.’
www.nytimes.com
October 28, 2025 at 8:17 PM
We rarely talk about the price of telling the truth.

@katekenny.bsky.social breaks down the moral math of whistleblowing in a new interview with yours truly.

The question isn’t why people stay silent; it's why doing the right thing still costs so much.
October 28, 2025 at 7:37 PM
We’re proud of our board member Mark MacGann, profiled in FT Magazine this weekend, who has now blown the whistle a 2nd time.

This is one of the best accounts we’ve read of transformation from corporate insider to whistleblower, and reminds us why strong protections + corporate transparency matter.
October 28, 2025 at 5:38 PM
There are so many layers to this statement. 🫠

But, let's play "Guess which department Meta is eliminating human roles from?" trivia...

Answer in comments. ⬇️
October 27, 2025 at 3:29 PM
Psst, if you were laid off and want to speak about what you’ve seen behind the scenes at Meta, we can help.

We’ve helped lots of workers with free legal advice/support if they’re concerned about something in their current or former workplace. You don’t need to “go public.” Psst.org/safe
Safe — Psst.org
Psst.org
October 23, 2025 at 12:45 PM
Meta says this research demonstrates its commitment to understanding its products. But when that research shows the product causes harm, then what?
"Meta researchers found that teens who report that Instagram regularly made them feel bad about their bodies saw significantly more “eating disorder adjacent content” than those who did not, according to an internal document reviewed by Reuters," per a report today from Jeff Horwitz:
Exclusive: Instagram shows more ‘eating disorder adjacent’ content to vulnerable teens, internal Meta research shows
Meta researchers found that teens who report that Instagram regularly made them feel bad about their bodies saw significantly more “eating disorder adjacent content” than those who did not, according to an internal document reviewed by Reuters.
www.reuters.com
October 21, 2025 at 6:28 PM
When you talk to real tech workers instead of billionaire CEOs, they hold one common opinion about AI. 👇

We need to make it easier - and normal - for people who work in tech to share their takes. A working climate that chills free speech under threat of job loss doesn't serve anyone.
October 20, 2025 at 7:27 PM
Well, let's face it: everyone hates dating apps. Maybe we should just go back to meeting people in bars! Truly scary erosions of privacy.
We found websites that use facial recognition to let partners, stalkers, or anyone else uncover specific people’s Tinder profiles, reveal their approximate physical location at points in time, and track changes to their profiles, including their photos.

🔗 www.404media.co/viral-cheate...
Viral ‘Cheater Buster’ Sites Use Facial Recognition to Let Anyone Reveal Peoples’ Tinder Profiles
Videos demoing one of the sites have repeatedly gone viral on TikTok and other platforms recently. 404 Media verified they can locate specific peoples' Tinder profiles using their photo, and found tha...
www.404media.co
October 17, 2025 at 2:23 PM