Petra Mifka
petramifka.bsky.social
Global Trust and Safety Expert | Quality & Wellbeing Advocate | Building a Safer Digital Future | Human-Centered Leadership
Pinned
X's AI chatbot, #Grok, has generated thousands of non-consensual "undressing" images per hour. Mostly of women. Some involving teens.

This is exactly what happens when:

• powerful gen tools ship fast
• safeguards lag behind
• abuse reporting becomes reactive instead of preventative
This is definitely true 😅

The younger you are the more you try to outrun or hide your weirdness. The great thing about age is you stop caring about these little things and realise you're living in it.
February 13, 2026 at 8:00 AM
A marketplace where people openly commission #deepfakes of real women... it's chilling how ordinary this abuse has become.

When platforms choose minimal intervention in the name of “creativity,” the cost is paid almost entirely by women, usually without recourse, visibility, or support.
Inside the marketplace powering bespoke AI deepfakes of real women
New research details how Civitai lets users buy and sell tools to fine-tune deepfakes the company says are banned.
www.technologyreview.com
February 12, 2026 at 8:38 AM
Kids don’t fall into harm through ideology. They fall in through belonging.
Gaming. Memes. Soft communities until the rules change. That slow escalation is what makes this so hard to catch.

Are there early signals that platforms are still missing?
The Com: Gamifying harm
Sky News investigates sadistic online groups where children trade harmful content for status
news.sky.com
February 11, 2026 at 9:02 AM
#Bluesky's 2025 #Transparency report shows misleading content and harassment make up the bulk of what users flag - not the extreme edge cases people often assume dominate Trust & Safety work. Violence, child safety, and self-harm are critical, but they’re a small fraction of overall volume.👇
February 10, 2026 at 9:22 AM
The argument that "moderation at scale is technically impossible" is getting... thin.

This piece lays it out clearly: the hardest part was never scale - it was policy interpretation. And now, even that bottleneck is shifting.

What guardrails should platforms deploy right now?
AI is Removing Bottlenecks to Effective Content Moderation at Scale
Zentropi's Dave Willner says LLM-driven technology can now accomplish content classification at the scale necessary for moderation on large platforms.
www.techpolicy.press
February 9, 2026 at 7:21 AM
Honestly feels like the only sustainable career strategy right now.

Especially in tech, in Trust & Safety, or if you plan to stay curious longer than your job title.

#reskilling #learning
February 6, 2026 at 8:03 AM
Leaving the house before kids vs after kids is… not the same activity.

I used to grab my keys and go. Now it’s keys, snacks, wipes, emotional negotiations, and one missing shoe.

Growth ✨
February 5, 2026 at 11:01 AM
📢Thousands of UK children will take part in a study which will explore the impact of restricting #socialmedia on #mentalhealth, sleep and time spent with friends and family.

Having spent years in Trust & Safety, I welcome this shift. We need more #data on children's #onlinesafety.
February 4, 2026 at 11:00 AM
⚠️Women are being secretly filmed using smart glasses, posted online, and then harassed, often with NO clear legal protection.

It’s exploitation at scale, powered by wearable tech + engagement incentives.

If platforms profit from this content, how proactive should their responsibility be?
Women filmed in secret for social media content - then harassed online
So-called manfluencers wearing smart glasses approach women and then post videos to TikTok and Instagram.
www.bbc.co.uk
February 3, 2026 at 9:03 AM
Some people separate work and life. Others blend it completely.

I don't think either is better... but pretending everyone works the same way is how teams burn out quietly!

👉 Are you a segmentor or an integrator, and does your workplace actually support that?
January 30, 2026 at 9:28 AM
🤓I'm going back to school! (kind of)

T&S without #AI doesn't really work at scale anymore. And AI without strong T&S thinking... doesn't work responsibly.

I’ve spent years working around these systems, so it felt like the right moment to better understand how they’re actually built. #IronHack
January 29, 2026 at 9:37 AM
Lately, I’ve been reflecting on a theme that’s followed me through most of my career: #ImposterSyndrome.

What I’ve learned over time is that imposter syndrome doesn’t mean you’re unqualified. Often, it just means you’re growing faster than your comfort zone.
January 28, 2026 at 8:52 AM
Independent disinformation research is being framed as censorship. That’s quite a dangerous misunderstanding.

IMO, providing advertisers with data about content risk isn’t speech control - it’s market transparency. Brands choosing where not to place ads is a free-market decision, not coercion.
January 27, 2026 at 7:00 AM
Could the UK end up banning X? It's legally possible, but only as a last resort.

This is what happens when risk assessment and safeguards fail persistently...

👉 Do you think platform bans should ever be on the table, or do fines and corrective actions go further?
Can X be banned under UK law and what are the other options?
UK media regulator is investigating whether X has breached the Online Safety Act – what could happen next?
www.theguardian.com
January 26, 2026 at 6:25 AM
UK #AI Security Institute: even top models can be misused and exploited.

“Trusted vendor” ≠ safe system.
Capability doesn’t equal control.

Deployment without deep testing is a risk, not innovation.

👉 Should governments slow-roll AI launches?
Inside the UK's AI Security Institute - Raconteur
The organisation has found that common AI models can be exploited by cybercriminals, raising fresh concerns for industry and government
www.raconteur.net
January 23, 2026 at 7:03 AM
Speaking from personal experience, I’ve seen (and made) this mistake: avoiding a hard conversation because you don’t want to be “that manager.”

Kindness isn't comfort! It's clarity, honesty, and setting expectations early.

🙌You can be empathetic and firm.
🙌You can care deeply and hold the line.
January 21, 2026 at 6:56 AM
Some creators won big #AI copyright cases in 2025, but that window may be closing fast.

As models shift to synthetic data, leverage disappears. No agreement soon could mean no compensation at all. This is uncomfortable, but it's real.

How do you think creators should respond right now?
‘Clock Is Ticking’ For Creators On AI Content Copyright Claims, Experts Warn
While content owners made progress with legal claims against AI companies in 2025, new training methods may reduce the need for human-made data.
www.forbes.com
January 20, 2026 at 8:00 AM
Cyber flashing is now a priority offence in the UK. That's huge.

Platforms are expected to prevent, not just react - risk assess, design safeguards in, and stop harm before it happens.

Tech can do this. Now it has to.

👉 What changes do you want to see first?

Via Sky News
January 19, 2026 at 10:02 AM
X's AI chatbot, #Grok, has generated thousands of non-consensual "undressing" images per hour. Mostly of women. Some involving teens.

This is exactly what happens when:

• powerful gen tools ship fast
• safeguards lag behind
• abuse reporting becomes reactive instead of preventative
January 17, 2026 at 7:57 AM
Experts say the fix to 'digital clutter' isn't turning everything off, but being selective.

Cull unused apps, silence non-essential notifications, and give your brain micro-breaks away from screens. Little wins add up. 🧠

📌 Do you have any digital clutter habits you’re finally ditching in 2026?
January 16, 2026 at 8:13 AM
I honestly feel this every day.

The thing with micromanaging is that it might make you feel in control, but it kills initiative. From personal experience: when I'm trusted to do my job, I'm more creative, I work faster, and that leads to real results.
January 15, 2026 at 3:22 PM
Most people stick with the first career they stumbled into even if it’s not a fit. But research shows: skills, opportunities, and the world change fast. What felt right at 22 may not at 38.

Good advice here: don't wait for #burnout to pivot. Track what actually energises you, not what's convenient.
https://www.fastcompany.com/91462109/how-tell-time-career-pivot
January 14, 2026 at 8:17 AM
Interesting but unsurprising research from Tremau's T&S Pulse Check.

✌️On the plus side, collaboration worked last year. Teams leaned on each other instead of just following playbooks.

The stress points for 2026 > #AI risks, reputational pressure, proving impact, tiny teams, tight budgets.
January 13, 2026 at 8:07 AM
This is what tech in the wrong hands looks like... extremists using #AI to clone voices and supercharge propaganda.

We can’t just say “tech is neutral.” Platforms, regulators, and experts need to get ahead before this gets normalised.

💬 How should #BigTech respond to AI tools being weaponised?
Extremists are using AI voice cloning to supercharge propaganda. Experts say it’s helping them grow
Researchers warn generative tools are helping militant groups from neo-Nazis to the Islamic State spread ideology
www.theguardian.com
January 12, 2026 at 9:35 AM
‼️Pay gets attention. Culture earns loyalty.
One is transactional. The other is why people stay when things get hard.
January 9, 2026 at 9:13 AM