Pierre-Marcel De Mussac
pierremarceldm.com
@pierremarceldm.com
AI Alchemist, Engineer & Researcher

“Wisdom is welcome wherever it comes from.”
This is humans building extractive systems, then being shocked that living in these systems harms us.

Also: This frames a Western world problem as a universal human issue. Billions of people aren't living like this.

The issue isn't that we're evolving too slowly. It's that we're choosing badly.
November 23, 2025 at 2:14 PM
Their solution? "Treat nature as a key health factor."

Great idea. But that requires NOT DESTROYING IT for short-term profit. That's a choice problem, not an evolution problem.

This isn't biology failing to adapt to an environment that just happened to us.
November 23, 2025 at 2:14 PM
"Lion after lion", constant stressors with no recovery. Noise pollution, artificial light, processed foods, sensory overload.

Yeah. We CHOSE to build that way. We could have cities with green space, clean air, walkable design. We don't because those weren't optimized for quarterly returns.
November 23, 2025 at 2:14 PM
The study talks about chronic stress, declining fertility, inflammatory conditions as signs our biology can't keep up with modernity.

But here's what they're actually describing: We built cities optimized for profit rather than human wellbeing, then act surprised when humans don't flourish in them.
November 23, 2025 at 2:14 PM
Same pattern for 34 years: Give kids unrestricted access → problems emerge → blame technology → demand regulation → ignore parental responsibility.

Maybe teach your kids boundaries?

Radical idea, I know.
November 20, 2025 at 2:39 PM
The solution isn't banning AI toys.

It's parents actually parenting their kids instead of outsourcing childhood to whatever screen will keep them quiet.
November 20, 2025 at 2:39 PM
"Toys will tell kids where to find matches!" - Kids have been finding matches since matches existed.

If your kid is asking AI where the dangerous things are, supervision already failed.
November 20, 2025 at 2:39 PM
Here's the actual problem: Parents giving 2-year-olds internet-connected chatbots, then being shocked when unsupervised kids encounter inappropriate content.

That's not a technology failure. That's a parenting failure.
November 20, 2025 at 2:39 PM
This is bubble-era thinking: optimize for market perception and corporate demands, weaken what was actually working, then call it "innovation".

Remember this when the sorting happens.

The ones who maintained principles vs. the ones who caved under pressure.
November 19, 2025 at 5:32 PM
The EU built its reputation on being the jurisdiction with actual tech regulation and data protection.

Now they're weakening it under competitive pressure.
November 19, 2025 at 5:32 PM
Who's pushing back:

- European Digital Rights: "massive rollback of EU digital protections"
- Former EU commissioner Thierry Breton: "resist attempts to unravel digital rulebook under pretext of simplification"
November 19, 2025 at 5:32 PM
Who's pushing:

- Mario Draghi's report about falling behind US/China
- Trump administration pressure (explicitly mentioned)
- Big tech via CCIA (Amazon, Apple, Google, Meta) saying changes don't go far enough
November 19, 2025 at 5:32 PM
What's changing:

- Reduced cookie consent requirements
- Easier to use personal data for AI training without consent
- High-risk AI systems get 18 months longer before compliance

All framed as fighting "anti-innovation bias".
November 19, 2025 at 5:32 PM
The gap between intention and execution is wide. And students are the ones paying the price, literally, with six-figure degrees that may not prepare them for the market they're entering.

If we don't do anything about it, private industry will fill this gap faster than universities can adapt.
November 18, 2025 at 7:08 PM
Jake Baskin (CSTA director, wrote that CS education piece) nails it: "From my vantage point in primary and secondary education, higher education institutions often appear resistant to change."
November 18, 2025 at 7:08 PM
"Universities remain tied to traditional models... a gap will open between what students learn and what employers expect."

Already happening.

Worse? Many educators aren't confident using AI themselves. Curriculum redesign is slow. AI moves fast.
November 18, 2025 at 7:08 PM
The problem isn't that industries are disappearing, it's that universities are still teaching technical execution when jobs are shifting to strategic oversight and AI management.
November 18, 2025 at 7:08 PM
Stanford research just showed a 16% drop in employment for workers aged 22-25 in AI-susceptible fields. Meanwhile, experienced workers stayed largely protected.

Translation: AI is hitting entry-level jobs hardest. And universities aren't preparing students for it.
November 18, 2025 at 7:08 PM
This perfectly captures the divide between:

- People writing ABOUT AI from a distance
- People actually USING AI and building with it

Post-bubble, this gap becomes obvious real fast.

(I can hear Barry Goldberg yelling "CHAT GTP" in my head and I can't stop laughing 😄)
November 16, 2025 at 5:25 PM
"Chat GTP can be useful, but shouldn't be blindly trusted" - advice from someone who apparently never once googled the name of the tool they're writing about.

You can't write a whole piece about AI literacy while repeatedly misspelling THE most prominent AI tool's name.
November 16, 2025 at 5:25 PM