Jean Czerlinski Ortega
@jeanimal.bsky.social
Sometimes Google engineer modeling things and celebrating non-things: machine learning, incentives, behavior, ethics, physics.

Former member of Gigerenzer's Adaptive Behavior and Cognition group.
7/
If you're in a domain where the cheaters move faster than the rulebook, "Hindsight Accountability" can help. Read more:
👉https://medium.com/@jeanimal/hindsight-accountability-deterring-the-gaming-of-regulations-2ccdc800db09
#Cybersecurity #AIRegulation #Incentives #Governance #PolicyDesign
May 21, 2025 at 9:33 AM
6/
This isn’t just a technical fix.
It’s a philosophical shift in regulation:
We don’t need to anticipate every trick—we just need to track evidence well enough to figure out the tricks later. Gamers will be deterred.
May 21, 2025 at 9:33 AM
5/
🔐 Cybersecurity already uses these ideas.
Firms track malware reports, identify new patterns over time, and retroactively patch their defenses.
Some regulations now require these tracking systems.
It’s hindsight, made actionable.
May 21, 2025 at 9:33 AM
4/
🏦 In banking, clawbacks let firms reclaim bonuses for deals that later go bad.
Even if deal-makers slipped a bad deal through at the time, long-term performance still matters.
That changes how people play the game.
May 21, 2025 at 9:33 AM
3/
🏅 Anti-doping agencies now freeze athletes’ biological samples for 10 years.
When new drug tests emerge, they re-analyze.
And sometimes strip medals retroactively.
It’s not just punishment—it’s deterrence.
Cheaters know the past can catch up.
May 21, 2025 at 9:33 AM
2/
Regulators leverage hindsight accountability when they:

1. Store evidence
2. Reanalyze the evidence later, with better tools and context, to catch people who gamed the rules
3. Apply retroactive consequences

It’s no silver bullet. But it can deter people from gaming in the first place.
May 21, 2025 at 9:33 AM
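A toy expected-value sketch of the deterrence claim above (the numbers are made up for illustration and are not from the article):

```python
# Illustrative sketch only: made-up numbers, not from the article.
# Expected payoff of gaming a rule, with and without hindsight accountability.

gain = 100.0         # benefit from gaming the rule today
penalty = 300.0      # retroactive consequence (clawback, stripped medal, fine)
p_catch_now = 0.05   # chance today's tools catch the trick
p_catch_later = 0.60 # chance stored evidence + better tools catch it later

def expected_payoff(p_catch):
    """Expected value of gaming when the total detection probability is p_catch."""
    return (1 - p_catch) * gain - p_catch * penalty

print("Gaming payoff, foresight only :", expected_payoff(p_catch_now))
print("Gaming payoff, with hindsight :",
      expected_payoff(p_catch_now + (1 - p_catch_now) * p_catch_later))
```

With only same-day detection, gaming pays off in expectation; add a credible chance of being caught later and the expected payoff flips negative, which is the deterrence in step 3.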
Solving N equations in N unknowns is analogous to being at the interpolation threshold. Since there is exactly one solution, it has to fit any noise in the data. These are the shackles. Having fewer or more unknown parameters gives the freedom to avoid overfitting.

4/4
October 20, 2024 at 3:25 PM
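A minimal numpy sketch of the “N equations in N unknowns” point (my own toy example, not the thread’s code): a square system is forced through every noisy point, while fewer unknowns leave the noise unfit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                        # number of examples (rows)
X = rng.normal(size=(n, n))                   # N equations in N unknowns: square design
y = X[:, 0] + rng.normal(scale=0.5, size=n)   # true signal in column 0, plus noise

# Square system: exactly one solution, so it must absorb the noise.
beta_exact = np.linalg.solve(X, y)
print("train error, N unknowns:", np.mean((X @ beta_exact - y) ** 2))   # ~0

# Fewer unknowns: least squares has the freedom to leave the noise unfit.
k = 5
beta_under, *_ = np.linalg.lstsq(X[:, :k], y, rcond=None)
print("train error, k unknowns:", np.mean((X[:, :k] @ beta_under - y) ** 2))  # > 0

# Noiseless test set: the exact fit typically generalizes far worse,
# because its coefficients soaked up the training noise.
X_test = rng.normal(size=(1000, n))
y_test = X_test[:, 0]
print("test error, N unknowns:", np.mean((X_test @ beta_exact - y_test) ** 2))
print("test error, k unknowns:", np.mean((X_test[:, :k] @ beta_under - y_test) ** 2))
```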
The spike in error happens at the interpolation threshold when the number of parameters in the model (same as number of columns for my regression) equals the number of examples (rows). Double descent follows.

3/4
October 20, 2024 at 3:25 PM
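For intuition on why the spike lands exactly where columns equal rows (an addition of mine, under the standard idealization of isotropic Gaussian features, noise variance $\sigma^2$, $n$ examples, and $p < n$ parameters fit by ordinary least squares), the expected excess test error is

$$\mathbb{E}[\text{excess test error}] = \sigma^2 \, \frac{p}{n - p - 1},$$

which blows up as $p$ approaches $n$; past the threshold, the minimum-norm fit brings the error back down, giving the second descent.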
I create double descent with a few lines of sklearn code. I fit linear regression on data sampled with different “parameterization ratios” (# examples / # parameters), allowing me to control exactly where the interpolation threshold causes the error spike before the descent.

2/4
October 20, 2024 at 3:24 PM
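A minimal sklearn sketch of this kind of experiment (my reconstruction under assumed settings, not the author’s actual notebook): fix the number of training rows, sweep the number of columns, and test error should spike near columns == rows, then descend again.

```python
# Minimal double-descent sketch: fixed training rows, sweep of column counts.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_train, n_test, max_features = 30, 2000, 120
X_all = rng.normal(size=(n_train + n_test, max_features))
true_beta = rng.normal(size=max_features) / np.sqrt(max_features)
y_all = X_all @ true_beta + rng.normal(scale=0.5, size=n_train + n_test)
X_train, y_train = X_all[:n_train], y_all[:n_train]
X_test, y_test = X_all[n_train:], y_all[n_train:]

for p in [5, 15, 25, 29, 30, 31, 40, 60, 120]:
    model = LinearRegression(fit_intercept=False).fit(X_train[:, :p], y_train)
    mse = np.mean((model.predict(X_test[:, :p]) - y_test) ** 2)
    print(f"parameters={p:4d}  test MSE={mse:10.3f}")  # spike expected near p == 30
```

The second descent is possible because sklearn’s LinearRegression relies on a least-squares solver that returns the minimum-norm solution once there are more columns than rows.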