jessica dai
@jessica.bsky.social
go bears!!!

jessicad.ai
kernelmag.io
congrats!!!
September 9, 2025 at 10:48 PM
so close! that's standard error ❤️
August 6, 2025 at 5:12 PM
i'm still pissed about this like the difference is literally too small to have been distinguishable with swe bench (500 samples) lmaoooo
August 6, 2025 at 3:54 AM
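A quick back-of-the-envelope for the post above, assuming the benchmark score is a pass rate over n = 500 independent tasks (normal approximation to the binomial; the pass rate is hypothetical, chosen only for illustration):

```python
import math

def pass_rate_se(p: float, n: int) -> float:
    """Standard error of a pass rate p estimated from n independent tasks
    (normal approximation to the binomial)."""
    return math.sqrt(p * (1 - p) / n)

n = 500   # roughly the size of SWE-bench Verified
p = 0.50  # hypothetical pass rate, for illustration only

se = pass_rate_se(p, n)
print(f"SE of a single score: {se:.3f}")   # ~0.022, i.e. ~2.2 points

# The SE of the *difference* between two independently evaluated models
# is larger still (assuming independent errors):
se_diff = math.sqrt(2) * se
print(f"SE of a difference:   {se_diff:.3f}")  # ~0.032, i.e. ~3.2 points
# So a gap of a point or two between two models is well within noise.
```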
I will be at ICML in a few weeks & would love to chat about how to make this real - I am a critic at heart and also hate self-promo so that’s how you know I really believe in this 🥲
July 1, 2025 at 11:39 PM
various ways to read more 😀

blog post- argmin.net/p/individual...
position paper- arxiv.org/abs/2506.18133
fairness-oriented instantiation- arxiv.org/abs/2502.08166

& many thanks to brilliant collaborators
@rajiinio.bsky.social @irenetrampoline.bsky.social @beenwrekt.bsky.social & paula gradu !!
July 1, 2025 at 11:39 PM
lots of other stuff I won’t get into rn (e.g., I think this is a prereq to any serious attempt at “democratic” AI!), and there’s also a ton of open research questions (stats, econ/ml, empirical methods, hci, …)
July 1, 2025 at 11:38 PM
the core concept is individual reporting as a means to build collective knowledge. if one person has a bad experience, that doesn’t necessarily mean that there’s something wrong with the system — but if lots of people start reporting similar things, maybe we should pay attention.
July 1, 2025 at 11:38 PM
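One minimal way to operationalize that idea (a sketch, not the paper's actual method; the user count, baseline rate, and report counts below are all made up for illustration): compare each issue's report count against a background rate with a one-sided binomial tail, and only flag issues that many people report independently.

```python
import math

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing at least k
    reports of an issue if each of n users independently files one with
    baseline probability p."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

n_users = 1000    # hypothetical user population
baseline = 0.01   # hypothetical background report rate per issue
reports = {"sycophancy": 45, "refusals": 12, "formatting": 9}  # made-up counts

for issue, k in reports.items():
    pval = binom_tail(k, n_users, baseline)
    flag = "FLAG" if pval < 0.001 else "ok"
    print(f"{issue:12s} k={k:3d} tail={pval:.2e} {flag}")

# One report tells you little; 45 similar reports against a 1% baseline
# (expected ~10) would be vanishingly unlikely by chance, so it merits
# attention, while 12 or 9 reports are consistent with background noise.
```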
we’ve already seen this informally with the chatgpt sycophancy debacle — a few days of twitter virality resulted in action and statements from openai — but what other, subtler, patterns are happening? what could we discover if we had better ways to listen to the public?
July 1, 2025 at 11:38 PM
right but one would hope that the date of doom _does_ get further away as safety research improves

bsky.app/profile/jess...
May 8, 2025 at 9:25 PM
well probably, but i wanna know how folks who do believe in that happening think about the field
April 19, 2025 at 2:09 AM
or is it a secret third thing idk. scared to ask this on Real Twitter but genuinely curious how people think about the role of this field
April 18, 2025 at 5:45 AM
like is it that the field has been ineffective (studied the wrong problems, advocated for the wrong positions, etc) or is it that every step of safety progress has been matched by 2 steps of capabilities progress (in which case, what are the best examples of safety work concretely reducing harm?)
April 18, 2025 at 5:44 AM
back on bluesky to be mean about ai discourse
February 10, 2025 at 6:04 PM
... didn't we just talk about this ...
December 16, 2024 at 11:47 PM
i'll read it
December 14, 2024 at 6:23 PM