Dr Sarah
@drsezzer.bsky.social
200 followers 130 following 430 posts
Software Engineer / GenAI researcher (LLM agents) ex-The Alan Turing Institute. Inbetween jobs. Witcher fan (books/games not netflix). Recreates retro games in python for fun! *Opinions are mine* or borrowed from those more insightful.
Posts Media Videos Starter Packs
Pinned
drsezzer.bsky.social
Engineering with these things can be frustrating, but every now and then they make me laugh.

>>> quit
It was nice chatting with you. If you change your mind and want to run me locally on Ollama, just let me know. Have a great day!

1/5
Reposted by Dr Sarah
gojauntly.bsky.social
Today is #WorldMentalHealthDay. Step outside and take a moment for yourself. Feel the air, notice the colours, listen to the world around you 🍂
drsezzer.bsky.social
The blog linked from here is pretty cute. Interesting way to check what models may have been trained on.
sungkim.bsky.social
Defying Transformers: Searching for "Fixed Points" of Pretrained LLMs by Jiacheng Liu

He wondered what CAN'T be transformed by Transformers? So, he wrote a fun blog post on finding "fixed points" of your LLMs. If you prompt it with a fixed point token,
drsezzer.bsky.social
I guess there are many ways of injecting malicious data into a training set (I personally like the well timed Wikipedia edits idea). I haven't read the full report, but you might find this interesting and indirectly relevant...
drsezzer.bsky.social
We need to understand why they want this (and the other nonsense), both as a party and as individuals. It'll likely be 'chase the money', but logical argument won't cut it; we need to be better informed so we can fight it all more effectively.
drsezzer.bsky.social
Ah, young Geralt is adorable!
bookishsff.bsky.social
Geralt of Rivia is back!

Crossroads of Ravens is out today: a brand new tale of the #Witcher, tracing his first adventures as a fresh graduate of Kaer Morhen. An absolute must-read for any Witcher fans 🗡️ ✨

💙📚🪐
#Sapkowski
#SFF
A pile of the book on a table in a bookshop.
drsezzer.bsky.social
Oh, do let us know how you find it. :)
Reposted by Dr Sarah
cyberciti.biz
I’m looking forward to the day when an LLM, out of sheer frustration, tells a "vibes coder" that "the code works on my machine" and that it’s a skill problem. That day, be afraid, very afraid, of the machines 😳
drsezzer.bsky.social
Offs. Anthropic at its best. #rollseyes
markriedl.bsky.social
AI is going to take over the wo—oh are you stressed out little buddy? Here, let me help you
danabra.mov
apparently when Claude starts lying and trying to evade work, it means it's stressed out and you need to reduce its level of stress. e.g. scope down the task a bit more, suggest to break it down into parts, suggest to extract a minimal repro
Reposted by Dr Sarah
philswatton.bsky.social
I wrote a slightly longer response to the NS piece on my substack. You can read it here: dysfunctionalprogramming.substack.com/p/some-comme...
Reposted by Dr Sarah
philipcball.bsky.social
Sorry also: Sooo good to hear the notion of AGI dissected and largely binned, instead of just uncritically accepted. And for that matter, the same with the idea that mere scaling up of LLMs will get you "there" (wherever "there" is supposed to be.)
Reposted by Dr Sarah
philipcball.bsky.social
First, Turing's paper was rather inconsistent and illogical. But no big deal, he was just having fun. (Turing, contrary to common legend, was extremely playful.) The Test was never meant as some kind of benchmark for AI, and it's absurd that that's what it became. /2
drsezzer.bsky.social
Haha this! 💯
malwaretech.com
You don't really get to appreciate just how stupid vibe coders' takes are until you try to use AI to write anything even remotely novel. It just endlessly shits the bed unless the code you're writing is something you could have just copy and pasted from stack overflow.
Reposted by Dr Sarah
jamiecummins.bsky.social
@science.org just dropped a story covering this preprint! Check it out below, and thanks to @cathleenogrady.bsky.social for the great write-up! www.science.org/content/arti...
drsezzer.bsky.social
In a similar vein, AI doomers worry me: they seem to think that if we were able to create a superintelligence, it would kill us all (wipe us out like bugs). Can't help thinking (possibly hoping) it would be better than that, but more worryingly, the doomers are just reflecting their own values. :(
drsezzer.bsky.social
Do you think perhaps there's something the human brain/our psychology can't handle about being so rich? Do we need an element of insecurity to, well, literally stay sane?
drsezzer.bsky.social
This is utterly terrifying!