Jordan Costello
jordancostello.bsky.social
When we didn't give documentation, context, or intent to people: things were awful. When we gave those things to AI: production popped off. AI is amazing
another irony that attaches to this one: the systems you build atop “AI” are in many cases composed of piles of semi-structured text files, which is to say: process documentation and other knowledge artifacts — objects that carry value for humans but are often undervalued
November 15, 2025 at 11:37 PM
Reposted by Jordan Costello
friday soapbox: as a designer, if you want something to be a skillful action for players, and you want growth in this skill to be very deep/meaningful, then you need to also let players be bad at it. sometimes horrendously bad. players hate this pain, and designers often want to mitigate it away.
November 14, 2025 at 8:42 PM
I think sometimes that attitude is used to offload responsibility onto someone else, with the justification that they're "smart," so that makes it okay.

On the other hand, it's fine to be impressed with someone grasping something that we don't. But also-also everyone has something like that.
the amount of people who will say something like "they went to tech so you know they're smart"

YALL

math science engineering is a certain kind of skill that some ppl study but lolll plz do not defer to stem folks on all the things, just on stem things
November 10, 2025 at 11:32 PM
Reposted by Jordan Costello
Objectivity, impartiality and balance are all *different things*, and the lazy tendency to treat them as synonyms, and to use partisan balance alone as a proxy for the others, is the root cause of a vast amount of nonsense.
Robbie Gibb once suggested that reporters should reflect if they were getting more retweets from one side than the other - a braindead analysis that ignores that fair and impartial reporting of education might get more Tory retweets than say, criminal justice.
Stephen really does have the best take on this. It’s not clear that the BBC Board or indeed the rest of the News team really understood the message of the previous reviews, which were about getting detail right. Instead they wanted to know what was ‘biased’ or not like they were blotting out stains.
November 10, 2025 at 10:54 AM
Reposted by Jordan Costello
Grumpy Old Man Strikes Back
November 10, 2025 at 10:46 AM
Reposted by Jordan Costello
I have this take that I'm struggling to put into words, but I believe when possible, you should prefer maps to conditionals.

There was a talk by Sandi Metz years ago, I'll try and find it, but she says, "I'm condition averse, I want to pass messages to objects" and I think she's right.
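A minimal sketch of the maps-over-conditionals idea in Python (the names and the shipping example are illustrative, not from the talk): the branch-per-case function has to be edited every time a case is added, while the map version turns the cases into data and replaces branching with lookup.

```python
# Conditional version: every new region means editing this function.
def shipping_cost_if(region: str) -> float:
    if region == "us":
        return 5.0
    elif region == "eu":
        return 8.0
    elif region == "apac":
        return 12.0
    raise ValueError(f"unknown region: {region}")

# Map version: the cases are plain data; lookup replaces branching,
# and adding a region is an entry, not a new elif.
SHIPPING_COST = {
    "us": 5.0,
    "eu": 8.0,
    "apac": 12.0,
}

def shipping_cost_map(region: str) -> float:
    try:
        return SHIPPING_COST[region]
    except KeyError:
        raise ValueError(f"unknown region: {region}") from None
```

This is also close in spirit to the "pass messages to objects" framing: the map can hold callables or objects instead of values, so the decision of what to do lives with the entry, not with a pile of conditions.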
November 10, 2025 at 6:09 PM
10x liability is still 10x I guess 💪
10x engineer, but doesn’t write unit tests
November 10, 2025 at 5:38 PM
Reposted by Jordan Costello
Software companies already don't listen to QA, it's one of the reasons why everything is getting worse

Using AI to do the majority of testing will just make ignoring bugs even easier

www.videogameschronicle.com/news/square-...
Square Enix says it wants generative AI to be doing 70% of its QA and debugging by the end of 2027 | VGC
The publisher is researching “Game QA Automation Technology” with the University of Tokyo…
www.videogameschronicle.com
November 6, 2025 at 4:08 PM
Reposted by Jordan Costello
"oooo i'd draw that but i can't draw"

brother get this through your head

NOBODY can draw

we are literally

ALL

BULLSHITTING
November 4, 2025 at 8:39 PM
Reposted by Jordan Costello
What the heck is a trampoline, anyway?

The blog post is now live! Come one, come all - enjoy this deep dive that commemorates going down the compiler rabbit hole (twice! in the Paris airport!)

savannah.dev/posts/what-t...
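For anyone who can't click through: a common minimal illustration of a trampoline (a generic sketch, not taken from the linked post). Instead of making a recursive tail call, each step returns either a final value or a zero-argument thunk for the next step, and a driver loop "bounces" through the thunks so the call stack never grows.

```python
from typing import Callable, Union

# A step either finishes with a value or returns the next step to run.
Step = Union[int, Callable[[], "Step"]]

def trampoline(step: Step) -> int:
    # Bounce until we land on a plain value instead of another thunk.
    while callable(step):
        step = step()
    return step

def countdown(n: int, acc: int = 0) -> Step:
    # The "tail call" is expressed as a thunk, not an actual call,
    # so the stack stays one frame deep no matter how large n is.
    if n == 0:
        return acc
    return lambda: countdown(n - 1, acc + n)

# Sums 1..100_000 without hitting Python's recursion limit.
total = trampoline(countdown(100_000))
```

The same trick is what compilers for languages without guaranteed tail calls use to keep deep recursion from overflowing the stack.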
November 5, 2025 at 5:40 AM
Going to be thinking about this thread for a while
Something doll has been thinking about. Many people mistake beliefs for knowledge. It's a very human and common cognitive bias. People tend to trust the people around them to tell them both honest and accurate information, who "know" what they "know" because they trusted someone else etc.
October 30, 2025 at 7:33 PM
Reposted by Jordan Costello
You can get so much more done*
---

*things people will have to spend 10x as much time redoing as you spent making them
October 27, 2025 at 5:46 PM
Reposted by Jordan Costello
The Limits of LLM-Generated Unit Tests

Developers often ask LLMs like OpenAI Codex to write tests for their code - and they do. The tests compile, run, and even pass, giving a sense of confidence. But how much can we really trust those tests? If an LLM only sees the source code and a few comments, does it truly understand what the software is supposed to do?
#hackernews #llm #openai
hackernoon.com
October 25, 2025 at 6:43 PM
Reposted by Jordan Costello
Absolutely wild stuff!

Debugging and unit-testing WebGL2 compute shaders.

The depth of abstraction these things reach is PROPER.

github.com/oyin-bo/thre...
October 26, 2025 at 1:22 AM
Bliss; Conned-Descension

On everything
I know nothing
In reality
I know nothing
Of the cosmos
I know nothing
Of the world
I know nothing
Of my land
I know nothing
Of all people
I know nothing
Of my people
I know nothing
Of myself
I know nothing
Of my mind
I know nothing
And in my mind
I know it all
October 25, 2025 at 5:26 PM
Is an oracle obliged to fix the future if they lack the power to influence the gods?

The gods would say: yes
so the work project my boss built with AI collapsed pretty much exactly as I said it would two months back and somehow this is my fault
October 25, 2025 at 3:23 AM
In these cases I wonder if it evens out: bugs caught by good tests that wouldn't otherwise have been written vs. bugs missed by poorly reviewing wrong tests.

Or if any teams will ignore the test results and feel accomplished with tests existing (80% is still a good grade, right folks?)
I mean... there are lots of dev teams where no unit tests are the other option so I can see these being shoe-horned in to epic results down the road 🫢
October 24, 2025 at 3:17 AM
I'm wondering to what extent (AI) psychosis is a statistical linguistic outcome of a person (or LLM) talking to themselves for too long without accepting a context of external, challenging feedback
October 22, 2025 at 3:02 PM
Reposted by Jordan Costello
Top cause of adult hearing loss is having headphones on when a podcast's Shopify ad ends.
October 21, 2025 at 11:51 PM
Reposted by Jordan Costello
Been seeing Sora videos on socials with the watermarks removed. The automated removal tools aren't perfect and leave artifacts so it's worth remembering the Sora watermark pattern (For portrait videos): top-left, middle-right, bottom-left. Watch for alternating artifacts in those regions.
October 22, 2025 at 10:33 AM
Playing against a CPU? AI. An NPC looks at player? AI. Path finding? Any algorithm? An if statement? Computer vision? Deep learning? LLMs? Diffusion models? All AI. Every 10 years the goalposts move. In a way, that disturbs me.
"AI" as a term is already crap and the fact they are using it for completely different technologies adds more to the confusion

The gen ai plagiarism machine is clearly not the same technology as the one mentioned here, nor is it the same technology as the "ai" used by file or photo a[...]
See complete post at app.wafrn.net
app.wafrn.net
October 22, 2025 at 3:01 AM
In a home without doors, the needy cat is king 💀☕️
May 7, 2025 at 11:59 AM
Reposted by Jordan Costello
Early in my career, I learned that unit tests should be based on a spec, not the implementation details of the system under test. Now I see people using #AI to generate tests based on the existing implementation when a spec is absent. This new trend worries me.
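The distinction can be shown in a few lines (a hypothetical `normalize_email` and illustrative test names, assumed for the example): a spec-based test asserts what the requirements promise, while an implementation-derived test just restates what the code currently does.

```python
# Hypothetical function under test.
def normalize_email(raw: str) -> str:
    return raw.strip().lower()

# Spec-based: pins the behavior the requirements promise, with a
# concrete expected value chosen independently of the code.
def test_spec_case_and_whitespace_insensitive():
    assert normalize_email("  Ada@Example.COM ") == "ada@example.com"

# Implementation-derived (the worrying pattern): mirrors the code's
# own steps, so it passes even if those steps implement a bug --
# change the implementation and a generator would "fix" the test too.
def test_mirrors_implementation():
    raw = "  Ada@Example.COM "
    assert normalize_email(raw) == raw.strip().lower()
```

The second test has near-zero power to catch a wrong implementation, which is exactly the risk when tests are generated from the code in the absence of a spec.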
May 3, 2025 at 4:10 AM
Reposted by Jordan Costello
Being able to track engagement over time for individual posts is a game-changer for analytics on Bluesky
May 1, 2025 at 9:57 AM
Sounds like an AI-enhanced XY problem 🚀

AI-XY
LLMs are subtly bad at a wide variety of things, but one of the worst is detecting when a user is asking the wrong question.

Recently a junior engineer came to me with a problem he was having writing unit tests. He'd spent three days trying to get help from multiple different LLMs...
April 24, 2025 at 2:19 AM