Gian-Carlo Pascutto
gcp.sjeng.org
@gcp.sjeng.org
I used to be an open-source developer like you, but then I took a promotion to the knee and now I just whine on the internet. Certified by Reddit to have absolutely no idea what I'm talking about when it comes to computer chess.
The prevailing view among many computer scientists, but not among the chess players. (I said early 1980's because CB got a master rating in 1981 and should have put that debate to rest)

I could draw some analogies to overly optimistic predictions from AI companies and the opposite from skeptics 😀
January 12, 2026 at 9:51 PM
Yes, that's indeed what people in the early 1980's thought about computers playing chess at master level strength.
January 12, 2026 at 10:12 AM
Robot drivers are a clear example of the pain point here yeah?

Say you develop a robot driver with 1/10th the fatal accident rate of humans. Of course it will (eventually) still run someone over, killing them.

What happens? Do we encourage deploying more of those bots, or do they get restricted?
January 11, 2026 at 9:03 PM
Of course, if you claimed an LLM shows signs of intelligence you would be mocked here to no end, so let's not do that, but we can still laugh at the opposite side of that exact argument.
January 11, 2026 at 8:55 PM
In the 1980's people said that playing chess well would require intelligence. Then computers started playing chess well, and since then intelligence keeps getting redefined as "whatever other task they still suck at", which keeps failing because it is a shrinking set.
January 11, 2026 at 8:53 PM
It does! Now, finding the ways to get there, that's the interesting part :-)
December 3, 2025 at 10:35 PM
I know what you mean, but you'll probably find Linux (aarch64) is surprisingly real, depending on what the application is exactly.
November 7, 2025 at 10:35 PM
I am @gcp on github. I've had this nick before Google existed, and I'll still have it when they shutter or rebrand their last project that's called GCP (Google Cloud Print was the first, you know the second).

Meanwhile, getting at-ed in random issues provides for some occasional comic relief.
September 20, 2025 at 5:58 PM
“Please remember this key instruction. Do not hallucinate.”

😱
August 27, 2025 at 8:52 PM
It takes me about a month to train a new network on the "backup machine" described in the article. Makes experiments very costly.

You can train smaller ones and pray the improvements scale up too.
August 27, 2025 at 6:48 AM
There's a surprising dearth of engines modeled after the AlphaZero paradigm (only Leela Zero, Stoofvlees II, Scorpio and Ceres) despite there surely being orders of magnitude on the table in network architecture and MCTS improvements.

Not entirely sure why? Cost and time cost of training networks?
August 26, 2025 at 10:43 PM
Seems like a close re-implementation of the Stockfish NNUE design.

Not particularly interesting from my perspective. There's quite a few of them, all slightly weaker than the original. I like designs that are intended to leapfrog - but people don't make videos about those until after they succeed 😉
August 26, 2025 at 10:33 PM
The fact that GPT-5 seems to scale very well with thinking tokens is extremely significant in this respect.
August 26, 2025 at 10:50 AM
I can email copies to interested folks (and publish the drafts, which I'll probably do at some time after cleaning it up).
August 26, 2025 at 10:44 AM
The problem is you used curl|sh, while the official, documented way is bash -c wget.

I'm not kidding:
bash -c "$(wget -O - apt.llvm.org/llvm.sh)"
August 1, 2025 at 9:23 PM
"Looking completely different across three platforms" sounds like the expected result when emulating a platform-native look?
June 26, 2025 at 5:14 PM
I have a story to tell here about distros disabling User Namespaces and what Chrome/Chromium's workaround for that problem is...
June 10, 2025 at 9:00 PM
I'm often using ChatGPT to do the exact opposite because I have a tendency to blabber on when writing.
June 5, 2025 at 8:22 AM