Alexander Schmidt-Lebuhn
@anschmidtlebuhn.bsky.social
Botanist, taxonomist, phylogeneticist.
At least the Singularitarians, who include many of the execs, believe that science is just thinking. That's it. Field work, experimental greenhouses, laboratories, radio telescopes, microscopes, and particle colliders are mere decoration. Therefore, AI only needs to think faster and smarter, and science comes out.
November 25, 2025 at 9:38 PM
It is just another cult, isn't it?

Never ask if users want genAI in this service.

Never ask if the problem was already solved perfectly well without it.

Never ask if it serves a purpose.

Never even ask if it will generate a profit.

Just add AI to everything. We must. Why? Because!
November 25, 2025 at 9:35 PM
What they think the story is:

No social safety net makes us take risks, driving innovation. Also, we are geniuses.

What the story is:

No social safety net makes us desperate enough to commit fraud. Also, our idea was so obvious that it is now realised (badly) by numerous service providers.
November 25, 2025 at 8:52 PM
Second and fourth place!
November 25, 2025 at 6:30 AM
Or wait, is this about xyz versus xyz empire, as in Ireland was part of the British Empire but not part of England? But for what it is worth, Puerto Rico's statehood can't be the criterion, otherwise DC wouldn't be part of the USA.
November 24, 2025 at 11:29 PM
Easy to check: Are they independent?
November 24, 2025 at 11:13 PM
The Qing emperors were the emperors of China, just like the Mongol emperors before them. Again, this doesn't mean Taiwan should be part of China any more than Spain should become part of the Roman Empire again, but that was simply China at the time. Did Europeans generally call the country Manchu?
November 24, 2025 at 10:36 PM
Most people don't understand what incentive structures are and why they matter - part 289.
November 24, 2025 at 9:32 PM
IMO a more realistic solution is to have more desk rejections. I believe a major problem is that many editors are not doing their job. They should read the paper and only send it out for review if they already think it looks acceptable in principle. That way, reviewers are less overwhelmed by nonsense.
November 24, 2025 at 8:55 PM
Also, I am already paid to review; it is covered by my salary as a scientist!

If peer review were turned into a gig economy, my employer would collect the payment as a consultancy fee for *its* (not my!) services to the publisher, creating work for contract managers on both sides that far exceeds the value of the fee.
November 24, 2025 at 8:55 PM
Sorry, I am confused. Wasn't Taiwan part of the Chinese empire between 1683 and 1895? I am not at all saying that this has any implications for today, but that is what historically happened, right?
November 24, 2025 at 8:11 PM
That has also been my experience. A colleague and I tried an LLM-based system meant for research. In her case, she found that the references were hallucinated, so that problem is very much not solved either. In my case, they existed but mostly did not support the statements they were cited for.
November 23, 2025 at 11:13 PM
The last time I asked an LLM to write a little script for me, it took four iterations until it worked; all three earlier attempts ran without error, they just didn't do the correct conversion. "Runs" isn't enough. It is like being happy that the engine runs while driving the car into a wall.
November 23, 2025 at 11:10 PM
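To illustrate the point in the post above, here is a minimal sketch of the "runs but is wrong" failure mode. The unit conversion is invented for illustration and is not the actual script from the post; the check simply compares the output against a known correct value instead of treating error-free execution as success.

# Hypothetical example: a conversion script that "runs" but is wrong.
# (Invented for illustration; not the script referred to above.)

def c_to_f_buggy(celsius: float) -> float:
    # Runs without error, but applies the formula in the wrong order.
    return (celsius + 32) * 9 / 5

def c_to_f(celsius: float) -> float:
    # Correct Celsius-to-Fahrenheit conversion.
    return celsius * 9 / 5 + 32

if __name__ == "__main__":
    # "Runs" isn't enough: both functions execute, only one is correct.
    for convert in (c_to_f_buggy, c_to_f):
        result = convert(100.0)
        status = "OK" if abs(result - 212.0) < 1e-9 else "WRONG"
        print(f"{convert.__name__}(100.0) = {result} [{status}]")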
I would like to take a step back here and question the logic of having highlights, though. That isn't an LLM garbling the points; they are written by the authors. But still, what are highlights if not another, earlier attempt at reinventing the abstract we already had?
November 23, 2025 at 12:46 PM
Ah, that takes me back to when the Bitcoin cultists said that blockchains actually make electricity cheaper because the massive unnecessary waste would force electricity suppliers to innovate, or some such mental gymnastics.
November 23, 2025 at 7:50 AM
Even one of the extremely stupid AI ads I get on YouTube starts with two skeptical colleagues saying, "Another AI integration? Let's hope this one works."

(Spoiler alert)

The evidence that this product will totally work reliably is that a purple animated Llama starts singing a pop song.
November 23, 2025 at 7:47 AM
As others have written, if LLMs are so useful and popular, why does every software provider have to either force them on users or sneak them in where users won't notice?
November 22, 2025 at 11:27 PM