Konrad Hinsen
@khinsen.net
760 followers 140 following 140 posts
Researcher at CNRS (France). Computational science, in particular computational biophysics. Metascience, in particular the evolution of science in the digital era. More active in the Fediverse: https://scholar.social/@khinsen
Reposted by Konrad Hinsen
schoenenberger.bsky.social
Totally agree – we need peer review for research software. At least the “artisanal” stuff – those small, medium-size scripts, notebooks, workflows that drive much science. Reviewing them would make results clearer, more reliable, and way more trustworthy.

#science #openscience #opensource
khinsen.net
New publication: "Reviewing research software"

Unlike experimental or theoretical methods, software is almost never peer reviewed. Maybe this should change. But is it possible at all?

doi.org/10.1109/MCSE...

Preprint: hal.science/hal-05274018

🧪 #openscience #metascience
Reviewing Research Software
Every research project in computational science requires writing some code, even if it’s only a few scripts. This code is instrumental in generating results, and often important for understanding in d...
doi.org
Reposted by Konrad Hinsen
robin.berjon.com
This is not an exaggeration.

Everything — *everything* — is downstream of energy. Our technological prowess is downstream of the massive power subsidies we have been getting from fossil fuels.
mtsw.bsky.social
You're living through one of the biggest technological transformations in world history and it has nothing to do with AI
janrosenow.bsky.social
Grid scale batteries are changing our electricity system. Excellent new visual story on batteries in FT today shows just how far this technology has evolved.

Fasten your seatbelts, this is just the beginning.

ig.ft.com/mega-batteri...
Reposted by Konrad Hinsen
thorstn.bsky.social
«The input does not cause the output in an authorial sense, much like input to a library search engine does not cause relevant articles and books to be written (Guest, 2025). The respective authors wrote those, not the search query!» via @olivia.science via @irisvanrooij.bsky.social - thank you
olivia.science
important on LLMs for academics:

1️⃣ LLMs are usefully seen as lossy content-addressable systems

2️⃣ we can't automatically detect plagiarism

3️⃣ LLMs automate plagiarism & paper mills

4️⃣ we must protect literature from pollution

5️⃣ LLM use is a CoI

6️⃣ prompts do not cause output in authorial sense
5 Ghostwriter in the Machine
A unique selling point of these systems is conversing and writing in a human-like way. This is eminently understandable, although wrong-headed, when one realises these are systems that essentially function as lossy content-addressable memory: when input is given, the output generated by the model is text that stochastically matches the input text. The reason the output text looks novel is that, by design, the AI product performs an automated version of what is known as mosaic or patchwork plagiarism (Baždarić, 2013): due to the nature of input masking and next-token prediction, the output essentially uses similar words in similar orders to what it has been exposed to. This makes the automated flagging of plagiarism unlikely, which is also true when students or colleagues perform this type of copy-paste-and-then-thesaurus trick, and true when so-called AI plagiarism detectors falsely claim to detect AI-produced text (Edwards, 2023a). This aspect of LLM-based AI products can be seen as an automation of plagiarism and especially of the research paper mill (Guest, 2025; Guest, Suarez, et al., 2025; van Rooij, 2022): the "churn[ing] out [of] fake or poor-quality journal papers" (Sanderson, 2024; Committee on Publication Ethics, …). […] Either way, even if the courts decide in favour of the companies, we should not allow these companies with vested interests to write our papers (Fisher et al., 2025), or to filter what we include in our papers. We do not operate on the basis of legal precedents alone, but also on our own ethical values and scientific integrity codes (ALLEA, 2023; KNAW et al., 2018), and we have a direct duty, as with previous crises and in general, to protect the literature from pollution. In other words, the same issues as in previous sections play out here: essentially, every paper produced using chatbot output must now declare a conflict of interest, since the output text can be biased in subtle or direct ways by the company that owns the bot (see Table 2).

Seen in the right light, with AI products understood as content-addressable systems, we see that framing the user, the academic in this case, as the creator of the bot's output is misplaced. The input does not cause the output in an authorial sense, much like input to a library search engine does not cause relevant articles and books to be written (Guest, 2025). The respective authors wrote those, not the search query!
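The search-engine analogy above can be made concrete with an ordinary full-text search (an illustration added here, not part of the quoted paper; the ~/papers directory is hypothetical): the query retrieves existing passages by their content, but it did not write any of them.

# Hypothetical setup: ~/papers contains plain-text copies of articles.
# The query retrieves matching passages by content; their authors wrote them, not the query.
grep -r -i "content-addressable memory" ~/papers | head -n 5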
khinsen.net
Yes, it's fuzzy. The only way to figure out how to review code appropriately is to start doing it!
khinsen.net
New publication: "Reviewing research software"

Unlike experimental or theoretical methods, software is almost never peer reviewed. Maybe this should change. But is it possible at all?

doi.org/10.1109/MCSE...

Preprint: hal.science/hal-05274018

🧪 #openscience #metascience
Reviewing Research Software
Every research project in computational science requires writing some code, even if it’s only a few scripts. This code is instrumental in generating results, and often important for understanding in d...
doi.org
Reposted by Konrad Hinsen
merriam-webster.com
We are thrilled to announce that our NEW Large Language Model will be released on 11.18.25.
Reposted by Konrad Hinsen
tomasp.net
I'm at #uist2025 presenting our new work with @jonathoda.bsky.social!

𝗗𝗲𝗻𝗶𝗰𝗲𝗸 is a computational substrate for end-user programming that makes it easy to implement programming experiences like programming by demonstration, collaborative editing and more!

tomasp.net/academic/pap...
khinsen.net
Found the solution via a hint on Mastodon. The culprit is ibus, installed as a dependency of Zoom.

To disable it without uninstalling:
sudo chmod 000 /usr/bin/ibus-daemon

Source: forums.linuxmint.com/viewtopic.ph...
forums.linuxmint.com
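For reference, a quick sketch of how to check the change and undo it later (assuming the standard Debian location of the binary):

# Verify that the daemon is no longer executable:
ls -l /usr/bin/ibus-daemon

# To restore ibus later (755 is the usual mode for binaries in /usr/bin):
sudo chmod 755 /usr/bin/ibus-daemon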
khinsen.net
Question to Linux experts: where does this weird pop-up come from that appears whenever I press a dead key on my keyboard? How can I disable it?

I have been seeing this since I updated from Debian 12 to Debian 13. It appears only in a few programs, such as xterm and Emacs, where it often covers text I need to see.
A screenshot of xterm with a popup resulting from typing ALT-`, which is configured as a deadkey on my system.
Reposted by Konrad Hinsen
robin.berjon.com
One thing that fascists understand well is that social and traditional media are both media and need to be controlled to undermine democracy.

Meanwhile, in camp democracy, no government seems to fathom this simple fact, let alone act on it.

We need more strategy, less meekness.
revolvingdoordc.bsky.social
Andreessen Horowitz, which will be one of three firms to lead the acquisition of TikTok, is headed by Marc Andreessen, a Silicon Valley tech titan who considered himself to be "an unpaid intern" of Elon Musk's DOGE. But he's not the only major Trump ally involved with this deal
wsj.com
TikTok’s U.S. business would be controlled by an investor consortium including Oracle, Silver Lake and Andreessen Horowitz under a framework the U.S. and China are finalizing.
Reposted by Konrad Hinsen
conradhackett.bsky.social
LINK ROT: 38% of webpages that existed in 2013 were no longer available 10 years later.

Even among pages that existed in 2021, 22% were no longer accessible just two years later, often because an individual page had been deleted or removed from an otherwise functional website.

Many implications for knowledge 🧪
A line chart showing that 38% of webpages from 2013 were no longer accessible one decade later.
khinsen.net
Then "it" was something else than what I want to fork.
Reposted by Konrad Hinsen
tomasp.net
Slides from my talk "Critical Architecture/Software Theory" at PPIG 2025 in Belgrade: tpetricek.github.io/Talks/2025/c...

The talk has been a great excuse to organize some more ideas, on top of my earlier article on the topic: tomasp.net/architecture/
Reposted by Konrad Hinsen
olivia.science
I'm still a bit shocked at the attention because I expected something, but this is immense. 11k views on zenodo!? What? Thank you all! (doi.org/10.5281/zeno...)

Wondering if my new followers are into my research? Shall I do a thread of my work for you all? Do you also like papers like the one below? 🥰
olivia.science
Oh, gosh! It's out! Delighted with the process at Computational Brain & Behavior; thankful to all especially @irisvanrooij.bsky.social for inviting me to the workshop and Todd Wareham for editing it! Hope you enjoy:

What Makes a Good Theory, and How Do We Make a Theory Good? doi.org/10.1007/s421...
Table 1 in Guest, O. What Makes a Good Theory, and How Do We Make a Theory Good?. Comput Brain Behav (2024). https://doi.org/10.1007/s42113-023-00193-2
Reposted by Konrad Hinsen
melaniemitchell.bsky.social
Very cool postdoc opportunity (at intersection of physics, philosophy, and complex systems) ⬇️
Reposted by Konrad Hinsen
gordon.bsky.social
Good scenario planning doesn’t predict. It…
- Models systemic forces
- Explores how they may collide to generate a wide range of possibilities
- Prepares multiple contingency plans

Adaptation beats prediction in VUCA environments.
manlius.bsky.social
Predictions feel safe. But eight months into 2025, some of The Economist's most confident forecasts are already wobbling.
Why? Because the world is not linear. It’s a tangled web of feedback loops, emergent patterns & path dependencies.

#Complexity isn’t optional.

manlius.substack.com/p/the-past-t...
The paths and loops we miss: complexity lessons from The World Ahead 2025
AI, trade wars and energy shifts aren’t separate stories
manlius.substack.com
Reposted by Konrad Hinsen
robin.berjon.com
"The central axis of this geopolitical struggle will not be the 20th century’s struggle between liberalism and authoritarianism, but a clash over the metabolic basis of modern industrial society."
foreignpolicy.com/2025/09/01/e...

I think this piece is worth your time but I have quibbles & notes. 🧵
The Coming Ecological Cold War
Decarbonization isn’t just about technology and markets—it’s a geopolitical revolution.
foreignpolicy.com
Reposted by Konrad Hinsen
penders.bsky.social
Our data stewards have started recommending that we no longer use US-based infrastructure for #openscience practices, given the risk of (near-future) censorship, from pre-print and data hosting to preregistration and more. That includes OSF.
Reposted by Konrad Hinsen
robin.berjon.com
At the last NGI Forum, when asked what I would like to have next to address the threats of tech authoritarianism, I answered that the first thing I wanted was a European Commissioner in charge of Tech Sovereignty.

Was this too harsh? 🧵
Reposted by Konrad Hinsen