Sam Barrett, PhD
@ai4geo.bsky.social
GeoAI, Climate, Remote Sensing, Generative AI and more!
This is "vibe reading".
November 12, 2025 at 8:59 AM
My biggest takeaway from playing Vic3 is that economies are REALLY weird and non-intuitive and full of strange feedbacks and non-linear relationships.
November 11, 2025 at 8:03 AM
Academic NLP folks. If you had to review a paper doing something like sentiment analysis on embeddings, which used random forests as classifiers on the embeddings and feature importance or SHAP to try to interpret the dims relevant to particular semantics, what would your general reaction be?
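For concreteness, here's a minimal sketch of the kind of pipeline I mean (the file names, variable names, and hyperparameters are placeholders, not from any actual paper):

```python
# Assumed setup: sentence embeddings as features, binary sentiment labels as targets.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X = np.load("embeddings.npy")  # (n_samples, n_dims) sentence embeddings
y = np.load("labels.npy")      # (n_samples,) binary sentiment labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

# Impurity-based importances: one score per embedding dimension
top_dims = np.argsort(clf.feature_importances_)[::-1][:10]
print("Top dims by RF feature importance:", top_dims)

# SHAP values per embedding dimension
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)
if isinstance(shap_values, list):        # older shap: list of per-class arrays
    shap_values = shap_values[1]
elif shap_values.ndim == 3:              # newer shap: (n_samples, n_dims, n_classes)
    shap_values = shap_values[:, :, 1]
shap.summary_plot(shap_values, X_test)   # which dims drive "positive" predictions
```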
November 3, 2025 at 10:06 AM
My handle is ai4geo, but I mostly write about that over at LinkedIn... but here's something I just put out about generative AI in Earth Observation: arxiv.org/abs/2510.21813
SITS-DECO: A Generative Decoder Is All You Need For Multitask Satellite Image Time Series Modelling
Earth Observation (EO) Foundation Modelling (FM) holds great promise for simplifying and improving the use of EO data for diverse real-world tasks. However, most existing models require additional ada...
arxiv.org
November 2, 2025 at 7:56 AM
Good distinction on personal interactions. Last night I took an incredibly intellectually productive stroll: I used chatgpt with voice interactions (transcribe/read aloud, not advanced voice mode) in 4 different threads to:
The public web getting choked with low-information LLM-generated blogs filled with the worst of the sycophantic, condescending, 5-paragraph-essay-style outputs is just a totally different beast from the personal interaction of asking an LLM to explain KAM theory with probing and clarifying questions.
November 2, 2025 at 7:52 AM
I just gave GPT-5 pro a manuscript with around 50 references lazily pasted in as urls in the text and asked it to generate the .bib file I'll need when I convert to latex. I'll check it in depth. How many mistakes do we expect?
October 13, 2025 at 4:51 PM
This statement is also pretty relevant in the world of Earth Observation re different sensors and other modalities. And that last sentence very well sums up my own explorations in EO modelling recently, though on a much smaller scale.
More broadly, I think confusion has been created by forming hard distinctions between different modalities, especially between text and sensory data. These distinctions can obscure commonalities. We take the rhetorical stance of erasing the distinctions, and seeing where this leads.

8/9
October 13, 2025 at 1:59 PM
To all the "we know how LLMs work and therefore X" folks - understanding attention and gradient descent doesn't tell you stuff like this. On at least some level we *don't* know how LLMs actually do their thing and are slowly figuring out even just extremely simple things like addition.
We did not tell LLMs, “implement addition using this algorithm.” It learned the algorithm upstream of next-token prediction
October 13, 2025 at 1:48 PM
FFS Pulse! 5 times now!
October 7, 2025 at 9:44 PM
Lovely that ChatGPT Pulse does me a Spanish lesson each day but I already knew "guagua" even before the 1st of the 4 times it's tried to teach me in the last 10 days.
October 6, 2025 at 8:32 AM
If I were a Starfleet recruiter, I'd consider "makes strong claims to knowledge of capabilities and properties of incredibly complex systems based on a mechanistic understanding of the basic components of said systems" a major red flag.
October 5, 2025 at 11:46 PM
Reposted by Sam Barrett, PhD
A common confusion I'm seeing is people mixing levels of analysis wrt neural nets: we understand the implementation level well and the algorithmic level somewhat but not the computational level of "how does it internally compute things."
October 5, 2025 at 3:55 PM
Seems like a good moment to remind people that arguments from analogy are useful for explaining the shape of an argument but not for proving something to be true. Analogies aren't the thing itself, and that is why arguments from analogies are a logical fallacy...
October 5, 2025 at 3:17 PM
ChatGPT Pulse regularly writes me long articles explaining why I should use 'viridis' as a colour map because a couple weeks back I pasted in some code with 'spectral'.
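For reference, the swap it keeps recommending amounts to something like this (toy data; note that matplotlib's registered name is 'Spectral', capitalised):

```python
# Toy comparison of the two colour maps on random data.
import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(20, 20)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.imshow(data, cmap="Spectral")  # diverging map, not perceptually uniform
ax1.set_title("Spectral")
ax2.imshow(data, cmap="viridis")   # perceptually uniform, colourblind-friendly
ax2.set_title("viridis")
plt.show()
```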
October 3, 2025 at 1:47 PM
This is a fascinating thread about sycophancy. Though I feel like what's described toward the end isn't really sycophancy. In our concern, maybe we've too quickly lumped all compliments into one category when some of them have a genuine and safe conversational function.
I've been thinking about LLM sycophancy (the tendency of ChatGPT/etc complimenting the user excessively), and wondering if it has more purpose than simple aggrandizement of the user. That is: is it serving other valid conversational purposes? First an example...
September 30, 2025 at 9:17 AM
Reposted by Sam Barrett, PhD
It is really important that environmentalists be numerate.

We are not going to turn off civilization, we are trying to sculpt it into a more benevolent shape, so it is essential that we accurately perceive that shape.
September 28, 2025 at 5:47 PM
I find LLMs (from experience, the GPT series) are great at riffing on and extending from what you're discussing, but there's this conceptual cliff. They rarely spontaneously say "there's this adjacent topic which is relevant, do you want to bring that in?". Great teachers DO do that.
September 28, 2025 at 4:04 PM
Me: "I wonder what interesting and thoughtful GenAI discussions I'm missing on the other site"...
*peeks*
*discovers the "bring back 4o" folks*
Me: "I guess not then"
September 28, 2025 at 8:20 AM
ACI is a useful frame because it invites us to lean into the alien "cognitive shape" of AI.
Many people find ways to complement and extend their cognitive abilities with AI (and many unfortunately don't extend but replace). In that context, what we have now and what would continue to be valuable in the future is ACI: Artificial Complementary Intelligence...
September 21, 2025 at 8:46 PM
The stochastic parrot is a conceptual prison.
September 21, 2025 at 8:42 PM
AGI and ASI are understandable but actually pretty weird goals for AI research. Both are defined in terms of human capabilities, and those capabilities are framed in terms of what's valuable in a human-centric society...
September 20, 2025 at 4:26 PM
Reposted by Sam Barrett, PhD
This is a good post on how to get better at using AI. It's the product of pretty intensive research on what consistently works although I don't surface that research directly in it.

Bluesky and AI, so obviously comments on this are off. mikecaulfield.substack.com/p/is-the-llm...
Is the LLM response wrong, or have you just failed to iterate it?
Many "errors" in search-assisted LLMs are not errors at all, but the result of an investigation aborted too soon. Here's how to up your LLM-based verification game by going to round two.
mikecaulfield.substack.com
September 7, 2025 at 6:26 AM
Generative AI isn't a tool I use. It's a place I go to think. A place which stretches and amplifies my thinking and lets me wander to places I would never reach alone.
September 7, 2025 at 11:36 PM
"LLMs aren't and can never be [100%] accurate therefore they are useless" - genuinely confused about how people who say this collaborate and interact with other humans...
September 5, 2025 at 1:25 PM
While this is true, I don't think it's the explanation here (see the quote in quote). I suspect you could get that behaviour from a non-instruction-tuned model, though it might be harder to elicit. It's because it has generalised the concept of predicting next tokens...
the real reason it's a poor model is that modern LLM chatbots are not actually modeling next-token prediction

the pretraining objective is "predict the next token", but the post-training objective is closer to "create a response that is correct, properly formatted, and in line with style+safety"
"It just predicts the next token based on data in the training set" is a poor abstraction of LLMs because it makes poor predictions

For example, we might predict that it's very difficult to have an LLM emit the phrase "the quick brown fox jumps over the lazy cvnpmnzq", but it's trivial
September 5, 2025 at 11:30 AM