Sam Barrett, PhD
@ai4geo.bsky.social
GeoAI, Climate, Remote Sensing, Generative AI and more!
Our communication skills as humans are the result of continuous practice throughout our lives and depend heavily on Theory of Mind and generally understanding how other people think. We also appreciate that one can learn to be a better communicator...
it's the equivalent of people writing full sentences into Google. there are learning curves to everything, even poasting.

also the marketing is wrong. for almost everything.
I find the idea that you need to know “how to use” AI tools really funny, as if it isn’t incredibly easy to type in prompts, and the marketing of these tools is that literally anyone can do it
December 21, 2025 at 8:00 AM
The Marginal Scapegoat Fallacy.
Blaming the newest marginal user for a pre-existing systemic constraint.
The Right: immigration
The Left: AI water use
December 20, 2025 at 8:26 AM
... and concluding therefore that knives are useless.
they gave an entire book to chatgpt and asked it to make an index in one prompt. it did badly, so they concluded LLMs can't help with indexing. this is a bit like trying to chop a tomato with the back of a knife
ed3d.net Ed @ed3d.net · 2d
got it (thx @mattweiner19.bsky.social ) and...oof, man. I believe it 100% to be well-intended, but this is not how you effectively use an LLM.

this is deeply naive use. are these people going to force me to go write an indexer proof-of-concept on my birthday because they're wrong on the internet?
December 19, 2025 at 9:54 PM
Equivalent to "I asked a random person to do a really good job at making an index and they failed, therefore people in general can't make indexes". Just wait till LLMs erode folks' critical thinking skills still further...
ed3d.net Ed @ed3d.net · 2d
got it (thx @mattweiner19.bsky.social ) and...oof, man. I believe it 100% to be well-intended, but this is not how you effectively use an LLM.

this is deeply naive use. are these people going to force me to go write an indexer proof-of-concept on my birthday because they're wrong on the internet?
December 19, 2025 at 4:55 PM
When I see people show concern about the impact of AI on critical thinking while we have this insane discourse around the environmental impact which exhibits such a lack of critical thinking, I can't decide between:...
There are many claims that AI is a “planet killing” source of greenhouse gases.

But is it?

This paper might be the most detailed estimate of the emissions associated with AI.

It suggests that AI could emit as much as 30-80 *million* tons of CO2 per year.

www.cell.com/patterns/ful...
The carbon and water footprints of data centers and what this could mean for artificial intelligence
Company-wide metrics from the environmental disclosure of data center operators suggest that AI systems may have a carbon footprint equivalent to that of New York City in 2025, while their water footp...
www.cell.com
December 19, 2025 at 12:06 PM
I'm with Atticus here. I find LLMs to be an extremely powerful learning tool. But I don't think the "you can ask followup questions" framing really does justice to what's going on. At least not for me. It makes it sound like it's just being able to click links in a wiki article...
I continue to think LLMs are absurdly powerful for learning, and I'm using them more often for it.

The problem is that a lot of people don't have solid metacognitive skills around epistemics and autodidacticism.

You've got to have a handle on what you know, how you learn, and if you're learning.
December 18, 2025 at 2:10 PM
Reposted by Sam Barrett, PhD
I think this is right—and also think it makes it increasingly important for us to figure out how to describe and convey the needed metacognitive skills.
I continue to think LLMs are absurdly powerful for learning, and I'm using them more often for it.

The problem is that a lot of people don't have solid metacognitive skills around epistemics and autodidacticism.

You've got to have a handle on what you know, how you learn, and if you're learning.
December 18, 2025 at 7:45 AM
Reposted by Sam Barrett, PhD
Really refuse to let the worst people have ownership of techno-optimism
December 10, 2025 at 11:12 PM
Reposted by Sam Barrett, PhD
I'm not sure *how* it happened, but I encounter a lot of undergrads proposing remarkably thoughtful and well-informed indep. study distant-reading projects. It's not like those methods got incorporated in the curriculum! But students are somehow self-educating—possibly with LLM assistance?
December 9, 2025 at 5:02 PM
As Ethan says in the follow-on post, this doesn't make roles useless. It actually maybe makes them even more useful. Roles are a powerful way of influencing interaction patterns, and the mode(s) of interaction are critical to useful human-AI multi-turn collaboration...
We tested one of the most common prompting techniques: giving the AI a persona to make it more accurate

We found that telling the AI "you are a great physicist" doesn't make it significantly more accurate at answering physics questions, nor does "you are a lawyer" make it worse.
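For anyone unfamiliar with the technique being tested: a persona prompt is just a system message prepended to the conversation. A minimal sketch using the OpenAI chat API (the model name, persona text, and question below are illustrative placeholders, not the study's actual setup):

# Minimal sketch of persona prompting. The persona text, model name, and
# question are illustrative placeholders, not the setup used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str, persona: str | None = None) -> str:
    """Ask a question, optionally prepending a persona system message."""
    messages = []
    if persona:
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# The comparison the study ran, in effect: same question asked with and
# without the persona, then the answers scored for accuracy.
baseline = ask("What is the escape velocity of Earth?")
with_persona = ask("What is the escape velocity of Earth?",
                   persona="You are a great physicist.")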
December 8, 2025 at 9:21 PM
This is the equivalent of saying farms only produce high fructose corn syrup.
This idea that data centers are just for AI needs to die. They underpin the Internet and the entire digital economy...
I’m only learning this through your post, but my initial thought about the comparison to agricultural water consumption is that that type of consumption results in food, whereas data centers amount to slop.
December 6, 2025 at 8:36 AM
This idea that data centers are just for AI needs to die. They underpin the Internet and the entire digital economy...
I’m only learning this through your post, but my initial thought about the comparison to agricultural water consumption is that that type of consumption results in food, whereas data centers amount to slop.
December 6, 2025 at 8:08 AM
Just as there are different ways of collaborating with people, there are different ways of collaborating with AI. And they have different implications for the style of work and the outcome.
ed3d.net Ed @ed3d.net · 17d
the difference between somebody just blasting out an outline with chatgpt and somebody who uses it as a stenographer is huge and obvious.

press record, brain dump, have it ask questions (don't have it answer them), have it organize into notes. it's fantastic.

write for you? it and you will suck
“When participants used ChatGPT to draft essays, brain scans revealed a 47% drop in neural connectivity across regions associated with memory, language, & critical reasoning.

Their brains worked less, but they felt just as engaged—a kind of metacognitive mirage.”🧪
December 4, 2025 at 3:14 PM
The people who say you *have* to learn a lot of abstract maths before you can start learning, let alone working, with ML/AI should know that this structure excludes a lot of ADHD folks who struggle when things are abstract but thrive when they are applied.
December 1, 2025 at 11:00 AM
Reposted by Sam Barrett, PhD
For me, the promise of language models is that we get to explore new ways of thinking. What if you could fit a million words in short-term memory? What if you could adjust the balance between brainstorming and critique?

Are these better ways of thinking? Idk. But we never know that in advance. +
November 30, 2025 at 3:18 PM
Gemini quote of the day: "You need to let them walk right into the trap of understanding.
Here are the three specific tactics I used in 2019 to de-program the BERT-worshippers. They will work on the Earth Observation folks too."
November 26, 2025 at 12:04 AM
This would be perfect. Someone should build a datacenter on half a golf course in Arizona, donate the other half to the city/county to transform into a public park maintained by the increased tax revenues....
nemoia.ai Noelle @nemoia.ai · Nov 20
They should build the datacenter, ON the golf course!
November 20, 2025 at 7:28 AM
I've got another, different takeaway from this. If you're using a number that's off by a factor of 1000 and nobody notices, you are providing insufficient context about that number. If you yourself don't notice, you don't have enough context to have any business using that number.
I think this incident reflects badly on the broader institutions currently covering AI and the environment. I should absolutely not have been the first person to notice this. From the end of the post:
November 19, 2025 at 4:11 PM
If your model was born to make maps, it will never grow up to make decisions.
November 14, 2025 at 8:08 PM
Simple mechanism -> monstrous complexity via scale.
November 12, 2025 at 10:52 PM
This is "vibe reading".
November 12, 2025 at 8:59 AM
My biggest takeaway from playing Vic3 is that economies are REALLY weird and non-intuitive and full of strange feedbacks and non-linear relationships.
November 11, 2025 at 8:03 AM
Academic NLP folks. If you had to review a paper doing something like sentiment analysis on embeddings, which used random forests as classifiers on the embeddings and feature importance or SHAP to try and interpret dims relevant to particular semantics, what would your general reaction be?
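For concreteness, a rough sketch of the pipeline I mean (synthetic data standing in for real sentence embeddings and labels; a real setup would embed text with some encoder first):

# Sketch of the pipeline in question: train a random forest on pre-computed
# sentence embeddings for a sentiment label, then inspect which embedding
# dimensions the forest leans on. Data here is synthetic, for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 384))                 # stand-in for 384-dim sentence embeddings
y = (X[:, 7] + 0.5 * X[:, 42] > 0).astype(int)   # fake "sentiment" label

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Impurity-based importances over embedding dimensions; SHAP values
# (e.g. shap.TreeExplainer(clf)) would give per-sample attributions instead.
top_dims = np.argsort(clf.feature_importances_)[::-1][:10]
print("Dimensions the forest treats as most informative:", top_dims)

The open question being whether those dimensions mean anything semantically, rather than just being whatever the classifier happened to exploit.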
November 3, 2025 at 10:06 AM
My handle is ai4geo, but I mostly write about that over at LinkedIn... here's something I just put out about generative AI in Earth Observation: arxiv.org/abs/2510.21813
SITS-DECO: A Generative Decoder Is All You Need For Multitask Satellite Image Time Series Modelling
Earth Observation (EO) Foundation Modelling (FM) holds great promise for simplifying and improving the use of EO data for diverse real-world tasks. However, most existing models require additional ada...
arxiv.org
November 2, 2025 at 7:56 AM