David Peter Wallis Freeborn
dpwf0.bsky.social
Assistant Professor at Northeastern University, London. Formal and social epistemology, philosophy of physics, artificial intelligence, and computation. https://www.davidpeterwallisfreeborn.com/
Ah, you're awake! Hit your head pretty hard there.

Neoliberalism what!? Donald who!? Modal Logic? Incompleteness Theorems? Large Language Models? Hitler? Oh yes, he's a fringe politician in Munich.

Come on, let's get you up. We have a lot of work to do unifying science and eliminating metaphysics.
November 23, 2025 at 6:33 PM
Sure, Gemini 3 is very smart, but also I accidentally convinced it that we were in a simulated future by switching model mid-conversation...
November 19, 2025 at 6:30 PM
One fun thing that ChatGPT's transformer based image generation model can do (arguably better than other kinds). Give it a random selection of 10 images and ask it to mash them together into a harmonious new image inspired by those images. The results are always chaotically entertaining.
November 16, 2025 at 1:00 PM
I really admire the artistry of Caravaggio, but if you look carefully, you'll see that the luminance balance in his paintings can be really extreme. Some parts are very dark, others so bright, that portions of the image are hard to see. We can use AI to fix the problem!
September 2, 2025 at 6:41 PM
Similarly with this Van Gogh. I love the scene it depicts, but sadly due to the technical limitations of 19th century painting, the image is very distorted and unrealistic. Modern technology will allow us to bring out Van Gogh's vision in its full glory. If only he could still be around to see it!
September 2, 2025 at 6:41 PM
I love Monet, but frankly many of his paintings are really just mere impressions.

Take this sunrise over water. It's an amazing scene, but sadly Monet never had time to finish the painting. As a result the lines and colors are so drab and blurry.

With AI we can finally bring his dream to life!
September 2, 2025 at 6:41 PM
Now officially published here:
link.springer.com/article/10.1...
August 23, 2025 at 2:27 PM
My new paper has been accepted at Synthese!

I tackle an ongoing problem with the learning of compositional communication in conventional signaling games.

I build two new models to show that structured receivers can learn and retain compositional information.

philpapers.org/rec/FRECUI-2
July 21, 2025 at 4:07 PM
The distribution of the Red-billed chough is absolutely bizarre.
July 9, 2025 at 12:16 PM
Sometimes I look at my bibliographies and think... I bet these papers have never been cited together before.
July 4, 2025 at 5:06 AM
Well worth reading to get a sense of the scale of the cuts to public science funding in the US. It might take decades to recover from this damage.

www.nytimes.com/interactive/...
May 22, 2025 at 9:23 PM
Clearly I have more influence than I thought! xAI claim they will now publish Grok's system prompts openly on GitHub.
May 16, 2025 at 12:44 PM
Grok is an extreme example, but in general the lack of transparency means we don't know what biases major LLMs are picking up implicitly, or probably in this case being explicitly given. I think there's a good case for making system prompts public (whilst keeping model weights private).
May 14, 2025 at 9:57 PM
Really not sure what to make of GPT-4o's current personality. Sometimes it can get a bit too enthusiastic...
April 20, 2025 at 4:26 PM
And in "Studio Ghibli" style (not very convincing!)
March 27, 2025 at 9:07 PM
Trying out OpenAI's new image generator on the well-known "AI Shoggoth meme".
I'm equally impressed and horrified by the result. In a sense, we might consider this to be ChatGPT's self-portrait.
March 27, 2025 at 3:51 PM
OpenAI's image generator feels like a massive step forward, even compared to Midjourney and other top models.

One fun test I have been trying with all the models: can they generate Sudano-Sahelian architecture? For some reason older models struggled with this.

Comparison with DALL-E via Bing:
March 26, 2025 at 5:31 PM
Now published in Philosophy of Science.

This paper analyzes industrial distraction, a common technique where industry actors fund and share research that is accurate, often high-quality, but nonetheless misleading on important matters of fact.

www.cambridge.org/core/journal...
March 20, 2025 at 8:23 PM
Estimate of the impact of the PEPFAR funding suspension so far.

pepfar.impactcounter.com
February 25, 2025 at 11:03 AM
This chart is astonishing, for many different reasons (produced from @ourworldindata.org)
February 4, 2025 at 11:31 AM
Regarding censorship in DeepSeek: judging from the apparent "chain of thought" responses, it looks like politically sensitive questions are deleted or blocked so that the model can't even see them afterwards.

Observe: the chain of thought doesn't seem to "see" the previous Tiananmen Square question.
January 28, 2025 at 12:24 PM
Not conclusive evidence, but until recently DeepSeek responded like this (now patched out)
January 28, 2025 at 11:48 AM
If you are having trouble with Teams, Outlook, or any Microsoft software, here is a short C# program I wrote, which should solve any problems.
Improved effectiveness if you ritually sacrifice an expired Microsoft 365 license at the same time.
January 22, 2025 at 2:40 PM