Joakim Wernberg
wernberg.bsky.social
PhD, interested in technology and economics. Assistant Professor at Lund University, Socioeconomic Technology Studies (SoeTech), and Research Director at the Swedish Entrepreneurship Forum.
Sweden risks getting stuck in a job-destruction trap, not because AI takes all the jobs but because existing institutions and policies hinder the emergence of new ones. @drbergh.bsky.social and I write about this on DN Debatt:
DN Debatt. ”AI kommer att ta våra jobb – så här skapar vi nya”
DN Debatt. The new technology will cause some jobs to disappear, but it can also create new ones, two social scientists write.
www.dn.se
May 5, 2025 at 6:41 AM
Reposted by Joakim Wernberg
Recorded another podcast, this time about Northvolt, Theranos, and Uniti. We will never be able to prevent bubbles from inflating and then bursting, but here are some thoughts on how the problem can be mitigated. shows.acast.com/berghwernber...
147: Önsketänkande och oönskat entreprenörskap | Bergh & Wernberg
shows.acast.com
March 24, 2025 at 5:18 PM
Reposted by Joakim Wernberg
Check it out for cool plots like this one about affinities between words in sentences, and how they can show that Green Day isn't like green paint or green tea. And congrats to @coryshain.bsky.social and the CLiMB lab! climblab.org
March 11, 2025 at 8:04 PM
Is there an overarching ideology in Silicon Valley, and if so, is it more libertarian or technocratic? And what does that mean for debates about technology's impact on society? I have been pondering these questions for a while, and now @drbergh.bsky.social and I have thought out loud about them.
146: Ideologiska vindar i Silicon Valley
Bergh & Wernberg · Episode
open.spotify.com
March 11, 2025 at 7:12 AM
Is Sweden about to fall hopelessly behind in the AI transition? And is industrial policy more acceptable when it concerns AI? In the latest Bergh & Wernberg (with @drbergh.bsky.social) we discuss the AI Commission's final report, and I explain why I think the answer to both questions is no.
142: AI-kommissionen | Bergh & Wernberg
shows.acast.com
January 14, 2025 at 12:08 PM
Reposted by Joakim Wernberg
If you are an academic, it can be instructive to work on a paper with AI. Pretend you are working with a grad student & see what happens.

Generally o1 is best for well-defined heavy intellectual tasks, Gemini for synthesizing lots of text, and Claude for writing & theorizing. This varies by field.
January 12, 2025 at 7:18 PM
Reposted by Joakim Wernberg
How does it go when the competent set out to show that the cool ones are wrong? In the new #berghwernberg we talk about Henrik Jönsson, Ny demokrati, and American presidents, among other things. Listen here: shows.acast.com/berghwernber...
141: Coolhet, duktighet, fakta och känsla i debatten | Bergh & Wernberg
shows.acast.com
December 26, 2024 at 10:05 AM
A holiday listening tip!
In the latest episode of Bergh och Wernberg, @drbergh.bsky.social and I talk about why cool debaters with emotion-based arguments that are partly right keep splitting the public debate:
141: Coolhet, duktighet, fakta och känsla i debatten | Bergh & Wernberg
shows.acast.com
December 26, 2024 at 9:44 AM
While OpenAI’s o3 ARC-AGI test scores are certainly impressive, I strongly recommend reading @fchollet.bsky.social ’s thread on X (the corresponding posts here are not yet as detailed) about how this relates to AGI, bottlenecks, and future expectations for AI:
François Chollet on X: "Today OpenAI announced o3, its next-gen reasoning model. We've worked with OpenAI to test it on ARC-AGI, and we believe it represents a significant breakthrough in getting AI to adapt to novel tasks. It scores 75.7% on the semi-private eval in low-compute mode (for $20 per task https://t.co/ESQ9CNVCEA" / X
x.com
December 20, 2024 at 9:34 PM
From what I’ve gathered so far, o3 is not just brute force (although the compute costs suggest a lot of it). It does not appear to be just returns to scale either, which speaks to the original intention behind the ARC-AGI challenge: to incentivize a wider variety of approaches to AI development.
If you're following the OpenAI o3 announcements and are curious about the "ARC-AGI" benchmark and why I think solving these tasks by brute-force compute defeats the original purpose, here are some past posts about this from my Substack: (1/3)
December 20, 2024 at 9:27 PM
Reposted by Joakim Wernberg
I suspect that linking back to Twitter is not done here, but this is a fascinating and pretty illuminating look at a case of LLMs ending up doing weird things.

Specifically, why does an LLM constrained to only be able to use words from the Bible keep saying “ouches”? x.com/voooooogel/s...
December 8, 2024 at 1:20 AM
Talking with Mathias Sundin from the AI Commission about their final report, the proposals they make, and the relationship between grand plans and market forces during technological shifts, all moderated by Andreas Ericson in SvD's editorial podcast: www.svd.se/a/VzoPWr/sa-...
Så ska Sverige bli bättre på AI | SvD Ledare
EDITORIAL. PODCAST | December 2. Do we need political steering to catch up with the countries that are world-leading today?
www.svd.se
December 3, 2024 at 10:21 AM
Reposted by Joakim Wernberg
We're squeezing in one final seminar this term. Max Greenberg from UMass will be discussing the rise of 'hard to contract for' jobs and its implications for inequality on 5 December (online and in person). #Inequality #LaborMarkets
www.inet.ox.ac.uk/events/the-r...
The rise of 'hard to contract for' jobs and its implications for…
economy has been reorienting away from jobs with routine workflows, which are easy to write complete contracts for, and towards jobs where the nature of…
www.inet.ox.ac.uk
November 29, 2024 at 1:24 PM
Reposted by Joakim Wernberg
Now @wernberg.bsky.social and I have been to a conference with the Philosophy, Politics & Economics Association, and thought a bit about where economics research should go from here. It became a podcast episode: shows.acast.com/berghwernber...
140: Vart borde nationalekonomisk forskning ta vägen nu? | Bergh & Wernberg
shows.acast.com
November 29, 2024 at 8:25 AM
Reposted by Joakim Wernberg
The thing that is hard to get about LLMs is that we expected AI to be awesome at math & be all cool logic.

Instead, AI is best at human-like tasks (eg writing) & is all hot, weird simulated emotion. For example, if you make GPT-3.5 “anxious,” it changes its behavior! arxiv.org/abs/2304.11111
November 27, 2024 at 2:51 AM
Reposted by Joakim Wernberg
Fascinating: In 2-hour sprints, AI agents outperform human experts at ML engineering tasks like optimizing GPU kernels. But humans pull ahead over longer periods, scoring 2x better at 32 hours. AI is faster but struggles with creative, long-term problem solving (for now?). metr.org/blog/2024-11...
November 23, 2024 at 8:20 PM
Reposted by Joakim Wernberg
New paper from Martha Lewis and me:

"Evaluating the Robustness of Analogical Reasoning in Large Language Models"

Preprint:
arxiv.org/pdf/2411.14215

This is a much-extended follow-up on our earlier pre-print on "counterfactual tasks" in letter-string analogies.

🧵
arxiv.org
November 22, 2024 at 2:32 PM
Reposted by Joakim Wernberg
For Science Magazine, I wrote about "The Metaphors of Artificial Intelligence".

The way you conceptualize AI systems affects how you interact with them, do science on them, and create policy and apply laws to them.

Hope you will check it out!

www.science.org/doi/full/10....
The metaphors of artificial intelligence
A few months after ChatGPT was released, the neural network pioneer Terrence Sejnowski wrote about coming to grips with the shock of what large language models (LLMs) could do: “Something is beginning...
www.science.org
November 14, 2024 at 10:56 PM