Chris Chapman
@cchapman.bsky.social
810 followers 410 following 450 posts
UX researcher, psychologist. Author "Quantitative User Experience Research" (w/Rodden), "R | Python for Marketing Research and Analytics" (w/Feit & Schwarz). Previously 24 yrs @ Google, Amazon, Microsoft. Personal account. Blog at https://quantuxblog.com
Reposted by Chris Chapman
hagenblix.bsky.social
Advertisers know that what your friends think is far more important to your purchasing decisions than ads are. No wonder they're trying to substitute bots for your friends
jathansadowski.com
What an unsurprising decision by corporations that desperately need to make money with AI. If every other company with a chatbot isn't already doing this, then expect them to be following Meta's lead soon. The chatbot is not your friend; it's a corporate listening device. www.ft.com/content/22f7...


Meta will use conversations people have with its chatbots to personalise advertising and content across its platforms, in a sign of how tech companies plan to make money from artificial intelligence.

The owner of Facebook, Instagram and WhatsApp on Wednesday said it would use the content of chats with its Meta AI to create advertising recommendations across its suite of apps.

“People will already expect that their Meta AI interactions are being used for these personalisation purposes,” said Christy Harris, privacy and data policy manager at Meta.
cchapman.bsky.social
Corollary: incorrectly generalizing that, because LLMs are (arguably) useful for a few things, they eventually will be useful for many, many other things.

That ability holds for people, but it is incorrect to assume it for LLMs. Success at one use case implies nothing about others.
Reposted by Chris Chapman
colincarlson.bsky.social
Epidemiologists of Bluesky: do you know of any papers that developed statistical models to infer Covid (or something else) incidence or mortality at scale based on survey data about contacts (i.e., “has someone you know died?”)
Reposted by Chris Chapman
juliaallum.bsky.social
Something seasonal to celebrate the beautiful colours around at the moment 🍂
Reposted by Chris Chapman
erikahall.bsky.social
I've realized that a whole lot of smart and well-informed people are overestimating the value and potential of "AI" tools because they underestimate how smart and well-informed they themselves are and how much expertise they bring to the interaction.

It's the curse of knowledge all the way down.
Reposted by Chris Chapman
spavel.bsky.social
What you're getting from an LLM is a statistical approximation of what the answer would be to a statistical approximation of the question.

This is why prompt engineering is fake. The prompt (and any input data) is just a suggestion - and helps fool you into thinking that your question was answered.
Reposted by Chris Chapman
alanau.bsky.social
Had a long discussion about gen AI today. Are there valid use cases for the tech? Yes. Is the current implementation problematic? Yes. Is it being oversold? Yes.

Do the benefits outweigh the harms? ... I suspect not, but I cannot know because the true costs are hidden, and this is a huge problem.
Reposted by Chris Chapman
angierasmussen.bsky.social
This interview lasted 2 hours, so the “you’re scaring me” part might seem like an overreaction or fearmongering to someone without that context.

There’s a lot of evidence to support my hypothesis that a potential H5N1 pandemic would be worse than COVID.
Reposted by Chris Chapman
meemalee.bsky.social
Author and filmmaker Justine Bateman on generative AI
"They're trying to convince people they can't do the things they've been doing easily for years - to write emails, to write a presentation. Your daughter wants you to make up a bedtime story about puppies - to write that for you." We will get to the point, she says with a grim laugh, "that you will essentially become just a skin bag of organs and bones, nothing else. You won't know anything and you will be told repeatedly that you can't do it, which is the opposite of what life has to offer. Capitulating all kinds of decisions like where to go on vacation, what to wear today, who to date, what to eat.
People are already doing this. You won't have to process grief, because you'll have uploaded photos and voice messages from your mother who just died, and then she can talk to you via AI video call every day. One of the ways it's going to destroy humans, long before there's a nuclear disaster, is going to be the emotional hollowing-out of people." - author and filmmaker Justine Bateman from a piece by Emine Saner for the Guardian
Reposted by Chris Chapman
maartenvsmeden.bsky.social
Kind reminder: data-driven variable selection (e.g. forward/stepwise/univariable screening) makes things *worse* for most analytical goals
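As a quick illustration of the claim above (an editor's sketch, not from the post): univariable screening applied to pure-noise predictors still "selects" variables and produces a misleadingly positive in-sample fit, because the screening step has already looked at the outcome.

```python
import numpy as np

# Sketch: univariable screening on pure noise still "finds" predictors
# and yields a misleadingly positive in-sample R^2.
rng = np.random.default_rng(1)
n, p = 100, 200
X = rng.normal(size=(n, p))   # predictors unrelated to the outcome
y = rng.normal(size=n)        # outcome is pure noise

# Screen: keep predictors whose marginal correlation with y clears
# a rough |r| > 2/sqrt(n) threshold (~ p < .05 rule of thumb).
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
keep = np.abs(r) > 2 / np.sqrt(n)

# Fit OLS on the screened predictors; in-sample R^2 looks "good"
# even though no predictor has any real relationship to y.
Z = np.c_[np.ones(n), X[:, keep]]
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
r2 = 1 - np.var(y - Z @ beta) / np.var(y)
print(f"selected {keep.sum()} of {p} noise predictors, R^2 = {r2:.2f}")
```

With 200 noise predictors and a nominal .05 screen, roughly 10 survive by chance alone, and the downstream model's apparent fit inherits that selection bias — the point the post is making.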
cchapman.bsky.social
They are making such bets with other people's money (and time, IP, attention, electricity, water, etc) ... so it is never "wrong" for them.

"Heads, I win; tails, you lose! (and I'll get promoted because of my so called hard-won experience)"
cchapman.bsky.social
Although I would include a somewhat different set of considerations (e.g. the roles of compassion and intentionality), this is the most clarifying and tech-fantasy-debunking paper I've read in this space.

Well worth reading for anyone interested in ethics towards AI/robots/etc.
abeba.bsky.social
Robot personhood/rights is conceptually bogus and legally puts more power/rights in the hands of those that develop and deploy robots/AI systems

firstmonday.org/ojs/index.ph...
Reposted by Chris Chapman
richarddmorey.bsky.social
Also - contrast b/w the response when I advocate teaching R instead of SPSS -- "No hurry, let's not rush into it" (still waiting) -- & others re: use of LLMs -- "It's inevitable, we're behind; need to implement it ASAP!" -- is telling. Learning to code is freeing. Overhyped LLMs create dependency.
Excerpt from Guest & van Rooij, 2025:

As Danielle Navarro (2015) says about shortcuts through using inappropriate technology, which chatbots are, we end up digging ourselves into "a very deep hole." She goes on to explain:

"The business model here is to suck you in during your student days, and then leave you dependent on their tools when you go out into the real world. [...] And you can avoid it: if you make use of packages like R that are open source and free, you never get trapped having to pay exorbitant licensing fees." (pp. 37–38)
Reposted by Chris Chapman
doctorwaffle.substack.com
In honor of National Poetry Day, the greatest parody rewrite of all time:
Screen cap of parodic version of William Blake's "The Tyger" that begins:
Tyger! Tyger! Burning bright
(Not sure if I spelled that right) 
What immortal hand or eye
Could fashion such a stripy guy? 
What the hammer that hath hewn it 
Into such a chonky unit?
Did who made the lamb make thee, 
Or an external franchisee?
Reposted by Chris Chapman
wblau.bsky.social
Spot the North-American anomaly: only region where social media use is still growing.
Great work by the FT’s @jburnmurdoch.ft.com
www.ft.com/content/a072... “Have we passed peak social media?”
Reposted by Chris Chapman
mrlockyer.bsky.social
Let me make your Sunday. Got a library card? Great.

Download the Libby app. Free.

Up to 10 audiobooks. FREE (cancel Audible).

Up to 10 e-books. FREE.
(Cancel Kindle Unlimited).

UNLIMITED high street magazines (I chose Empire, RW, Wired, Simple Things to start).

NEWSPAPERS!

This app is AMAZING.
cchapman.bsky.social
Small preview: "far from being an unstoppable force, [AI] is irrevocably shaped ... by the ownership class that steers its development and deployment.... The technology of AI is ultimately not that complex. It is insidious, however, in its capacity to steer results to its owners’ wants and ends."
cchapman.bsky.social
Sounds great! And I suggest the book "Why We Fear AI" by @hagenblix.bsky.social and I. Glimmer, if not already on the list.

In a nutshell, it discusses how the social & economic patterns of late capitalism (anti-labor, anti-knowledge, but pro-fear) show up in technology, i.e. AI.
cchapman.bsky.social
A much needed reflection on the crisis of rational thought today: www.theguardian.com/news/2025/oc...

As a side note on reason itself, the article's attention to Arendt on imagination aligns with the unique aspects of Peirce's abductive reasoning (as complementary to inductive and deductive forms).
A critique of pure stupidity: understanding Trump 2.0
If the first term of Donald Trump provoked anxiety over the fate of objective knowledge, the second has led to claims we live in a world-historical age of stupid, accelerated by big tech. But might th...
www.theguardian.com
Reposted by Chris Chapman
bharrap.bsky.social
Are you a student or early-career statistician or data scientist in the Sydney area? Come hear from a diverse panel on their experiences and career trajectories! Food and drinks to follow

📅 9th Oct, 6-8pm Sydney time
📍 USyd

Registration required:
statsoc.org.au/event-6332903

#statssky #databs
Statistical Society of Australia - SSA NSW: Early Career and Student Statisticians Career Event 2025
statsoc.org.au
Reposted by Chris Chapman
trekkiebill.bsky.social
Do you ever think about the fact that Wikipedia is the last good major website on the internet? You aren't bombarded with ads. It doesn't try to push video on you, and it doesn't redirect you to a scam site.
Reposted by Chris Chapman
kenwhite.bsky.social
Every few months now I re-read this "Who Goes Nazi?" piece from 1941 and am blown away by how it captures the people we are dealing with 80 years later.

harpers.org/archive/1941...
Who Goes Nazi?, by Dorothy Thompson
harpers.org
cchapman.bsky.social
Impressive work, esp. combining the two papers!

Last week I spoke at Google's (internal) Survey Con about why "Synthetic Survey Data is Not Data".

One audience question: would the estimates get better by adding more data?

My response was 🤷 "maybe" 🤷 ... but this is a much better answer!
verasight.bsky.social
In Verasight’s second synthetic data paper, @gelliottmorris.com, Ben Leff and @peterenns.bsky.social find that the performance of synthetic samples does not consistently improve (and can perform worse) with additional administrative data or real survey responses. Link in the thread.
Reposted by Chris Chapman