S. Scott Graham
@sscottgraham.bsky.social
760 followers 430 following 410 posts
Writing about health, ethics, COI, rhetoric, AI, and anatomy museums (usually only 2-3 in any given act of writing) at UT-Austin. https://sscottgraham.com
Pinned
sscottgraham.bsky.social
Sharing a little bit more about my NEH-sponsored research on anatomical museums today. I have no idea if this project will retain funding, but I am incredibly grateful for the NEH program officers and reviewers who have made so much humanities research possible. sscottgraham.com/archives/911
Entrance to Museum Vrolik. White brick corridor with a black-outlined doorway with fat trim. Blue sign of museum name and posters of anatomical specimens on the wall.
sscottgraham.bsky.social
Just got falsely accused of using AI for something I worked hard on. Stop doing this! I know you think you can tell, but I promise you can't.
sscottgraham.bsky.social
My first LLM test was a peer teaching observation letter. The output was indistinguishable from human. It wasn't accurate to what happened in the classroom that day, but the impact of the letter on annual review or P&T would have been no different than the real one. (I submitted the real one!)
sscottgraham.bsky.social
There's a lot of research on prompting to improve outputs. Some is remarkably reliable. It might look like the prompt is trying to reach the LLM's "mind," but that's a dangerous assumption for both prompter & critic. Just b/c that's not happening, doesn't mean there aren't measurable improvements.
sscottgraham.bsky.social
While it's certainly true that LLMs will never stop hallucinating, that doesn't mean prompting can't reduce hallucination rates. Medical researchers, in particular, are very used to pursuing improvement empirically & measuring responses, all under the expectation that perfection is unattainable.
Trapping LLM Hallucinations Using Tagged Context Prompts
Recent advances in large language models (LLMs), such as ChatGPT, have led to highly sophisticated conversation agents. However, these models suffer from "hallucinations," where the model generates fa...
arxiv.org
Reposted by S. Scott Graham
aktange.bsky.social
My latest: "Numerous studies show that majors in the humanities—typically, in departments of English, history, philosophy, religious studies, classics and languages—lead students to employment and life satisfaction outcomes as positive as those for majors traditionally championed as 'practical.'”
Counterpoint | Minnesota humanities graduates thrive in meaningful careers
"The stereotype of the underemployed history major is simply not true," professor Andrea Kaston Tange writes.
www.startribune.com
Reposted by S. Scott Graham
bleary.off-the-records.com
If anyone needs me I will be in the museum, lying down next to the bog bodies.
Did people really memorize phone numbers before cell phones, or is that just a movie thing?
I was watching some old shows from the 90s and noticed people would just dial numbers from memory - like they'd call their friends or family without looking anything up.
Made me wonder if that was actually normal back then? Did people genuinely have all their important numbers memorized, or did most folks keep a little address book or written list nearby?
Reposted by S. Scott Graham
profgoldberg.bsky.social
👀

I love being an AE for a journal that has SIs like this!

(I am the AE for Law & Bioethics at this journal but have absolutely nothing to do with this SI; I just think it’s cool!)
reginamueller.bsky.social
We’re excited to share a CfP for a Special Issue “Bioethics and Structural (In)justice” in Bioethics! We invite articles from all disciplines. Deadline: September 1, 2026
We can’t wait to read your contributions!
Regina Müller, Mirjam Faissner, Isabella Marcinski-Michel & Stefanie Weigold
Reposted by S. Scott Graham
axdouglas.bsky.social
I wonder if Dutch philosophy departments in the 1630s were only hiring in the philosophy of tulips.
saraluckelman.bsky.social
One might also say "hiring committees".
sscottgraham.bsky.social
This is giving me flashbacks to 1991 when every news story about the breakup of the USSR featured b-roll of breadlines.
drewharwell.com
Military families on day 9 of the shutdown lining up at the food bank
sscottgraham.bsky.social
A non-trivial subset seems to mainly be a way of making efforts to induce trust look like efforts to demonstrate trustworthiness
sscottgraham.bsky.social
I love this story with my entire heart and soul. www.bbc.com/news/article...
US scientist Dr Fred Ramsdell was on the last day of a three-week hike with his wife Laura O'Neill and their two dogs, deep in Montana's grizzly bear country, when Ms O'Neill suddenly started screaming.

But it was not a predator that had disturbed the quiet of their off-grid holiday: it was a flurry of text messages bearing the news that Dr Ramsdell had won the Nobel Prize for medicine.

Dr Ramsdell, whose phone had been on airplane mode when the Nobel committee tried to call him, told the BBC's Newshour Programme that his first response when his wife said, "You've won the Nobel prize" was: "I did not."
sscottgraham.bsky.social
Impressive results, but the lack of academic or student writing in the test sets is a bit of a red flag for me. I see from your other post that you don't rely on it exclusively, and I appreciate that.
sscottgraham.bsky.social
Many faculty are confident they can identify AI submissions. The data mostly says otherwise. Some students are confident they can identify AI-generated faculty feedback. Anyone seen any studies on this yet? I suspect this might be easier since the individualization gap is probably greater.
Reposted by S. Scott Graham
jsnyder.bsky.social
Great piece on the massive conflicts of interest when private equity firms own IRBs that they then use to assess drug trials for their own companies.

@hollylynchez.bsky.social: "If you are just focused on turnaround time, that doesn’t tell you really anything about quality.”
How Private Equity Oversees the Ethics of Drug Research
www.nytimes.com
sscottgraham.bsky.social
Jennifer for me 🤷‍♂️
sscottgraham.bsky.social
I can think of six different ways to agree with you here
Reposted by S. Scott Graham
numb.comfortab.ly
Bluesky feed is like

------------------
we're doomed
------------------
we're doomed
------------------
we're doomed
------------------
cute kitty!
------------------
we're doomed
------------------
we're doomed
------------------
[adult content]
------------------
we're doomed
------------------
sscottgraham.bsky.social
This is the next iteration of “participants were 138 undergrads looking for extra credit in psych 102”
jamiecummins.bsky.social
Can large language models stand in for human participants?
Many social scientists seem to think so, and are already using "silicon samples" in research.

One problem: depending on the analytic decisions made, you can basically get these samples to show any effect you want.

THREAD 🧵
The threat of analytic flexibility in using large language models to simulate human data: A call to attention
Social scientists are now using large language models to create "silicon samples" - synthetic datasets intended to stand in for human respondents, aimed at revolutionising human subjects research. How...
arxiv.org
Reposted by S. Scott Graham
djvanness.bsky.social
A lot of people think that every international student admitted means one fewer spot for domestic students, when the opposite is more likely true - the tuition revenue international students bring allows public universities to provide substantial discounts to domestic students, improving access.
sscottgraham.bsky.social
Was the argument more about the first L or the second L?
sscottgraham.bsky.social
Do historians have established guidelines for when to use and not use parenthetical lifespans, e.g. "Herman Boerhaave (1668-1738)"? I can't discern the rules from reading and google's search disaster just wants to give me population statistics.