Bioethicist specializing in ethical and evidence-based integration of technologies in health care. AI Director, Women’s and Children’s Health Network | THRF Clinical Research Fellow, Australian Institute for Machine Learning. Adelaide, South Australia.
The worst part, though, is that as their costs go up for adding these nonsense features, they inevitably offload them onto us, the consumers. 😡 I’m now looking into old-school options again to get away from this stuff.
October 12, 2025 at 9:21 PM
Not arrogant - 100% accurate. The very fact that these folks can switch up their buzzwords on a dime shows they don’t understand the words in the first place.
October 12, 2025 at 2:34 AM
I've yet to see a single discussion of "Canadian digital sovereignty" include any Indigenous experts; at best we're given a token mention that at some point they'll "consult with Indigenous Peoples," like we're some sort of monolith. Umm, y'all need to consider all our lands, and our distinctiveness.
October 11, 2025 at 3:33 PM
This is a feasible, generalizable approach to evaluating commercial scribe tools to make an informed decision about where they fit (and don’t fit) & to building clinical procedures that share accountability between physicians and their health institution.
October 2, 2025 at 12:01 AM
I like the argument glimpsed in @abeba.bsky.social’s work - we can oppose using robots in this way w/o the anthropomorphising that is dehumanising. Curious, @abeba.bsky.social, if you’re expanding some of the argumentation in 2.3 of the paper?
Do you know what I find wild tho? Nearly a decade of being an ethicist - raising concerns, questions, challenges in a plethora of contexts has been my job. This is different. I’ve never felt so at risk personally for doing this as I do for LLMs. People get personally offended, get nasty even.
September 22, 2025 at 10:41 PM
I think it’s kinda fascinating to see the epistemic struggle in this write-up between the beliefs espoused about LLM “capabilities” versus the considerations re ethics violations… 🤷🏼♀️
August 27, 2025 at 12:08 AM
Always stunning how the scientific standards for study designs, and the claims one can make from them, get thrown out the window when it involves AI. Any study that is not longitudinal is meaningless given we are now seeing the long-term effects of AI use = worse learning and performance.
August 11, 2025 at 1:01 AM