Dr Brett H Meyer
@bretthmeyer.bsky.social
We live because everything else does.
—Richard Wagamese

He/him, Professor of ECE, researching hardware-software co-design of machine learning systems; 🇺🇸 in 🇨🇦! Views expressed are my own.

https://rssl.ece.mcgill.ca/
Pinned
There’s something delightful about starting over.
I’d like the “too close to the sun” package, please! Thank you.
November 14, 2025 at 2:37 AM
Reposted by Dr Brett H Meyer
Every ad now
November 13, 2025 at 5:38 PM
Reposted by Dr Brett H Meyer
New: Google has chosen a side in Trump's mass deportation campaign. Google is hosting a CBP facial recognition app to hunt immigrants, with no indication Google will remove it. At the same time, Google takes down apps for reporting ICE sightings

“Big tech has made their choice”

www.404media.co/google-has-c...
Google Has Chosen a Side in Trump's Mass Deportation Effort
Google is hosting a CBP app that uses facial recognition to identify immigrants, while simultaneously removing apps that report the location of ICE officials because Google sees ICE as a vulnerable gr...
www.404media.co
November 13, 2025 at 2:11 PM
Reposted by Dr Brett H Meyer
We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art, and very often in our art, the art of words.
November 11, 2025 at 10:15 PM
When you wake up and think that it’s a warm January day rather than a cold November day.
November 11, 2025 at 9:35 PM
Reposted by Dr Brett H Meyer
I am simply unwilling to rely on these black boxes to generate anything on my behalf or for my use, nor am I willing to feed them my thinking or writing in service of unstated goals and potentially immoral uses. Some of the generative AI tools out there are potentially useful, sure, but at what cost?
“What ChatGPT says about politics — or anything — is ultimately what the people who created it say it should say, or allow it to say; more specifically, human beings at OpenAI are deciding what neutral answers to those 500 prompts might look like and instructing their model to follow their lead.”
Elon Musk’s Grokipedia Is a Warning
The centibillionaire’s Wikipedia clone is ridiculous. It’s also a glimpse of the future.
nymag.com
November 11, 2025 at 2:08 PM
Honestly, sometimes I really miss my math dork days …
Haha, this from the New Yorker is getting passed around the math dork community. I did a comic about this kind of thought a few years ago: www.smbc-comics.com/comic/commut...
November 8, 2025 at 4:23 PM
Reposted by Dr Brett H Meyer
Haha, this from the New Yorker is getting passed around the math dork community. I did a comic about this kind of thought a few years ago: www.smbc-comics.com/comic/commut...
November 7, 2025 at 5:26 PM
A few days ago I watched a movie that was prefaced by the threat of fine and imprisonment if the copyrighted work was pirated. I guess we’ll see if under late-stage capitalism theft as exploitation is protected; I’m not optimistic that OpenAI will face justice.
OpenAI pirated large numbers of books and used them to train models.

OpenAI then deleted the dataset with the pirated books, and employees sent each other messages about doing so.

A lawsuit could now force the company to pay $150,000 per book, adding up to billions in damages.
November 4, 2025 at 4:02 PM
Reposted by Dr Brett H Meyer
The poor work more than the rich. This is so obvious it shouldn't need to be said, but American ideologies of meritocracy and bootstrapping obfuscate the realities of poverty. No one works harder than the poor.

It's also expensive to be poor, but that's another post for another day.
If you make 130% of the federal poverty level or less, you qualify for SNAP.

A family of four, making $40,560, qualifies for SNAP benefits, meaning thousands of Missouri teachers qualify.

I’m tired of the bullshit. Most of the people who receive SNAP benefits work.
October 28, 2025 at 5:39 PM
Reposted by Dr Brett H Meyer
I am not a "tech critic". I am an antifascist, a feminist, an anticapitalist, an engineer. My criticism of tech flows from my politics and values. Not from a desire to save or destroy tech. Tech is an expression of power and that's what the whole conversation is about.
October 27, 2025 at 3:53 PM
As an AI researcher who is more often critical of AI than not: this, a million times this.
I don't actually hate AI, I just happen to be convinced that capitalism is gonna use it for some bad shit 🤷
October 27, 2025 at 6:59 PM
I don’t like how often I find myself having this conversation.
What if we did a single run and declared victory
October 23, 2025 at 2:53 AM
I’m grateful for researchers doing the hard work to study the consequences of AI-in-the-research-loop. In cognitive science:
Can AI simulations of human research participants advance cognitive science? In @cp-trendscognsci.bsky.social, @lmesseri.bsky.social & I analyze this vision. We show how “AI Surrogates” entrench practices that limit the generalizability of cognitive science while aspiring to do the opposite. 1/
AI Surrogates and illusions of generalizability in cognitive science
Recent advances in artificial intelligence (AI) have generated enthusiasm for using AI simulations of human research participants to generate new know…
www.sciencedirect.com
October 22, 2025 at 1:19 PM
Reposted by Dr Brett H Meyer
“The delegation of tasks to “tools & assistants” constitutes a methodological decision (…) Researchers should therefore be required to explain why they are trusting a black box that is neither open nor fair.”
@altibel.bsky.social & @petertarras.bsky.social

www.leidenmadtrics.nl/articles/why...
Why AI transparency is not enough
Recently, a taxonomy to disclose the use of generative AI (genAI) in research outputs was presented as an approach that creates transparency and thereby supports responsible genAI use. In this post we...
www.leidenmadtrics.nl
October 15, 2025 at 11:57 PM
Reposted by Dr Brett H Meyer
I’d rather pay for a thousand undocumented workers’ ER visits than a single missile fired at a Venezuelan fishing boat.
Leavitt: "When an illegal alien goes to the emergency room, who's paying for it? The American taxpayer."
October 3, 2025 at 6:26 PM
Reposted by Dr Brett H Meyer
I am seeing news that AI companies face “unexpected” obstacles in scaling up their AI systems.

Not unexpected at all, of course. Completely predictable from the Ingenia theorem.
The intractability proof (a.k.a. Ingenia theorem) implies that any attempts to scale up AI-by-Learning to situations of real-world, human-level complexity will consume an astronomical amount of resources (see Box 1 for an explanation). 13/n
May 17, 2025 at 8:48 AM
Don’t let anyone tell you integrating AI tooling is essential.

“Approximately half of [those] surveyed viewed colleagues who sent [AI] workslop as less creative, capable, and reliable than they did before ... Forty-two percent saw them as less trustworthy, and 37% saw [them] as less intelligent.”
A new study, based on a survey of 1,150 workers, suggests that the injection of AI tools into the workplace has not resulted in a magic productivity boom and instead increased the amount of time that workers say they spend fixing low-quality AI-generated “work.”

🔗 www.404media.co/ai-workslop-...
AI ‘Workslop’ Is Killing Productivity and Making Workers Miserable
AI slop is taking over workplaces. Workers said that they thought of their colleagues who filed low-quality AI work as "less creative, capable, and reliable than they did before receiving the output."
www.404media.co
September 23, 2025 at 4:55 PM
In other words, the model reflects the biases of the societal context from which its training data emerges. Who could have predicted this?
The findings “suggest that medical AI tools powered by LLMs have a tendency to not reflect the severity of symptoms among female patients, while also displaying less ‘empathy’ towards Black and Asian ones.”
AI medical tools downplay symptoms in women and ethnic minorities
Large language models reflect biases that can lead to inferior healthcare advice to female, Black and Asian patients
www.ft.com
September 22, 2025 at 7:31 PM
And at the risk of it being screenshot and used against me, I’d recommend the same for Canadian professors.
The official White House social media has posted this.
The American Association of University Professors is advising their members to keep their social media private and to refrain from posting things that can be screenshot and used against them.

Welcome to dystopia, have a horrible time.
September 18, 2025 at 1:13 PM
Reposted by Dr Brett H Meyer
at a time when folding for fascism and aligning with authoritarianism is the norm, distinguish yourself by having integrity and convictions and defending them
September 18, 2025 at 11:13 AM
“Fight, scorn, defy, and obstruct [the Supreme Court]” is not something I ever imagined I’d agree with, let alone repeat.
/8 Fight, scorn, defy, and obstruct them, because they are the unapologetic forces of ignorance, bigotry, and thuggery.
September 9, 2025 at 8:10 PM
“AI tech shows promise writing emails or summarizing meetings. Don't bother with anything more complex.”
UK government report based on an internal civil service trial finds Copilot doesn’t increase productivity; indeed, it makes Excel tasks take longer with more errors, and requires PowerPoint users to have ‘corrective action’ applied to their outputs. www.theregister.com/2025/09/04/m...
M365 Copilot fails to up productivity in UK government trial
AI tech shows promise writing emails or summarizing meetings. Don't bother with anything more complex
www.theregister.com
September 7, 2025 at 10:22 PM
Reposted by Dr Brett H Meyer
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical
thinking, expertise, academic freedom, & scientific integrity.
1/n
September 6, 2025 at 8:13 AM
Reposted by Dr Brett H Meyer
Likening PhD holders to a (non-functional) algorithm is a form of dehumanisation & anti-intellectualism that is a bellwether for contemporary fascism. Essentially it's a typical — if not the archetypal — first step towards fascism: to dehumanise, deskill, defund, and, ultimately, fire the academics.
What I would like to remind everyone talking about Sam Altman talking about the “PhD level intelligence” of the new ChatGPT is that Sam Altman dropped out of college so he… has no experiential construct for what grad school even is.
August 9, 2025 at 6:35 AM