M. C. Flux PhD
@fluxinflux.bsky.social
560 followers 21 following 120 posts
Neuroscience and Clinical Psychology PhD. Educator and researcher. My research explores equity, novel therapeutics, and human emotion. I’m passionate about communication and using information to empower communities. Proudly queer and neurodiverse.
fluxinflux.bsky.social
As a scientist, I can attest to this!
This image shows the title and byline of a “Futurism” article. The title is: “The More Scientists Work with AI, the Less They Trust It.” The byline: “Numbers are down across the board. By Joe Wilkins / Published Oct 13, 2025.” It shows an illustration of a scientist resting his chin on his fists, staring at a laptop in dismay.

Alt text written by Bluesky user @fluxinflux, not AI.
fluxinflux.bsky.social
The thing that gets me is that there isn’t even any worry. It’s just “we created this pathway to easy access heroin. Our heroin is the safest, but soon there will be a lot of unsafe heroin. Good luck!”
fluxinflux.bsky.social
The world works this way because of you, Mr. Altman. You created a world of runaway digital body snatching and SpongeBob meth labs. Don’t blame human nature to remove your culpability.

Quote from today’s article in @theverge.com

www.theverge.com/ai-artificia...
He positioned the launch's speed bumps as learning opportunities. "Not for much longer will we have the only good video model out there, and there's going to be a ton of videos with none of our safeguards, and that's fine, that's the way the world works," Altman said, adding, "We can use this window to get society to really understand, 'Hey, the playing field changed, we can generate almost indistinguishable video in some cases now, and you've got to be ready for that.'"
fluxinflux.bsky.social
I love that reference!
fluxinflux.bsky.social
I very much think we are on the same page. Which is why I wanted to engage! These are really complex issues that often get lost when we try to make short slogans to summarize them.

As an educator, I often struggle with how to stay precise AND concise. I’m still learning.
fluxinflux.bsky.social
I chose a shorthand way of saying that which does lose some accuracy, but it was more pithy for a Bluesky post.
fluxinflux.bsky.social
The more nuanced point here is that when we reject replicable observations and decisions made on empirical evidence in favor of a stance that we prefer but is unsupported by those observations, then the system of science is no longer self-correcting in the way I discuss in the video.
fluxinflux.bsky.social
Approaches to ethics are themselves ideological. Even the distinction we draw between subjectivity and objectivity can be seen as the consequence of an ideology.

But we tend to take the stance that if an observation is replicable, it stands on its own in some particular way.
fluxinflux.bsky.social
Oh definitely. All of this is a broader ideological stance on what science is and its role in society.

This video also didn’t really have a lot to do with that point, it’s the conclusion of one of my general psychology lectures on research ethics. But I’ve been thinking about this lately.
fluxinflux.bsky.social
I think I would refer to this more as an aphorism, but your point is taken.

Empiricism itself can be seen as an ideology, which betrays the challenges of language when discussing these issues. Accurate descriptions of this issue take far more words than the character limits on Bluesky.
fluxinflux.bsky.social
Science is self-correcting, but only if our goal is accuracy over ideology.
fluxinflux.bsky.social
The biggest AI problem right now isn’t a Skynet scenario (yet), it’s the concentration of power and lack of oversight or public input. “Your hairdresser has… more regulation…”

This is from @theatlantic.com’s recent article “The AI Doomers are getting Doomier” by @matteowong.bsky.social
fluxinflux.bsky.social
Self-awareness isn’t even the biggest issue. I teach the computational neuroscience of consciousness as a consequence of self-modeling to facilitate homeostasis. The human brain evolved these systems to manage embodied existence. Modern AI doesn’t need this. The potential threats are weirder.
fluxinflux.bsky.social
The stupidity fools us. We are not dealing with Skynet, yes, but what we are dealing with holds dangers that are challenging to even imagine.

This is something we discuss a lot in my AI reading group (which is why I was reading that paper). The computational architecture of our AI systems makes this weird.
fluxinflux.bsky.social
Seriously. This is all becoming far too real.
fluxinflux.bsky.social
Currently reading an article about the impact of AI hype. There’s so much here, but the impact of AI investment on the balance of military investment and power is the most damning.

(Highlights my own)

Paper: dl.acm.org/doi/full/10....
A highlighted screenshot from an academic paper about the impact of AI hype. It reads:

We see this dynamic at play in OpenAI's shifting stance and policies as they have become more closely tied to Microsoft, a choice they made admittedly to access scarce computational resources. This is illustrated by the 2024 change to their acceptable use policy, which significantly softened the proscription against using their products for "military and warfare"; this change unlocked new revenue streams [112], connected OpenAI's large scale AI to Microsoft's existing US military partnerships [135], and later pushed AI to direct battlefield application via a partnership with the defense-tech company Anduril [85]. The public did not have a say in this determination, nor, we assume, did the global stakeholders whose interests may not be safeguarded by the militaries that OpenAI chooses to provide services to. And yet, the concentrated private industry power over AI creates a small, and financially incentivized segment of AI decision makers. We should consider how such concentrated power with agency over centralized AI could shape society under more authoritarian conditions.
fluxinflux.bsky.social
Thanks for the tag though!
fluxinflux.bsky.social
This is my field! But I am not looking for a postdoc position 😅
fluxinflux.bsky.social
I guess I’ve just cultivated my appearance to look surreal? Feel free to check me out! I am very much a real person 😅
fluxinflux.bsky.social
Hahaha it’s doing its job then!
fluxinflux.bsky.social
Fair! This is a clip from a lecture for my general psychology class where we discuss different biases. So it’s new for many of them.