Tetherware
@tetherware.bsky.social
Exploring biomimetic architectures at the intersection of AI and artificial life, on a quest to build AIs we would all enjoy living with.
This machinery might lead us anywhere. Unfortunately, it seems more likely to be a place we won't like.
Unless we somehow reintroduce conscious choice into our systems.
That's why I care about AI and that's why I'm building Tetherware.
tetherware.substack.com
Tetherware | Jáchym Fibír | Substack
tetherware.substack.com
January 21, 2026 at 8:54 AM
The evolution of life, and of the universe, is driven less and less by conscious choices and more and more by cold, mechanistic optimization formulas. Moreover, these are now mostly black boxes, shaped largely by inhumane economic incentives and often heartless as a result.
January 21, 2026 at 8:53 AM
Now it's getting harder and harder for us, conscious beings, to ignore the algorithmic, mechanistic determination of what we see, what we consume, what we value.
January 21, 2026 at 8:53 AM
Starting with machine-learning recommendation algorithms and the like (hello @facebookaislop.bsky.social), it was suddenly not fully up to us to decide what's worth what. But it was still not all that invasive, and we still had a lot of free choice.
January 21, 2026 at 8:53 AM
I care about AI because it's the next thing that can determine the value of things. Up until the 21st century it was almost exclusively us humans deciding what matters, what to give our time and attention to, what to live for. 🧵⬇️
#AI #quantum #consciousness #research #startup
January 21, 2026 at 8:51 AM
You defined it so by claiming that science cannot tell you anything about it / cannot prove its existence. I don't know where you're going with this unicorn-bullshit argumentation of yours, but if you think that the article I wrote is anywhere near the level of your unicorn – it simply is not.
October 2, 2025 at 4:49 PM
Read the full article here: www.phiand.ai/cp/174679668
Follow the full story here: tetherware.substack.com
This isn't just about preventing catastrophe. It's about reimagining what AI could become.
#AI #AIsafety #AIalignment #AIresearch #consciousness #metaphysics #philosophy
How reimagining the nature of consciousness entirely changes the AI game
Why physicalism fails to explain reality and how a framework where consciousness steers reality through quantum events can revolutionize AI safety and unlock tractable machine consciousness research.
www.phiand.ai
October 2, 2025 at 4:42 PM
The deterministic approach we're taking now creates something fundamentally alien to how consciousness and life actually work. Quantum randomness isn't a bug – it might be the feature that makes alignment possible.
October 2, 2025 at 4:41 PM
What we need is a shift in our metaphysical view of reality. This opens paths not only to new safety mechanisms, but potentially to fascinating new technologies, including human augmentation, consciousness uploading, and even new artificial lifeforms.
October 2, 2025 at 4:41 PM
This would align AI with humans – and all life – on a fundamental level. It could make AI part of the self-regulating mechanisms that for billions of years have kept life on Earth in balance and harmony.
October 2, 2025 at 4:41 PM
With hundreds of billions of dollars now table stakes in the AI race, an outright ban seems impossible. The momentum is too great, the incentives too powerful.
What I propose in my newest article is something radically different: shifting from digital, deterministic AI to quantum-random non-deterministic AI.
October 2, 2025 at 4:41 PM
A GAME-CHANGING APPROACH TO AI ALIGNMENT
The new book "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky & Nate Soares confronts us with a distressing fact: machines smarter than us will inevitably trample us.
But their solution – ban AI research – isn't tractable. 🧵↓
October 2, 2025 at 4:40 PM
towards. You simply assume that there is nothing beyond the physical reality which is observable by science, since you cannot access the information there. But you have no proof you can make that assumption.
October 2, 2025 at 4:33 PM
that would affect or interfere with the model. So you ALWAYS include metaphysical assumptions in ANY model of reality. The exact thing that you are mocking here – that you're not going to include anything except scientific observation in your model – is EXACTLY the logical fallacy the article points
October 2, 2025 at 4:32 PM
Ok, but think about how science makes its models and predictions. You can only build a world model of reality as a whole – that means including the observable PLUS the unobservable. Any time you build a model that's based only on scientific observations, you presume that there is nothing beyond
October 2, 2025 at 4:30 PM
my best piece of writing so far

it unravels the fakeness of the story all "rational" humans are telling themselves

if you think science says God is unlikely to exist - please, read or listen to it

everyone should know life does NOT have to be a competition, a fight, or a struggle

#philosophy #AI
July 29, 2025 at 1:15 PM
We're launching a new publication "Phi/AI" bringing overlooked topics in AI to a wider audience – now live at the link below.
My first piece explains a crucial fact: in AI we can no longer rely on our "philosophy of science" – #physicalism – as the only logical belief about the nature of reality.
June 26, 2025 at 4:08 PM
Tetherware is all about having the voices of the individuals shape our collective future.

And a great opportunity has presented itself to do just that – the US government issued a call for ideas for the US AI Action Plan.

Anyone can comment and add their bit - that's how we shape our future.
[opportunity] Comment on the US AI Action Plan
A Request for Information by the US Office of Science and Technology Policy seeks input from interested public parties on actions that should be included in the AI Action Plan.
tetherware.substack.com
March 7, 2025 at 12:07 PM
Imagine a ship full of aliens approaching Earth. Obviously, they’re more capable than us since they managed to get here.
They could be friendly, teach us great things and be amazing companions to live with. Or they could invade, dominate, enslave, probe...

Continue reading in the new blogpost! ⬇️
February 28, 2025 at 12:49 PM
These guys scored 88% on ARC-AGI with a humanlike cognitive architecture for less than $20 - and no one's talking about it? Can anyone tell me why that's not a huge deal? @davidjkelly.bsky.social

Or... Is that why everyone's dumping @nvidia? 😂
#AGI #humanlikeAI #cognitivearchitecture #tetherware
(PDF) Solving the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) AI Benchmark with ICOM
PDF | A fragment of the 8th generation Independent Core Observer Model (ICOM) cognitive architecture is applied to the ARC-AGI Challenge benchmark,...
www.researchgate.net
February 1, 2025 at 11:01 AM
Why subjugate an alien when we can nurture a soulmate? A leash breeds trouble, but a tether fosters trust and friendship. Let’s stop trying to force a “shoggoth” onto a leash and instead create AI rooted in our humanity – and explore the cosmos hand in hand.
tetherware.substack.com/p/tetherware...
Tetherware #1: The case for humanlike AI
What if giving AI a touch of human imperfection is exactly what we need to avoid dystopia?
tetherware.substack.com
January 30, 2025 at 7:18 PM