Artem Andreenko
@dustyentropy.bsky.social
39 followers 37 following 67 posts
Building AGI @ SentientWave This is my personal blog about computation, communication, and energy. https://artemandreenko.com
🚀 Introducing CanvasCraft: a 2D drawing library for Elixir, powered by Skia + Rustler. Declarative Scene DSL → charts, dashboards, graphics with high-perf rendering.

github.com/miolini/canv...
Nothing puts a modern CPU workstation through a tougher workout than building the Android Open Source Project from scratch. It has to churn through roughly 700 GB of source code.
The hourly operating cost of a humanoid robot is fundamentally not very different from that of an electric bike.
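A back-of-envelope version of that claim, where every number is an illustrative assumption of mine rather than a measured figure:

```python
# Back-of-envelope comparison of hourly *energy* cost.
# All numbers below are assumptions for illustration, not measurements.

ELECTRICITY_PRICE = 0.15  # USD per kWh (assumed grid rate)

def hourly_energy_cost(avg_draw_kw: float, price_per_kwh: float = ELECTRICITY_PRICE) -> float:
    """Cost of running a device for one hour at a given average power draw."""
    return avg_draw_kw * price_per_kwh

robot_cost = hourly_energy_cost(0.5)   # assume a humanoid averages ~500 W
ebike_cost = hourly_energy_cost(0.25)  # assume an e-bike averages ~250 W

print(f"robot:  ${robot_cost:.3f}/h")
print(f"e-bike: ${ebike_cost:.3f}/h")
print(f"ratio:  {robot_cost / ebike_cost:.1f}x")
```

Even if the assumed draws are off by a factor of a few, both land at cents per hour, which is the point of the comparison.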
Computer chips still lag far behind the human brain in voltage efficiency. A brain spike swings about 100 mV, while CPU chips typically run at 1.0-1.5 V or higher, more than ten times the voltage of a neuron spike. Imagine the efficiency if chips could work at brain-level voltages.
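One reason the voltage gap matters so much: dynamic CMOS switching power scales with the square of the supply voltage, P ≈ C·V²·f. A toy calculation, where the capacitance and frequency are made-up illustrative values and real chips could not simply drop to 100 mV (thresholds, leakage, and noise margins all intervene), so treat this as an upper-bound intuition:

```python
# Dynamic switching power of CMOS logic: P ≈ C * V^2 * f.
# Holding capacitance and clock fixed, the savings come purely from V^2.

def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    """Classic first-order dynamic power model for switching logic."""
    return c_farads * v_volts**2 * f_hz

p_cpu = dynamic_power(1e-9, 1.2, 3e9)  # illustrative: 1 nF switched at 3 GHz, 1.2 V
p_low = dynamic_power(1e-9, 0.1, 3e9)  # same chip at a neuron-like 100 mV

print(f"power ratio: {p_cpu / p_low:.0f}x")  # (1.2 / 0.1)^2 = 144x
```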
Life is a self-modifying computational phase of matter.
I'm sharing my personal productivity tool, WebAI, which lets you interact with any currently open page using AI, and even listen to it, in Chrome. It runs fully locally, backed by self-deployed Ollama and Kokoro services. Please watch the demo with audio.

Code:
github.com/miolini/webai
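A sketch of how a tool like this might talk to a locally running Ollama instance. The endpoint and JSON fields are Ollama's documented `/api/generate` API; the prompt template and model name are my own illustrative assumptions, not WebAI's actual implementation:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(page_text: str, model: str = "llama3") -> dict:
    """Construct the request body for a non-streaming generate call."""
    return {
        "model": model,
        "prompt": f"Summarize this web page:\n\n{page_text}",
        "stream": False,
    }

def summarize_page(page_text: str, model: str = "llama3") -> str:
    """Send page text to the local Ollama server and return the model's reply.

    Requires `ollama serve` running locally with the model already pulled.
    """
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(page_text, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```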
Every AGI agent must have access to at least one Kubernetes cluster for running its tools, dev environments, model inference/fine-tuning, self-hosted source code, and CI/CD pipelines, and for monitoring its “digital body”. And it must be dynamically reconfigurable.
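One concrete slice of that setup can be sketched as a Kubernetes manifest. Everything here, the names, namespace, image, and resource limits, is a hypothetical illustration of an agent running a single tool in an isolated, resource-capped sandbox:

```yaml
# Hypothetical manifest: an agent launches one tool as a capped, one-shot Job.
apiVersion: batch/v1
kind: Job
metadata:
  name: agent-tool-run        # illustrative name
  namespace: agent-sandbox    # isolate agent workloads from everything else
spec:
  backoffLimit: 0             # a failed tool run should surface, not retry
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: tool
          image: python:3.12-slim   # whatever runtime the tool needs
          command: ["python", "-c", "print('tool output')"]
          resources:
            limits:                 # cap the blast radius of any one tool
              cpu: "1"
              memory: 512Mi
```

Dynamic reconfiguration then falls out of the platform: the agent can template and apply manifests like this on demand rather than managing hosts directly.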
Quantum superposition can be thought of as a form of lazy computation because it allows for the deferment of computations until they are actually needed.
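The analogy can be made concrete with a classic lazy-evaluation device, the thunk. This is ordinary Python, not quantum anything; it only illustrates the "deferred until demanded" part of the comparison:

```python
# A thunk wraps a computation in a zero-argument callable so nothing runs
# until the value is actually demanded ("forced"), loosely like a state
# that is not resolved until measurement.

evaluations = []  # track when real work happens

def make_thunk(fn, *args):
    """Delay fn(*args) until the returned callable is forced; memoize the result."""
    cache = {}
    def force():
        if "value" not in cache:        # compute at most once
            evaluations.append(args)
            cache["value"] = fn(*args)
        return cache["value"]
    return force

expensive = make_thunk(pow, 2, 100)     # nothing computed yet
assert evaluations == []                # still deferred
print(expensive())                      # forcing = "measurement": work happens now
print(expensive())                      # memoized: no second computation
```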
Hosting a podcast for 3 years taught me that explaining ideas out loud greatly deepens your own understanding. This method could be a powerful tool for improving synthetic datasets and training AI models.
I'm happy to share my research on designing and measuring the resilience of distributed systems, from computer infrastructure to large-scale swarms of von Neumann probes and beyond.

"Calculus of Distributed Persistence"
sentientwave.com/cdp
The terror of the ontological instability was overwhelmed by a profound, unexpected beauty, the perfect, cascading symmetry of worlds within worlds, a dizzying architectural elegance suggesting that their entire existence was merely one facet of a vast, recursive jewel.
For a dizzying second, she saw another tear in that sky, revealing yet another layer, and another, each vista nested within the last like an infinite matryoshka doll.
When the sky-glitch happened, tearing reality like wet paper over Neo-Kyoto, Anya didn't see code or error messages. Instead, through the ephemeral rip, she glimpsed another city, impossibly familiar yet subtly different, bathed in an alien light.
Because if we can see ourselves differently, we can build differently. And that changes everything.
Maybe it’s time to revisit ideas like the Lisp Machines, not with nostalgia, but with new eyes. With AI as a partner, not just a feature. We don’t need more apps. We need a renaissance.
What if we reimagined the system itself? A machine not built to be replaced every two years, but one that evolves with you. Learns with you. Becomes a true extension of your mind. A tool so seamless, so alive, that it becomes a masterpiece, a living artifact of human creativity.
But now, with intelligence, real intelligence becoming abundant, we have a chance. A rare moment to pause, reflect, and ask ourselves: Did we take the right path? And if not, why not go back and start again, but this time, with vision?
But despite all the breathtaking advances, GPUs that rival supercomputers, lightning-fast memory, flash storage, fiber optic communication, we’ve used these miracles to mask the ugliness beneath. The bloat. The complexity. The compromise.
You know, I’ve been thinking… Somewhere along the way, the tech industry made a wrong turn. Maybe it was the pressure of quarterly earnings, maybe it was the obsession with scale over soul.
The latest news once again confirms that AI is not an enemy of humanity but a technology that helps us recover from challenges, and do so more reliably.
While human brain activity can be described as a computation, this isn't the full picture. Biological reproduction itself can similarly be described as a computation.
Transformer model distillation has a serious disadvantage relative to large teacher models: fewer layers. That makes it harder for the student to learn the long neural circuits that evolved in the teacher. Depth matters a lot for hard tasks like precise reasoning.
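For context, the standard distillation objective matches the student's output distribution to the teacher's via a temperature-softened KL divergence; the depth problem is exactly that a shallow student can fit these output targets while still lacking the internal circuitry that produced them. A pure-Python sketch of the mechanics (real training would use a DL framework; logits here are made-up):

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over temperature-scaled logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distill_kl(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) between softened output distributions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, -2.0]   # illustrative logits
student = [2.5, 1.2, -1.0]

print(f"distillation loss: {distill_kl(teacher, student):.4f}")
```

Note the loss only sees the final distributions; nothing in it rewards the student for reproducing the teacher's intermediate, layer-by-layer computation.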
With CXL, we may eventually be able to connect RAM as an external PCIe dongle via USB/Thunderbolt. Imagine a Raspberry Pi with 1TB of RAM.