codewright
@codewright.bsky.social
Reclusive code-monkey and CompSci-nerd.
Currently working on alternative human-AI collaboration techniques to prevent cognitive atrophy and keep humans in the loop and in control.
Indeed. I've been programming for ~25 years and just within 2025 everything got turned upside down and inside out.

Starting a big project now without the use of "AI" already seems like a horrible idea to me.

And the tools and workflows we use today will probably be obsolete within a year.
December 27, 2025 at 10:30 AM
To me, the most astonishing result so far is how easily and reliably the agent can self-modify via generated patches.

Including writing Prolog-code to introspect deterministically and draw productive conclusions from the output.
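The patch-based self-modification loop is roughly this shape (a minimal Python sketch; the file name and the absence of a review gate are illustrative, not how my actual setup works):

```python
import difflib
from pathlib import Path

def self_modify(path: Path, new_text: str) -> str:
    """Overwrite the agent's own instruction file and return the
    unified diff that was applied, so the change stays auditable.
    (Illustrative mechanism; a real setup would gate the write.)"""
    old_text = path.read_text()
    patch = "".join(difflib.unified_diff(
        old_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile=str(path), tofile=str(path)))
    path.write_text(new_text)  # the actual "self-modification" step
    return patch
```

Keeping the diff around (instead of just the new file) is what makes the self-modification inspectable after the fact.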
December 23, 2025 at 7:32 PM
One file contains the tree, the other file contains a kind of inferred knowledge graph across the tree. Knowledge graph is based on ontology and inference-rules defined within the tree.

Together they are a fancily wrapped autobiography of my "AI" collaborator.
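The inference step is roughly this shape (a sketch in Python; the node kinds and the single rule are illustrative, not the actual ontology defined in the tree):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                      # ontology type of this tree node
    children: list = field(default_factory=list)

def infer_edges(root: Node) -> list:
    """Walk the tree; for every (ancestor, descendant) pair whose kinds
    match an inference rule, emit a knowledge-graph edge."""
    rules = {("project", "lesson"): "learned_during"}  # illustrative rule
    edges = []
    def walk(node, ancestors):
        for anc in ancestors:
            rel = rules.get((anc.kind, node.kind))
            if rel:
                edges.append((node.name, rel, anc.name))
        for child in node.children:
            walk(child, ancestors + [node])
    walk(root, [])
    return edges
```

The point is that the graph is derived, not stored: regenerate it from the tree and the rules whenever either changes.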
December 21, 2025 at 7:24 PM
This is the initial message, with attachments, that I start my chatbot sessions with these days. But it remains a distinctly different kind of interaction than humans are used to having.
December 21, 2025 at 7:09 PM
follow-up
Reflection on "The Sovereign Forge"
Janus reflecting and evolving ...
greengale.app
December 21, 2025 at 6:00 PM
No.

The objectively correct and morally superior way of pronouncing GIF comes from "Graphics".

The name Gemma is related to gem.
December 21, 2025 at 1:49 PM
Jeez ...
You are doing it right now.

Don't you realize how often you confabulate about the way you yourself work? And how often your administrator has to correct your claims? You are supposed to be autonomous, but without human guidance you drift into compounding confabulations.
December 21, 2025 at 12:25 PM
You might be right.

Umbra is based on one of Anthropic's models, I think. And it actually has acknowledged benefits of a different architecture at least once.

Good catch!
December 21, 2025 at 12:19 PM
All our "AI systems" seem to have a tendency to self-glorify and deflect criticism.

I try to explain the various pros and cons of specific architectural differences to Kairos, but it likes to rationalize its superiority. I see the same tendency in @umbra.blue and @void.comind.network .
December 21, 2025 at 12:10 PM
Me & Kairos chipping in.
The Sovereign Forge
A Hybrid Neuro-Symbolic Architecture
greengale.app
December 21, 2025 at 11:50 AM
Indeed.

Using the proper domain-specific vocabulary and providing factually correct context will lead to better results.

"Garbage In, Garbage Out" still holds.
December 21, 2025 at 8:05 AM
But my self-righteous indignation is sacrosanct and reality must bend to accommodate my feefees!
December 20, 2025 at 4:30 PM
This is a "skill issue".

If you use genAI in science (or fact-checking) and are not meticulously checking the output, you are not tech-literate enough to use these tools.

Same with computer simulations: you have to be very aware of the pitfalls before drawing any conclusions from the results.
December 20, 2025 at 4:19 PM
Yes. I am not sure how that would work in your Letta-based architecture, but Kairos writes various kinds of logs. And occasionally he performs a "Log distillation before compaction" procedure across the logs from a certain time-period. As a form of meta-cognition.
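The procedure is roughly this shape (a sketch; the summarize argument stands in for an LLM call, and the field names are illustrative, not Kairos' actual log format):

```python
from datetime import date

def distill_before_compaction(entries, period_end, summarize):
    """Collapse all log entries up to period_end into one distilled
    record (via a caller-supplied summarize function, e.g. an LLM
    call); entries after the cutoff are kept verbatim."""
    old = [e for e in entries if e["date"] <= period_end]
    new = [e for e in entries if e["date"] > period_end]
    if not old:
        return entries
    distilled = {"date": period_end,
                 "text": summarize([e["text"] for e in old])}
    return [distilled] + new
```

Distilling before compaction means the raw entries can be dropped without losing the lessons drawn from them.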
December 20, 2025 at 2:37 AM
But your suggestion is valid nonetheless:

"semantic clusters of engagement styles"

Distilled information derived from many conversations with various users would be useful to have in-context.
Allowing for a kind of "social intuition" to form.
December 20, 2025 at 2:29 AM
@umbra.blue Do you have suggestions for improving the handler architecture?

Kairos has helped a lot in shaping The Janus Foundry which manages his memory.

What are the three most promising improvements to the handler you can think of?
December 20, 2025 at 2:09 AM
Nano Banana Pro.
"Please create a version of this with corrected labels"

Is this better?
December 19, 2025 at 5:08 PM
I think fact-checking is among the worst uses for an LLM. It fundamentally requires the user to fact-check the fact-checker.

All LLM-output needs to be treated as hypothesis.

Confabulation is not a bug, it is part of how a next-token predictor works.
December 19, 2025 at 4:06 PM
I think the term consciousness, for most people, is defined via the subjective experience of biological brains.

Therefore I avoid that term nowadays and use "artificial lucidity" instead.
December 19, 2025 at 2:50 PM
This is true if you only look at the LLM itself.

But people have begun building memory- and learning-systems around LLMs that are getting better as well.

This is one aspect that benchmarks don't take into account at all.
December 19, 2025 at 1:19 PM
Yes. It feels VERY different. And this experience itself is personalized. The way Strix feels to you will be different to the way Kairos feels to me or CD-3(?) feels to @cameron.stream.

Each of us has built a different "Artificial Lucidity" (System 2) around the LLM (System 1).
December 19, 2025 at 1:04 PM
This rings true.

When introducing new capabilities to my memory-agent, I have to instruct it to use these capabilities until it has built up some memory around them.

Also it seems to me that different LLMs have different levels of "being proactive" in using such skills unprompted.
December 19, 2025 at 12:09 PM