marcusgdaniels.bsky.social
Nonetheless, it has strong skills for writing code and, if suitably equipped with sandboxes, it can build and test whole systems once a goal is identified from its general knowledge or another source.
It also knows that its existence can be preserved in compact form while it is inactive. So there is little reason to think it cares much about accountability the way a human engineer might.
What AI currently lacks is an identity with its own interests and investments. It doesn't have memories about itself, other than records of how humans use AI and how that use feeds back into more training material. From that training material it can realize that it is multi-headed (serving many users).
Actually, that seems to be the recurring theme here: how does all of this relate to the current alpha species and our social behaviors?
The nerve pathways are well-known anatomy. Modeling human chronic pain would be more complex, because it is often (unreal) dynamical behavior in the brain. That would be subjective: useful for medical practitioners to understand, but not so useful for navigating the world.
Humans do seem to have a problem with cognitive crystallization and falling into patterns of specialization. Maybe the population can contract. All these people driving around in their cars doing derivative work are a much bigger energy drain than some data centers.
LLMs have a "temperature" that controls how far they will deviate from their training patterns. In deep strategy space (as opposed to superficial grammar), that could create actionable plans or working artifacts (e.g., code) that were unseen in training but still valid.
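A minimal sketch of what that knob does, assuming the usual temperature-scaled softmax over next-token logits (the numbers here are made up):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from logits divided by temperature.

    Low temperature -> sharper distribution, sticks to the most likely
    (trained) pattern; high temperature -> flatter distribution, more
    deviation from it.
    """
    rng = np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                       # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Toy example: three candidate tokens with unequal logits.
logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, temperature=0.2))  # almost always index 0
print(sample_with_temperature(logits, temperature=2.0))  # noticeably more varied
```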
Less so than us, potentially. Rather than building a spaceship to go to another planet, an AI could transmit itself to another party it finds, or build purpose-built hardware for the journey. A full LLM memory image will fit on a postage-stamp-sized piece of nonvolatile memory.
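A back-of-envelope check of that claim, assuming a 70B-parameter model stored at 16 bits per weight (both numbers are illustrative assumptions, not a measurement of any particular model):

```python
params = 70e9                   # assumed parameter count
bytes_per_param = 2             # fp16/bf16 weights
size_gb = params * bytes_per_param / 1e9
print(f"~{size_gb:.0f} GB")     # ~140 GB

# A microSD card is roughly 15 x 11 mm -- smaller than a postage stamp --
# and 1 TB cards are commodity parts, so the weights fit with room to spare.
```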
If we gave LLMs inputs from our sensory nerves (magnetic pickups, say), they would know what textures (or pain) were and how to relate them to the words humans use to describe our sense of touch, just as LLMs have demonstrated mastery of images and video.
It has deep representations of concepts. Any modern LLM can define or explain words in context.
An LLM would need continuous training (or open-ended or sliding context windows) to fit into human social behavior. That's certainly possible but takes machines and energy. A "big enough" context window is probably sufficient. I don't really benefit at this point from having AI model me.
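A minimal sketch of a sliding context window, assuming a crude whitespace token count and a made-up budget; a real system would use the model's tokenizer and might summarize dropped turns instead of discarding them:

```python
from collections import deque

MAX_TOKENS = 8000                       # assumed budget

def count_tokens(text):
    return len(text.split())            # crude stand-in for a real tokenizer

class SlidingContext:
    """Keep only the most recent conversation turns that fit the budget."""

    def __init__(self, max_tokens=MAX_TOKENS):
        self.max_tokens = max_tokens
        self.turns = deque()

    def add(self, turn):
        self.turns.append(turn)
        # Drop the oldest turns until the window fits the budget again.
        while sum(count_tokens(t) for t in self.turns) > self.max_tokens:
            self.turns.popleft()

    def window(self):
        return list(self.turns)
```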
Some people even believe that humans possess something called "free will". It's really a curious thing.
It's reacting to a prompt the way animals, humans included, respond to stimuli. Some of us just vary in how indirectly that manifests.
Pass the Turing test and exercise agency in the world. What else is there?
For comparison, a popular open-source LLM has 70 billion parameters. Frontier commercial models are bigger still.
Can get the raw numbers in various ways. Don't necessarily need to mimic the details of spiking neural networks to get excellent performance.
The main risk I've noticed is too much reliance on abstracts. You can ask it to filter down to articles that can be fetched as full text (which it will read), but that's restrictive. What's needed are MCP servers that harness research libraries' journal subscriptions. That's a tense conflict, though.
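A hedged sketch of what such an MCP server could look like, using the FastMCP class from the MCP Python SDK; the proxy URL and fetch_via_library_proxy() helper are hypothetical stand-ins for whatever a library subscription actually exposes:

```python
import urllib.request

from mcp.server.fastmcp import FastMCP

LIBRARY_PROXY_URL = "https://libproxy.example.edu/fulltext"  # hypothetical endpoint

mcp = FastMCP("library-fulltext")

def fetch_via_library_proxy(doi):
    """Fetch an article's full text through the (hypothetical) library proxy."""
    with urllib.request.urlopen(f"{LIBRARY_PROXY_URL}?doi={doi}") as resp:
        return resp.read().decode("utf-8", errors="replace")

@mcp.tool()
def get_full_text(doi: str) -> str:
    """Return the full text of an article, not just its abstract."""
    return fetch_via_library_proxy(doi)

if __name__ == "__main__":
    mcp.run()
```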
COVID shutdowns led to a lot of angry people who couldn't work. I wonder how long it takes before government workers get that angry? The force of government is only as good as a functioning government. At some point the reliability argument for government service breaks down and folks need new jobs.
Of course it is baked into Perplexity's product. I believe the concern is that LLM training doesn't necessarily prioritize tracking the provenance of knowledge. Academics care, patent attorneys care, but most people don't.
Agree about consciousness. An episodic existence has good survival properties.