Would've been miserably hard in the past without coding agents. New methods are emerging!
The machine built by feeding it decades of data and news stories is like "shut the fuck up, that shit can't be true."
Immune to frog boiling.
AI Samuel Adams: "I have some thoughts on which method of hanging is best."
But can you articulate those priors, in all their n-dimensional hypervolumetric glory, to an LLM?
And do you have the metacognition to understand how to communicate it to a human-like-but-distinctly-different LLM so it translates correctly?
Do you know how you know things? How you know how to debug? How to convey that to an agent?
It's explicating tacit knowledge.
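One concrete way to picture "explicating tacit knowledge": writing your debugging priors down as an explicit system prompt instead of carrying them in your head. A minimal sketch, assuming the openai Python client; the model name and the heuristics themselves are illustrative placeholders, not anyone's canonical list.

```python
# A minimal sketch: turning tacit debugging priors into explicit
# instructions an agent can apply. Assumes the `openai` Python client;
# the model name and the heuristics below are illustrative placeholders.
from openai import OpenAI

# Tacit knowledge made explicit: the priors an experienced dev applies
# without thinking, written down so a model can apply them too.
DEBUGGING_PRIORS = """\
When debugging, apply these priors before proposing a fix:
- Reproduce the failure first; never patch what you haven't seen fail.
- Suspect the most recently changed code before suspecting the framework.
- Read the actual error message and stack trace literally.
- If the behavior seems "impossible", check that you're running the code you think you are.
- Prefer the smallest change that makes the failing case pass.
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_agent(bug_report: str) -> str:
    # The explicit priors ride along as the system message on every call.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": DEBUGGING_PRIORS},
            {"role": "user", "content": bug_report},
        ],
    )
    return response.choices[0].message.content
```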
x.com/MattLutzPhi/...
We found that telling the AI "you are a great physicist" doesn't make it significantly more accurate at answering physics questions, nor does "you are a lawyer" make it worse.
However, you aren't going to make the AI suddenly better through role-play.
Paper: papers.ssrn.com/sol3/papers....
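For intuition, the finding boils down to an A/B test: same questions, with and without a persona in the system prompt, accuracy compared. A rough sketch of that kind of setup, assuming the openai client; the question format, loader, and model name are hypothetical placeholders, and this is not the paper's actual harness.

```python
# Rough sketch of a persona A/B test like the finding above: score the
# same multiple-choice physics questions with and without a persona in
# the system prompt. Assumes the `openai` client; the model name and
# question format are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    "none": "",
    "physicist": "You are a great physicist.",
    "lawyer": "You are a lawyer.",
}

def answer(persona: str, question: str) -> str:
    messages = []
    if persona:  # omit the system message entirely for the control arm
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content.strip()

def accuracy(persona: str, questions: list[tuple[str, str]]) -> float:
    # questions: (prompt, expected answer letter) pairs
    hits = sum(answer(persona, q).startswith(gold) for q, gold in questions)
    return hits / len(questions)
```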
You don't want to hand these tools to (0,1)-skill developers; you just can't afford it. Bad devs are now worse than ever, while great ones spin gold on four worktrees simultaneously.
in this installment we try to figure out how to at least name the phenomenon where people turn into slop conjurers
friend: That's a really perceptive question - I think it gets at the heart of the current problem with LLMs! If you'd like, I can break down the origins and implications of this particular problem