(@isntitvacant elsewhere)
(Though this could change. If we get to a place where we put the LLM on local hardware and get it to produce acceptable output deterministically, that concern goes away)
For right now, the lack of determinism in LLM behavior & output means the details of the output PL's behavior peek through.
It feels odd to generate human-oriented text just for machine consumption (but, as you know, it's one of CS's most cherished traditions, along with considering things harmful)
(So "offloading" the mental model of a system to an agent is an easy-but-hazardous pattern)
(So "offloading" the mental model of a system to an agent is an easy-but-hazardous pattern)
The agent isn’t a domain expert; it can’t be. It reinvents its understanding of the code from scratch every session.