Camilo Libedinsky
@libedinsky.bsky.social
Neuroscientist in Singapore. Interested in intelligence, both biological and artificial.
Reposted by Camilo Libedinsky
1760: Bayes: You should interpret what you see in the light of what you know.
1780: Galvani: Nerves have something to do with Electricity.
1850: Phineas Gage et al.: Different parts of the brain do different things.
December 1, 2024 at 8:29 PM
It feels like we're back in the 40s
December 3, 2024 at 12:39 AM
I'm familiar with aspects of this literature, but it's quite possible that I'm misinterpreting your post. Is there something specific about the scenario I postulate that is inconsistent with any of the 4Es?
December 1, 2024 at 11:39 PM
What cases do you have in mind? I can imagine some functions where ML could generalize so long as the out-of-training data follows the pattern of the training set. But with more complex functions, generalization should fall off as you deviate further from the training set.
December 1, 2024 at 3:46 PM
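A minimal sketch of the falloff described in the post above, assuming nothing from the thread beyond the general claim: fit a flexible model to samples of a smooth function on a bounded interval, then compare its error inside versus outside that interval. The function (sin), the polynomial degree, and the intervals are arbitrary illustrative choices, not anything taken from the conversation.

```python
# Illustrative sketch: in-distribution vs. out-of-distribution error for a
# flexible model fit on a limited training range.
import numpy as np

rng = np.random.default_rng(0)

# Training data: noisy samples of sin(x) on [0, 2*pi]
x_train = rng.uniform(0, 2 * np.pi, 200)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.shape)

# Fit a degree-7 polynomial (a stand-in for any flexible learned model)
coeffs = np.polyfit(x_train, y_train, deg=7)
model = np.poly1d(coeffs)

def rmse(x):
    """Root-mean-square error of the fitted model against the true function."""
    return np.sqrt(np.mean((model(x) - np.sin(x)) ** 2))

x_in = np.linspace(0, 2 * np.pi, 500)           # within the training range
x_out = np.linspace(2 * np.pi, 3 * np.pi, 500)  # beyond the training range

print(f"RMSE inside training range:  {rmse(x_in):.3f}")
print(f"RMSE outside training range: {rmse(x_out):.3f}")  # typically far larger
```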
An example of an out-of-training problem would be speaking a language that you've never encountered before. Absurd example, but definitely out of training :)
December 1, 2024 at 3:41 PM
I wouldn't count that. That would be akin to asking an LLM a question it has never encountered before and claiming that a proper response implies out-of-training solution. I'd say the answer is fully within the training set (word association).
December 1, 2024 at 3:38 PM
The OP is an example of LLMs failing, which is not hard to imagine. What I'm having a hard time envisioning is a human (or any animal) solving a problem outside of their training set.
December 1, 2024 at 2:39 PM
I see the intuition behind your comment. But I feel that this intuition breaks down when you go down to specific examples of problems with presumed out-of-training-set solutions. I just can't think of an example.
December 1, 2024 at 2:11 PM
Yeah, I get that. I still prefer to attempt defining. But I guess it's just a personal preference :)
December 1, 2024 at 1:56 PM
Oh oops. I always thought definitions helped think about issues properly.
December 1, 2024 at 1:33 PM
So do you then subscribe to something along the lines of definition 2 to assign intelligence? Or something else?
December 1, 2024 at 1:08 PM
The problem I see is that under this definition you would ascribe intelligence to an LLM used by a robot that is able to sense and interact with the environment. But that doesn't seem all that different from our current LLMs.
December 1, 2024 at 12:49 PM
LLMs would have some of the first, but none of the second. Could the second definition be closer to what you were thinking?
December 1, 2024 at 11:31 AM
My sense is that the term intelligence is used in 2 ways: (1) the ability to perform goal-directed actions (where proficiency, breadth, speed of performance, and speed of learning measure independent aspects of intelligence), and (2) the meaning/grounding of symbols and actions, on the other...
December 1, 2024 at 11:29 AM
It's tricky though. It's hard to argue that humans or other animals solve new problems that are not in their training sets (can you think of an example?). And re the second point, it feels arbitrary to allow only human-like errors in the definition of intelligence.
December 1, 2024 at 11:22 AM