1760: Bayes: You should interpret what you see in the light of what you know. 1780: Galvani: Nerves have something to do with electricity. 1850: Phineas Gage et al.: Different parts of the brain do different things.
December 1, 2024 at 8:29 PM
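For reference, the Bayes entry amounts to Bayes' rule: what you conclude after seeing data weighs the likelihood of that data by what you already believed. A minimal statement, with H a hypothesis and D the observed data:

```latex
% Bayes' rule: the posterior combines what you see (the likelihood)
% with what you already know (the prior).
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}
```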
I'm familiar with aspects of this literature, but it's quite possible that I'm misinterpreting your post. Is there something specific about the scenario I postulate that is inconsistent with any of the 4Es?
December 1, 2024 at 11:39 PM
What cases do you have in mind? I can imagine some functions where ML could generalize so long as the out-of-training data follows the pattern of the training set. But with more complex functions, generalization should fall off as you deviate further from the training set.
December 1, 2024 at 3:46 PM
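A toy illustration of that last point, since it recurs throughout the thread: fit a flexible model to data drawn from a bounded range, then test it inside and outside that range. This is only a sketch under assumed choices (a sine target, a degree-9 polynomial, the ranges below), not anything specific from the discussion:

```python
# Sketch: a model fit on a bounded training range can interpolate well,
# but its error grows as test inputs move further outside that range.
# The target function, model class, and ranges are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Training data: noisy samples of sin(x), drawn only from [0, 2*pi]
x_train = rng.uniform(0, 2 * np.pi, 200)
y_train = np.sin(x_train) + 0.05 * rng.normal(size=x_train.shape)

# Fit a flexible model (degree-9 polynomial) to the training range
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

# Compare error inside the training range vs. progressively further outside it
for lo, hi, label in [(0, 2 * np.pi, "inside training range"),
                      (2 * np.pi, 3 * np.pi, "just outside"),
                      (3 * np.pi, 4 * np.pi, "further outside")]:
    x_test = np.linspace(lo, hi, 500)
    mse = np.mean((model(x_test) - np.sin(x_test)) ** 2)
    print(f"{label:>22}: mean squared error = {mse:.3f}")
```

The same qualitative pattern is what "generalization should fall off as you deviate further" describes; swap in any model class and the gap between in-range and out-of-range error is the quantity at issue.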
I wouldn't count that. That would be akin to asking an LLM a question it has never encountered before and claiming that a proper response implies an out-of-training solution. I'd say the answer is fully within the training set (word association).
December 1, 2024 at 3:38 PM
OP is an example of LLMs failing, which is not hard to imagine. What I'm having a hard time envisioning is a human (or any animal) solving a problem outside of their training set.
December 1, 2024 at 2:39 PM
I see the intuition behind your comment. But I feel that this intuition breaks down when you get down to specific examples of problems with presumed out-of-training-set solutions. I just can't think of an example.
December 1, 2024 at 2:11 PM
The problem I see is that under this definition you would ascribe intelligence to an LLM used by a robot that is able to sense and interact with the environment. But that doesn't seem all that different from our current LLMs.
December 1, 2024 at 12:49 PM
My sense is that the term intelligence is used in 2 ways: (1) abilities to do goal-directed actions (where proficiency, breadth, speed of performance, and speed of learning measure independent aspects of intelligence), and (2) meaning/grounding of symbols and actions, on the other...
December 1, 2024 at 11:29 AM
It's tricky though. It's hard to argue that humans or other animals solve new problems that are not in our training sets (can you think of an example?). And re the second point, it feels arbitrary to allow human-like errors only in the definition of intelligence.
December 1, 2024 at 11:22 AM