Abel_TM
@abeltm.bsky.social
Research Scientist. Implementing reasoning in AI. Theory and implementation of open-ended reasoning algorithms for long-term planning, robotics, math, protein design, and science
I know I don't have a literary mind, and I don't have the ability to mix colors on a canvas... Equally, there are people, good at many other things, who lack a rigorous scientific mindset and education. These theories of consciousness are full of holes; that ship sinks in the waters of science
April 16, 2025 at 9:46 PM
Indeed, there is enough knowledge to make significant progress, but it is hampered by:

- Specific people and groups with agendas that benefit from the status quo

- The concentration of most of the talent in a few companies with restricted research scope

- The lack of scientific rigor and critical thinking
January 5, 2025 at 3:02 PM
You can still use the tool if you carefully review the output (or use a third-party verifier), even if you don't know at all how the tool works. In that case it is used mostly as a probabilistic idea generator.
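A minimal sketch of that generate-and-verify pattern (generate_candidate and verify are hypothetical stand-ins for an LLM call and a trusted checker; an illustration, not the post's own code):

```python
# "Probabilistic idea generator + verifier": the generator is never
# trusted; only the verifier's verdict is. Both callables are
# hypothetical placeholders.
def solve(problem, generate_candidate, verify, attempts=32):
    for _ in range(attempts):
        candidate = generate_candidate(problem)  # e.g., an LLM sample
        if verify(problem, candidate):           # independent, trusted check
            return candidate
    return None  # no verified candidate within the budget
```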
December 30, 2024 at 2:15 PM
If the question translates into: can we use a tool to derive valid conclusions if we don't know its scope of applicability?
The answer there would be NO.
E.g., current AI has no guarantees of correctness strong enough to assume that 20 pages of mathematical manipulations will not include a mistake
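For contrast, a tiny Lean 4 example (my illustration, not from the post) of what a real correctness guarantee looks like: the kernel checks every step, so the length of the derivation does not erode trust:

```lean
-- A machine-checked step: the Lean kernel verifies this proof, so an
-- invalid manipulation cannot slip through, however long the derivation.
theorem checked_step (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```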
December 30, 2024 at 2:15 PM
We must bear in mind that while we are forecasting superhuman intelligence, current systems have not shown capabilities in asking questions or formulating hypotheses. Somehow, some people think these come along with better problem solving rather than being architectural requirements
December 18, 2024 at 9:46 AM
I mean the way the connectivity is configured. E.g., current architectures don't allow for an arbitrary number of reasoning steps (open-endedness).

Same for the lack of robust reasoning: it should be part of the architectural design, not something expected to be consistently "discovered" during training
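A toy contrast of the two regimes (names hypothetical; a sketch of the idea, not a real architecture):

```python
# Fixed-depth computation vs. open-ended reasoning, in caricature.
def fixed_depth_forward(x, layers):
    """A standard feed-forward stack: depth is frozen at design time."""
    for layer in layers:          # always exactly len(layers) steps
        x = layer(x)
    return x

def open_ended_reason(state, step, is_solved, max_steps=10_000):
    """Steps until the *problem* says stop, not the architecture."""
    for _ in range(max_steps):    # safety cap, not a design limit
        if is_solved(state):
            return state
        state = step(state)
    raise RuntimeError("no solution within the step budget")
```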
December 17, 2024 at 8:11 PM
We must be clear that current systems represent a very specific architecture of ANNs by design.

Even if we could abstract real neurons with artificial ones, the essence of a system's dynamics lies in its architecture, which is radically different in current ANNs compared to the brain
December 17, 2024 at 7:46 AM
Saying that humans are not a form of general intelligence isn't about putting an isolated human in a test tube; it amounts to asserting that there are subjects in which humanity can't make progress given enough time and technology. Are there such areas? cc @ylecun.bsky.social
9/9
December 12, 2024 at 1:46 PM
Some mistakenly expect that such capability must be encapsulated in *single* intelligent agents, but ‘general intelligence’ always relies on three pillars: it must be social, generational, and technological.
8/
December 12, 2024 at 1:46 PM
- Intelligence is a collection of pattern-manipulation mechanisms, and detecting similar patterns in dissimilar environments is what we call ‘generalization’. Unbounded mechanisms of pattern-finding constitute ‘general’ intelligence.
7/
December 12, 2024 at 1:46 PM
Current systems are no replacement for scientific research, since the topic of problem formulation is not even on the table right now. A theory-less science is as weak as hypothesis-lacking experiments
6/
December 12, 2024 at 1:46 PM
- I am convinced that efficient intelligent systems (comparable to biological ones) will come from robust models of cognition. Several people (Chollet @fchollet.bsky.social, Y. LeCun, me, others) are working in this direction, and sooner rather than later we'll see prototypes from these projects
5/
December 12, 2024 at 1:46 PM
The dystopian perspective is stronger in some places than in others. One thing should be clear: we should not expect technology to come to our rescue if our values are the ones in disarray, misaligned with our own interests
4/
December 12, 2024 at 1:46 PM
Third, a fundamentally nonaligned rogue ASI that treats humans the way we treat other species poses a deep moral and ethical question. What is the relation between economic and technological progress and human values?
3/
December 12, 2024 at 1:46 PM
- Would ASI decide to kill all humans? Clearly, any advanced AI will, correctly, conclude that many of our critical problems are of our own making, but as rightly pointed out, this realization is much more complex than noticing that removing humans is not a solution for humans
2/
December 12, 2024 at 1:46 PM
I was about to hit like, but the perfect number stopped me...
December 10, 2024 at 9:03 AM
A solid one. In the first 15 min I realized that my anti-AI-hype view was aligned with your position. Further, I like your contrarian take on the standard AI dogma.
December 5, 2024 at 3:56 PM
Exactly. That is what I am proposing in my framework, which I will demo shortly (starting in MiniGrid): the set of rules (the action space) can be modified ad hoc, and the system adapts to the new conditions with robust reasoning
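The framework itself isn't public, but here is a minimal sketch of that "mutable rules" idea using the standard gymnasium and minigrid packages (the wrapper and its names are my own illustration):

```python
# Hypothetical sketch: swapping the allowed "rules" (action set) of a
# MiniGrid environment at runtime, so an agent must adapt on the fly.
import gymnasium as gym
import minigrid  # noqa: F401  (import registers the MiniGrid-* envs)


class MutableActionSpace(gym.ActionWrapper):
    """Exposes a reduced action set that can be changed ad hoc."""

    def __init__(self, env, allowed):
        super().__init__(env)
        self.set_rules(allowed)

    def set_rules(self, allowed):
        self._allowed = list(allowed)
        self.action_space = gym.spaces.Discrete(len(self._allowed))

    def action(self, act):
        return self._allowed[act]  # map reduced index -> base action


env = MutableActionSpace(gym.make("MiniGrid-Empty-8x8-v0"),
                         allowed=[0, 1, 2])  # turn left/right, forward
obs, info = env.reset(seed=0)
obs, reward, term, trunc, info = env.step(env.action_space.sample())
env.set_rules([0, 1, 2, 5])  # rule change mid-run: 'toggle' now allowed
```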
December 5, 2024 at 10:43 AM
E.g.: "Find a possible sequence of movements from the start of a game of chess that leads to white pieces delivering checkmate in four moves. Only knights and pawns can be moved"

- GPT(4o, o1-mini, o1-preview): Impossible
- Gemini-1.5-Pro-002: 1. Nf3 Nf6 2. Ng1 Ng8 3. f4 e5 4. g4 h5# ???
- Claude:
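This kind of output is exactly where a mechanical checker helps. A small sketch with the python-chess package (my illustration), which replays a proposed line and rejects it if any move is illegal or the claimed mate never happens:

```python
# Verify an LLM's proposed chess line instead of trusting its claim.
import chess  # the python-chess package

def verify_line(san_moves, expect_mate=False):
    """Replay SAN moves from the initial position; reject illegal moves."""
    board = chess.Board()
    for san in san_moves:
        try:
            board.push_san(san)
        except ValueError as err:  # unparsable or illegal move
            return False, f"invalid move {san!r}: {err}"
    if expect_mate and not board.is_checkmate():
        return False, "line is legal but does not end in checkmate"
    return True, "ok"

# Gemini's line above: every move is legal, but 4...h5 is no mate at all.
print(verify_line(["Nf3", "Nf6", "Ng1", "Ng8", "f4", "e5", "g4", "h5"],
                  expect_mate=True))
```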
December 5, 2024 at 10:31 AM
In flexibility, the system needs to adapt online to new conditions and not rely only on pretraining (e.g., a broken leg in a multi-legged robot, or kids quickly learning to play 2x2 chess, exchanging pieces).
December 5, 2024 at 10:31 AM
In accuracy, we need correct state-to-state transitions; I see that in your work ‘hallucinations’ are reduced to less than 0.1%.

The challenge is that a single invalid transition (e.g., in theorem proving) renders the whole output invalid: even at a 0.1% per-step error rate, a 1,000-step derivation is fully correct only with probability 0.999^1000 ≈ 37%
December 5, 2024 at 10:31 AM
Interesting results on reasoning potential with LLMs. I regularly use chess to test reasoning abilities, and models usually ‘hallucinate’ invalid moves and positions.

From my work on general reasoning agents I see two main required properties: accuracy and flexibility.
December 5, 2024 at 10:31 AM
Still, if some future architectures require little to no tuning for a new task, it would seem weird to assign all the credit to the designers of the general architecture.

Highly autonomous, self-learning AIs could be creative, discover new things, and still be just tools
December 4, 2024 at 10:45 PM
Agree, I also see AI as a tool. Nowadays we see a lot of results attributed to the AI when they actually required a lot of architecture design, data selection, and fine-tuning.
December 4, 2024 at 10:45 PM