We need better methods NOW, before digital twins become so convincing we stop asking how they work.
📚 Full ref: arxiv.org/abs/2509.17280
📄 doi.org/10.1016/j.neuron.2025.09.039
- Grounding models in neuroscience/cognitive science theory
- Revealing computations through interpretability/XAI studies
- Generating testable hypotheses to drive experiments
Challenge: turning data-fitting machines into theory-bearing instruments.
Even with perfect predictions, we risk replacing one black box (the brain) with another (a deep neural network).
Explanatory value requires more than fit.
If a model predicts like a brain, has it discovered how the brain works?
Tempting to think so.