M Ganesh Kumar
mgkumar138.bsky.social
Computational Neuroscience, Reinforcement Learning. Postdoctoral Fellow @ Harvard. Previously @ A*STAR & NUS. 🇸🇬
Interestingly, we found no significant difference in under- and over-updating behavior in schizophrenia patient data (Nassar et al., 2021). Instead, analyzing the behavior with the delta area metric revealed a significant difference, suggesting the utility of model-guided analysis of human behavioral data.
March 27, 2025 at 5:12 PM
We used a fixed point finder algorithm and found that suboptimal agents (lower delta area values) exhibited a smaller number of unstable fixed points than more optimal agents. The number of stable fixed points remained consistent across delta area values.
March 27, 2025 at 5:10 PM
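The fixed-point search above can be sketched as follows. This is an illustrative reimplementation, not the thread's code: assuming a vanilla tanh RNN with update F(h) = tanh(W h + b), it minimizes q(h) = ½‖F(h) − h‖² by gradient descent from random initial states, then classifies each fixed point by the spectral radius of the Jacobian.

```python
import numpy as np

def find_fixed_points(W, b, n_inits=50, lr=0.1, steps=2000, tol=1e-6, seed=0):
    """Search for fixed points h* = tanh(W h* + b) of a vanilla RNN by
    gradient descent on q(h) = 0.5 * ||tanh(W h + b) - h||^2."""
    rng = np.random.default_rng(seed)
    found = []
    for _ in range(n_inits):
        h = rng.standard_normal(W.shape[0])
        for _ in range(steps):
            f = np.tanh(W @ h + b)
            r = f - h                              # fixed-point residual
            J = (1 - f**2)[:, None] * W            # Jacobian of the update at h
            h = h - lr * (J - np.eye(len(h))).T @ r  # gradient step on q
        if np.sum((np.tanh(W @ h + b) - h) ** 2) < tol:
            found.append(h)
    return found

def is_stable(h, W, b):
    """A fixed point is stable iff all Jacobian eigenvalues lie inside the unit circle."""
    f = np.tanh(W @ h + b)
    J = (1 - f**2)[:, None] * W
    return float(np.max(np.abs(np.linalg.eigvals(J)))) < 1.0
```

Counting the stable and unstable fixed points returned per trained agent gives the quantities compared across delta area values.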
Besides (1) the reward discount factor, we explored (2) prediction error scaling, (3) the probability of disrupting RNN dynamics, and (4) rollout buffer length. Each hyperparameter differently influenced suboptimal decision-making behavior, which we quantified with a metric we termed delta area.
March 27, 2025 at 5:08 PM
Agents have to learn two solutions: predict changes in target location (change-points) and ignore outliers (oddballs). Decreasing the reward discount factor caused agents to under-update and over-update in each condition, respectively, replicating the maladaptive behavior seen in patients.
March 27, 2025 at 5:04 PM
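The two conditions can be illustrated with a toy generative process in the style of Nassar-type predictive-inference tasks; all parameters here (hazard rate, noise scales, target range) are assumptions for illustration, not the thread's task settings. In the change-point condition a surprising outcome signals a jump in the hidden mean and should drive a large update; in the oddball condition the mean drifts slowly and surprising outcomes are outliers to be ignored.

```python
import numpy as np

def generate_trials(condition, n_trials=200, hazard=0.1, noise_sd=10.0,
                    outlier_sd=60.0, drift_sd=2.0, seed=0):
    """Toy sketch of the two task conditions (illustrative parameters):
    'changepoint' -> the hidden mean jumps with probability `hazard`;
    'oddball'     -> the mean drifts slowly, with occasional outliers."""
    rng = np.random.default_rng(seed)
    mean = rng.uniform(0, 300)
    means, outcomes = [], []
    for _ in range(n_trials):
        if condition == "changepoint":
            if rng.random() < hazard:
                mean = rng.uniform(0, 300)            # abrupt change-point
            x = mean + rng.normal(0, noise_sd)
        else:  # "oddball"
            mean += rng.normal(0, drift_sd)           # slow drift of the mean
            if rng.random() < hazard:
                x = mean + rng.normal(0, outlier_sd)  # outlier to be ignored
            else:
                x = mean + rng.normal(0, noise_sd)
        means.append(mean)
        outcomes.append(x)
    return np.array(means), np.array(outcomes)
```

The optimal policy is condition-dependent: fully update toward surprising outcomes after change-points, but discount them as oddballs when the mean drifts, which is why a single update rule can under-update in one condition and over-update in the other.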
I am speaking at COSYNE 2025. Please check out my talk if you're attending the event! #cosyne2025 #cosyne25
March 27, 2025 at 4:49 PM
Ablation studies in 1D and 2D environments show that inducing these biological representations improves policy convergence and facilitates learning new targets.
December 17, 2024 at 7:48 PM
Noisy updates to place field parameters drive neural drift, while a compensatory plasticity mechanism maintains stable navigation behavior.
December 17, 2024 at 7:47 PM
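A toy sketch of this idea (not the thread's model): Gaussian place field centers receive additive noise each "day", so the population code drifts, but re-fitting a linear position readout after each drift step, standing in for compensatory plasticity, keeps decoding accurate.

```python
import numpy as np

def drift_with_compensation(n_days=20, n_cells=50, drift_sd=0.02, width=0.1, seed=0):
    """Toy sketch: place field centers drift with additive noise each 'day',
    but re-fitting a linear readout (compensatory plasticity) keeps the
    decoded position accurate despite the drifting code."""
    rng = np.random.default_rng(seed)
    pos = np.linspace(0, 1, 200)                  # positions on a 1D track
    centers = rng.uniform(0, 1, n_cells)          # initial place field centers
    errors = []
    for _ in range(n_days):
        centers = centers + rng.normal(0, drift_sd, n_cells)  # noisy parameter updates
        # population activity: Gaussian tuning curves at each position
        A = np.exp(-((pos[:, None] - centers[None, :]) ** 2) / (2 * width**2))
        w, *_ = np.linalg.lstsq(A, pos, rcond=None)           # re-fit readout weights
        errors.append(float(np.mean((A @ w - pos) ** 2)))
    return errors
```

The readout error stays small across days even though no individual field is stable, which is the behavioral signature described above.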
A reward maximization objective also causes place fields to grow in size and shift backwards towards the start location, suggesting the development of a reward predictive representation.
December 17, 2024 at 7:47 PM
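The backward shift can be illustrated with a generic eligibility-trace sketch; this is not the reward-maximization objective in the thread, just a minimal plasticity rule showing the direction of the effect. When a cell fires, its field center is pulled toward a decaying trace of recently visited positions, which lags the animal's current position, so over repeated start-to-goal laps the field shifts opposite to the direction of travel.

```python
import numpy as np

def run_laps(n_laps=30, lr=0.05, width=0.1, tau=5):
    """Illustrative eligibility-trace sketch (not the thread's rule): the
    field center is pulled toward recent positions weighted by the cell's
    activity, shifting the field backward over laps on a 1D track."""
    center = 0.6
    track = np.linspace(0, 1, 100)
    history = [center]
    for _ in range(n_laps):
        trace = 0.0
        for x in track:                            # one start-to-goal traversal
            trace = trace * np.exp(-1 / tau) + x   # trace of recently visited positions
            act = np.exp(-(x - center) ** 2 / (2 * width**2))   # Gaussian field activity
            target = trace * (1 - np.exp(-1 / tau))  # normalized (lagging) position trace
            center += lr * act * (target - center) / len(track)
        history.append(center)
    return np.array(history)
```

Because the trace lags the current position, active cells are pulled toward locations visited just before they fired, yielding the backward shift toward the start.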
Place fields rapidly move closer to the target location to increase reward representation. These reorganization dynamics are modulated by the value of a location.
December 17, 2024 at 7:46 PM