Bradley Love
@profdata.bsky.social
Senior research scientist at Los Alamos National Laboratory. Former UCL, UTexas, Alan Turing Institute, Ellis EU. CogSci, AI, Comp Neuro, AI for scientific discovery https://bradlove.org
with @robmok.bsky.social and Xiaoliang "Ken" Luo
November 25, 2025 at 7:35 PM
Intuitive cell types don't necessarily play their ascribed functional role in the overall computation. This is not a message the field wants to hear, as it calls for better baselines, controls, and some reflection. elifesciences.org/reviewed-pre... 2/2
elifesciences.org
November 25, 2025 at 7:29 PM
Working with monkey data, we found neural representations stretched across brain regions to emphasize task-relevant features on a trial-by-trial basis. Spike timing mattered more than spike rate. Deep nets did the same. nature.com/articles/s41... 2/2
Adaptive stretching of representations across brain regions and deep learning model layers - Nature Communications
How the brain adapts its representations to prioritize task-relevant information remains unclear. Here, the authors show that both monkey brains and deep learning models stretch neural representations...
nature.com
November 25, 2025 at 7:20 PM
We developed a straightforward method for combining confidence-weighted judgments from any number of humans and AIs. With Felipe Yáñez, Omar Valerio Minero, @ken-lxl.bsky.social 2/2
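A minimal sketch of one way such a combination could work, assuming each judge reports a two-alternative choice plus a confidence; the log-odds pooling rule and the combine_judgments helper are illustrative choices on my part, not necessarily the paper's exact procedure:

```python
import math

def combine_judgments(judgments):
    """judgments: list of (choice, confidence) pairs, where choice is "A" or "B"
    and confidence is in (0.5, 1.0]. Returns the pooled probability that A is correct."""
    total_log_odds = 0.0
    for choice, conf in judgments:
        p_a = conf if choice == "A" else 1.0 - conf    # probability this judge assigns to A
        p_a = min(max(p_a, 1e-6), 1.0 - 1e-6)          # keep log-odds finite
        total_log_odds += math.log(p_a / (1.0 - p_a))  # accumulate evidence in log-odds space
    return 1.0 / (1.0 + math.exp(-total_log_odds))     # map back to a probability

# Two humans and one LLM weigh in on the same two-alternative question.
print(combine_judgments([("A", 0.7), ("B", 0.6), ("A", 0.9)]))  # leans strongly toward A
```

Pooling in log-odds space keeps the rule symmetric across judges, so it does not matter whether a judgment comes from a human or an LLM, and any number of judges can be added.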
November 25, 2025 at 7:05 PM
[email protected], send to me, or send directly to the Met (the London police), who are investigating: www.met.police.uk. I could see this being super distressing for a vulnerable person, so I hope this does not become more common. For me, it's been an exercise in rapidly learning not to care! 2/2
Home
Your local police force - online. Report a crime, contact us and other services, plus crime prevention advice, crime news, appeals and statistics.
www.met.police.uk
July 18, 2025 at 10:14 PM
Bonus: I found it counterintuitive that (in theory) the learning problem is the same for any word ordering. Aligning proof and simulation was key. New avenues now open up for addressing positional biases, improving training, and knowing when to trust LLMs. With @ken-lxl.bsky.social arxiv.org/abs/2505.08739
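The identity behind the "same learning problem" claim is the chain rule: any well-defined joint distribution factorizes identically under every ordering of the tokens. A short statement in my own notation (not taken from the paper):

```latex
% Chain rule under two orderings of the same joint distribution:
\[
p(x_1, \dots, x_n)
  \;=\; \prod_{i=1}^{n} p\!\left(x_i \mid x_1, \dots, x_{i-1}\right)
  \;=\; \prod_{i=1}^{n} p\!\left(x_{\sigma(i)} \mid x_{\sigma(1)}, \dots, x_{\sigma(i-1)}\right)
  \quad \text{for any permutation } \sigma .
\]
% Divergence between orderings therefore signals that the learned
% "distribution" is not a single well-defined joint distribution.
```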
Probability Consistency in Large Language Models: Theoretical Foundations Meet Empirical Discrepancies
Can autoregressive large language models (LLMs) learn consistent probability distributions when trained on sequences in different token orders? We prove formally that for any well-defined probability ...
arxiv.org
May 14, 2025 at 3:02 PM
When LLMs diverge from one another because of word order (data factorization), it indicates their probability distributions are inconsistent, which is a red flag for trustworthiness. We trace the deviations to positional and locality biases in self-attention. 2/2 arxiv.org/abs/2505.08739
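A hedged sketch of how such a divergence could be probed in practice; the model (gpt2), the example sentences, and the scoring helper are illustrative assumptions, not the paper's evaluation code:

```python
# Score the same pair of sentences under two factorization orders and compare.
# For a model representing one well-defined joint distribution,
# log p(A) + log p(B | A) should match log p(B) + log p(A | B).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sequence_log_prob(text):
    """Sum of next-token log-probabilities for `text` under the model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)  # predictions for tokens 2..n
    targets = ids[:, 1:]                                    # the tokens actually observed
    return log_probs.gather(2, targets.unsqueeze(-1)).sum().item()

a, b = "The cat sat on the mat.", "It purred quietly."
forward = sequence_log_prob(a + " " + b)   # factorization: A then B
reverse = sequence_log_prob(b + " " + a)   # factorization: B then A
# The unconditional probability of the first token is skipped in both orders,
# so treat the gap as a rough inconsistency signal, not an exact joint-probability difference.
print(forward, reverse, abs(forward - reverse))
```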
May 14, 2025 at 3:02 PM
A 7B model is small enough to train efficiently on 4 A100s (thanks, Microsoft), and at the time Mistral performed relatively well for its size.
November 27, 2024 at 5:11 PM
Yes, the model weights and all materials are openly available. We really want to offer easy-to-use tools people can access through the web without hassle. To do that, we need to do more work (we will be announcing an open-source effort soon) and need some funding to host a model endpoint.
November 27, 2024 at 5:09 PM
While BrainBench focused on neuroscience, our approach is general across the sciences, so others can adopt our template. Everything is open weight and open source. Thanks to the entire team and the expert participants. Sign up for news at braingpt.org 8/8
BrainGPT
This is the homepage for BrainGPT, a Large Language Model tool to assist neuroscientific research.
BrainGPT.org
November 27, 2024 at 2:13 PM
Finally, LLMs can be augmented with neuroscience knowledge for better performance. We tuned Mistral on 20 years of the neuroscience literature using LoRA. The tuned model, which we refer to as BrainGPT, performed better on BrainBench. 7/8
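For readers curious what tuning with LoRA looks like mechanically, here is a minimal sketch using Hugging Face transformers, peft, and datasets; the placeholder corpus, hyperparameters, and output path are my own assumptions, not the BrainGPT recipe:

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA: train small low-rank adapters on the attention projections
# instead of updating all 7B base weights.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a tiny fraction of parameters are trainable

# Placeholder corpus standing in for two decades of neuroscience literature.
texts = ["Example neuroscience abstract one.", "Example neuroscience abstract two."]

def tokenize(example):
    enc = tok(example["text"], truncation=True, max_length=512)
    enc["labels"] = enc["input_ids"].copy()  # causal LM objective: predict the next token
    return enc

train_dataset = Dataset.from_dict({"text": texts}).map(tokenize, remove_columns=["text"])

args = TrainingArguments(output_dir="lora-neuro", per_device_train_batch_size=1,
                         gradient_accumulation_steps=16, num_train_epochs=1,
                         learning_rate=2e-4, logging_steps=1)
Trainer(model=model, args=args, train_dataset=train_dataset).train()
```

Because only the adapters train, a 7B base model fits comfortably on the four-A100 setup mentioned above.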
November 27, 2024 at 2:13 PM
Indeed, follow-up work on teaming finds that joint LLM and human teams outperform either alone, because LLMs and humans make different types of errors. We offer a simple method to combine confidence-weighted judgments.
arxiv.org/abs/2408.08083 6/8
Confidence-weighted integration of human and machine judgments for superior decision-making
Large language models (LLMs) have emerged as powerful tools in various domains. Recent studies have shown that LLMs can surpass humans in certain tasks, such as predicting the outcomes of neuroscience...
arxiv.org
November 27, 2024 at 2:13 PM