Don’t forget to try our interactive widget on the project website. Test some of the encoding models in the paper and visualize brain predictivity right in your browser 🤗🧠
A special thank you to Anya, my advisor, mentor, and constant source of encouragement. Your support means the world to me, and I’m so grateful to be learning from you
LITcoder lowers barriers to reproducible, comparable encoding models and provides infrastructure for methodological rigor.
🚩 Shuffled folds inflate scores due to autocorrelation
✅ Contiguous + trimmed folds give realistic benchmarks
⚠️ Head motion reliably reduces predictivity
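A minimal sketch of the contiguous + trimmed fold idea: split TRs into contiguous blocks and drop a few TRs at each internal boundary, so temporally autocorrelated samples never sit on both sides of a train/test split. The function name, fold count, and trim width are illustrative assumptions, not LITcoder's actual API.

```python
import numpy as np

def contiguous_folds(n_trs, n_folds=5, trim=5):
    """Split TR indices into contiguous folds, trimming `trim` TRs
    at each internal fold boundary to reduce autocorrelation leakage.
    (Hypothetical helper; parameters chosen for illustration.)"""
    edges = np.linspace(0, n_trs, n_folds + 1, dtype=int)
    folds = []
    for i in range(n_folds):
        start, stop = edges[i], edges[i + 1]
        # only trim at boundaries shared with a neighboring fold,
        # not at the start/end of the run
        lo = start + trim if i > 0 else start
        hi = stop - trim if i < n_folds - 1 else stop
        folds.append(np.arange(lo, hi))
    return folds

folds = contiguous_folds(300, n_folds=5, trim=5)
```

Shuffled folds, by contrast, scatter neighboring TRs across train and test, letting the model exploit slow temporal drift rather than stimulus-driven signal.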
1️⃣ Language models outperform baselines, embeddings, and speech models in predicting the language network
2️⃣ Larger models yield higher predictivity
3️⃣ Downsampling and FIR choices substantially shape results
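One common way the temporal-modeling choice in point 3️⃣ is realized is an FIR-style design: stack time-shifted copies of the stimulus features so a linear model can fit a separate weight per hemodynamic delay. This is a generic sketch of that technique, assuming TR-aligned features; the function name and delay set are not from the paper.

```python
import numpy as np

def fir_delays(X, delays=(1, 2, 3, 4)):
    """Stack time-shifted copies of a (TRs x features) matrix so a
    linear model can learn a finite impulse response per delay.
    (Illustrative helper; delays in TRs are an assumption.)"""
    n = X.shape[0]
    cols = []
    for d in delays:
        shifted = np.zeros_like(X)
        shifted[d:] = X[:n - d]  # feature at time t predicts BOLD at t + d
        cols.append(shifted)
    return np.hstack(cols)

X = np.random.rand(100, 8)   # toy TR-aligned features
Xd = fir_delays(X)           # 4 delays -> (100, 32)
```

Because each delay gets its own columns, the regression recovers the response shape from data instead of committing to a canonical HRF.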
1️⃣ Narratives
2️⃣ Little Prince
3️⃣ LeBel
Comparing features, regions, and temporal modeling strategies.
🛑 Currently, we support language stimuli
But the framework is extensible to other modalities (video coming soon!)
1️⃣ AssemblyGenerator
2️⃣ FeatureExtractor
3️⃣ Downsampler
4️⃣ Mapping
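A toy end-to-end sketch of how the four stages could compose. The class names come from the thread, but every method signature, the toy data, and the ridge mapping are assumptions for illustration, not LITcoder's actual interface.

```python
import numpy as np

class AssemblyGenerator:
    def build(self):
        # pair stimulus words with brain responses (toy data here)
        words = ["once", "upon", "a", "time"]
        brain = np.random.rand(10, 50)          # 10 TRs x 50 voxels
        return words, brain

class FeatureExtractor:
    def extract(self, words):
        # one feature vector per word (random stand-in embeddings)
        return np.random.rand(len(words), 8)

class Downsampler:
    def downsample(self, feats, n_trs):
        # crude average of word features onto the TR grid
        idx = np.linspace(0, len(feats), n_trs + 1, dtype=int)
        return np.stack([feats[a:b].mean(0) if b > a
                         else np.zeros(feats.shape[1])
                         for a, b in zip(idx[:-1], idx[1:])])

class Mapping:
    def fit_score(self, X, y):
        # closed-form ridge regression from features to voxels
        w = np.linalg.solve(X.T @ X + np.eye(X.shape[1]), X.T @ y)
        return np.corrcoef((X @ w).ravel(), y.ravel())[0, 1]
```

The point of the modular split: swap any one stage (a different model, a different downsampling rule) while holding the rest of the pipeline fixed, so results stay comparable.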
Encoding models link AI representations to brain activity, but…
1. Pipelines are often ad hoc
2. Methodological choices vary
3. Results are hard to compare & reproduce
LITcoder fixes this with a general-purpose, modular backend.