Lorenzo Bertolini
@lorenzoscottb.bsky.social
Researcher at the European Commission Joint Research Centre (JRC). Multimodality & interpretability in AI for health. Occasionally, AI for dream research. All views are my own
https://lorenzoscottb.github.io
I’ll be at #ACL2025NLP next week, with a paper for the #GeBNLP workshop!

In this work, we tested the ability of a task-level explainability method to trace biological-sex biases in medical image classification, using general and biomedical VLMs.

@aclmeeting.bsky.social @genderbiasnlp.bsky.social
July 25, 2025 at 4:39 PM
Introducing PreDA (prefix-based dream annotation), a set of generative LLMs tuned to annotate dream reports for 6 Hall & Van de Castle features! The paper will be presented at the LOD 25 conference later this year as a long contribution. Models are available here: huggingface.co/jrc-ai
#sleeppeeps #dream
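
If you want to try the models with the transformers library, something along these lines should work; note that the model id below is only a placeholder, so check huggingface.co/jrc-ai for the actual checkpoint names.

```python
# Illustrative only: "jrc-ai/PreDA-placeholder" is a hypothetical id,
# not a confirmed checkpoint name. Browse huggingface.co/jrc-ai for the
# released PreDA models.
from transformers import pipeline

# Assumption: PreDA models are generative (text-generation) LLMs.
annotator = pipeline(task="text-generation", model="jrc-ai/PreDA-placeholder")

report = "I was flying over my old school and felt incredibly happy."
print(annotator(report, max_new_tokens=64)[0]["generated_text"])
```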
May 15, 2025 at 3:46 PM
👋 #SleepPeeps, happy to share DReAMy's new feature!

Do you need to anonymise #dream reports? We've got you covered with anonymized, a one-line solution to find and replace entities.

Try it out in our tutorial section, or online
colab.research.google.com/drive/14hHRRC3…
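
If you just want the gist of what happens under the hood, here is a minimal sketch of entity-based anonymisation with an off-the-shelf NER model; the model and placeholder scheme are illustrative, not DReAMy's actual implementation.

```python
# Minimal sketch of entity-based anonymisation (not DReAMy's actual API):
# find named entities with an off-the-shelf NER model and replace each
# span with a [TYPE] placeholder.
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

def anonymise(report: str) -> str:
    """Replace every detected entity span with a [TYPE] placeholder."""
    redacted = report
    # Replace from the end of the string so earlier character offsets stay valid.
    for ent in sorted(ner(report), key=lambda e: e["start"], reverse=True):
        redacted = redacted[:ent["start"]] + f"[{ent['entity_group']}]" + redacted[ent["end"]:]
    return redacted

print(anonymise("Last night I dreamt that John and I were lost in Paris."))
# e.g. "Last night I dreamt that [PER] and I were lost in [LOC]."
```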
May 23, 2024 at 3:01 PM
Hello friends and #sleeppeeps, later this month I will be giving a talk on my (and related) work on #NLP, #LLMs and #dreams. Catch it if you're interested
April 11, 2024 at 6:39 PM
Lastly, we tested whether our model was robust to OoD unlabeled data from a subject with diagnosed PTSD (a veteran of the Vietnam War), and found that the model’s predictions fit the *expected* emotion distribution, without simply mimicking the training distribution.
January 31, 2024 at 8:05 PM
We also conducted an ablation experiment to understand whether performance was influenced by memorisation or by implicit statistics within different series (subsets of DreamBank), but found no significant evidence of these factors impacting the model.
January 31, 2024 at 8:04 PM
Our main results show generally strong and stable performance across most single emotions and emotion sets, aside from consistently poor performance on sadness.
January 31, 2024 at 8:04 PM
We hence reframed the task to suit the HVDC scoring method: using a multi-label setting, we trained a model to predict whether each of the 5 HVDC emotions appears in a report, independently of the others!
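
For a rough idea of what this setup looks like in code, here is a sketch with a placeholder backbone, not our training script: one independent sigmoid output per Hall & Van de Castle emotion.

```python
# Multi-label sketch: each HVDC emotion gets its own sigmoid output,
# so emotions are predicted independently of each other.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

HVDC_EMOTIONS = ["anger", "apprehension", "sadness", "confusion", "happiness"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",                        # placeholder backbone
    num_labels=len(HVDC_EMOTIONS),
    problem_type="multi_label_classification",  # BCE loss, one sigmoid per label
)

inputs = tokenizer("I was lost in a dark forest and felt scared.", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# Threshold each emotion independently at 0.5 (weights here are untrained).
print({e: bool(p > 0.5) for e, p in zip(HVDC_EMOTIONS, probs)})
```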
January 31, 2024 at 8:03 PM
Preliminary experiments showed that binary predictions from an LLM pre-trained on sentiment analysis do not correlate with the general sentiment of a report, nor with single positive/negative emotions.
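
For context, that kind of baseline check looks roughly like the sketch below (the model and reports are illustrative); the resulting binary labels can then be correlated with the annotated emotions.

```python
# Rough sketch of the baseline check: off-the-shelf binary sentiment
# predictions on dream reports (illustrative, not our exact setup).
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default English sentiment model

reports = [
    "I was flying over the sea and felt wonderful.",
    "Someone was chasing me and I could not scream.",
]
for report in reports:
    pred = sentiment(report)[0]
    print(pred["label"], round(pred["score"], 3), "-", report)
# These binary labels can then be correlated with the HVDC emotion
# annotations, e.g. via a point-biserial or Spearman correlation.
```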
January 31, 2024 at 8:03 PM
Hello #sleeppeeps, happy to share that our paper "Automatic Annotation of #Dream Report’s Emotional Content with Large Language Models" was accepted for publication at the #EACL2024 Computational Linguistics and Clinical Psychology Workshop! Here's a short 🧵
January 31, 2024 at 8:02 PM
Quite amazing to see people are already using DReAMy in their research at #WorldSleep2023!!! Don't forget to stop by DReAMy's oral presentation (Room A07, 4:58) if you want to learn more about #LLMs for #dream report analysis and annotation!
October 24, 2023 at 6:31 PM
Hello #sleeppeeps! #WorldSleep2023 is a day away. Don't forget to stop by our oral presentation. We'll talk about our experiments with #LLMs on #dream reports, the library itself, and how to use it. Don't #sleep on it 😀
October 19, 2023 at 3:39 PM
On a not-so-sciency note, we're planning on handing out some DReAMy stickers at #WorldSleep23!
#scisky #sleeppeeps
October 12, 2023 at 8:01 PM
Lastly, probably my favourite part: DReAMy!

Given the encouraging results, I wanted to empower the dream research community with these tools, as well as other useful classic NLP tools. So I built a fully open-source Python library, designed for non-expert users.
October 6, 2023 at 4:59 PM
More recently, I was at the #eSLEEPEurope23 virtual congress, with a poster on tuning generative LLMs to produce interpretable Hall & Van de Castle features!
October 6, 2023 at 4:46 PM
The work also found some interesting evidence that partially fits with the relevant literature. For example, blind participants did write significantly different reports, which were easier for GPT-2 to model.
October 6, 2023 at 4:43 PM
This evidence is even clearer in a new version of the manuscript that is currently in progress.

Given items with the same number of words, the perplexity scores are (on average) notably lower (hence better).
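
For reference, per-report GPT-2 perplexity can be computed along these lines (the standard exp-of-mean-cross-entropy definition; not necessarily our exact evaluation script):

```python
# Sketch of per-report GPT-2 perplexity: exponentiate the mean
# token-level cross-entropy returned by the language model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("I was walking through my childhood home, but every door was locked."))
```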
October 6, 2023 at 4:39 PM
Crucially, the proposed model does not seem to learn specific emotion distributions associated with specific subsets of DreamBank (i.e., its Series), as demonstrated by the ablation experiment!
October 6, 2023 at 4:29 PM
Hello #sleeppeeps and #SciSky! Later this week, DReAMy and I will be at #eSleepEurope23, with a poster on how to train generative #LLMs to annotate #dream reports for different Hall & Van de Castle features. You can find the models and a demo at huggingface.co/DReAMy-lib
October 3, 2023 at 6:29 AM