Why am I here? Scouting for a new platform to discover and learn new papers (let’s see if it’s the one)
- Bisecle is compatible with LLMs from 1B to 13B, introducing only a small number of additional parameters and little extra computational cost.
- Bisecle can achieve superior performance even in low-resource settings.
- Bisecle establishes new SOTA results, surpassing prior methods in both accuracy (+15.79%) and forgetting reduction (an 8.49% lower forgetting rate).
- Our method Bisecle consistently outperforms others, indicating strong robustness even when training data is limited.
- A multi-directional supervision mechanism improves knowledge preservation.
- A contrastive prompt learning scheme isolates task-specific knowledge to enable efficient memory storage and explicitly mitigate update conflicts.
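The post does not spell out how the contrastive prompt scheme works; a minimal InfoNCE-style sketch of the general idea, where same-task prompt embeddings are pulled together and other-task prompts are pushed apart (function names and vectors here are hypothetical, not from the paper):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors (plain lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_prompt_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: low when the anchor prompt is close to the
    same-task prompt (positive) and far from other-task prompts
    (negatives). tau is the usual temperature hyperparameter."""
    pos = math.exp(cosine(anchor, positive) / tau)
    negs = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + negs))
```

Minimizing this loss per task keeps each task's prompt embeddings in a distinct region, which is one common way to reduce interference when tasks share parameters.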
The first author, 1st-year student Mehdi Jafari, is attending his first academic conference #ACL2025.
* LLMs Can Represent and Retain ToM-related Constructs: The study investigated whether LLMs could represent and retain ToM-related constructs and found evidence supporting this ability.
* ToM-informed Alignment Improves Response Quality:
a) The extent to which the activation space of LLMs represents ToM of interlocutors,
b) Whether these representations form a consistent model of ToM,
and
c) How ToM-related features can be leveraged to generate more aligned responses.
Using ToM, we can analyse interlocutor behaviours based on the understanding of their mental and emotional states.
Round 2 of Brick by Brick 2024 has commenced! To join: www.aicrowd.com/challenges/b...
Our NeurIPS 2024 paper includes both a multi-label classification benchmark and a zero-shot forecasting benchmark.
neurips.cc/virtual/2024...
-- requires models to manage hierarchical dependencies and ensure consistency.
BTS also includes a KG that captures the relationships between TS and their physical, logical, and virtual entities.
This makes it a great case for Hierarchical Multi-Label Classification: the TS are classified across nested categories (e.g. Point>Sensor>Air Quality>CO2).
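In hierarchical multi-label classification, predicting a leaf category typically also implies all of its ancestor categories. A small sketch of that label expansion, using the Point>Sensor>Air Quality>CO2 example from the post (the function name and separator convention are illustrative, not from the benchmark):

```python
def path_to_labels(path, sep=">"):
    """Expand a leaf label path into the full set of ancestor labels,
    so a classifier predicting 'CO2' is also supervised on its parents.
    Example: 'A>B>C' -> ['A', 'A>B', 'A>B>C'].
    """
    parts = path.split(sep)
    return [sep.join(parts[: i + 1]) for i in range(len(parts))]

# Each TS then gets a multi-hot target over the union of all such labels.
labels = path_to_labels("Point>Sensor>Air Quality>CO2")
```

Evaluating per level of the hierarchy (rather than only at the leaves) is what lets such benchmarks measure whether models handle the hierarchical dependencies consistently.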