🚚 Moving threads about my #nlp papers from Twitter to here 🚚
How and when, and with which issues, does the text summarization community engage with responsible AI? 🤔 In this #EMNLP2023 paper, we examine reporting and research practices across 300 summarization papers published between 2020 and 2022 🧵
November 12, 2024 at 9:46 PM
Lol, my little sister went off to a festival carrying the EMNLP2023 knapsack
June 23, 2024 at 12:03 AM
A paper on the topic by Max Glockner, Ieva Raminta Staliūnaitė, James Thorne, Gisela Vallejo, Andreas Vlachos and Iryna Gurevych was accepted to TACL and has just been presented at #EMNLP2023.
📄 arxiv.org/abs/2104.00640
➡️ bsky.app/profile/ukpl...
December 21, 2023 at 10:21 AM
At #EMNLP2023, our colleague Jonathan Tonglet (@tongletj.bsky.social) presented his master's thesis, conducted at KU Leuven. Find out more about »SEER: A Knapsack approach to Exemplar Selection for In-Context HybridQA« in the thread 🧵 below:
➡️ bsky.app/profile/ukpl...
December 19, 2023 at 11:15 AM
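For readers wondering what a knapsack formulation looks like in this setting: below is a generic 0/1-knapsack sketch of picking in-context exemplars that maximize a relevance score under a prompt-token budget. The scoring and numbers are invented for illustration; SEER's actual objective and constraints differ in detail, so see the paper.

```python
# Generic 0/1-knapsack sketch for exemplar selection under a token budget.
# Values and costs are made up; SEER's real formulation is in the paper.
def select_exemplars(exemplars, budget):
    """exemplars: list of (value, token_cost) pairs; budget: max prompt tokens."""
    best = {0: (0.0, [])}  # tokens used -> (best total value, chosen indices)
    for i, (value, cost) in enumerate(exemplars):
        # iterate over a snapshot so each exemplar is used at most once
        for used, (v, chosen) in list(best.items()):
            new_used = used + cost
            if new_used <= budget and v + value > best.get(new_used, (-1.0, []))[0]:
                best[new_used] = (v + value, chosen + [i])
    return max(best.values())[1]

# (relevance to the test question, prompt length in tokens)
pool = [(0.9, 120), (0.7, 60), (0.65, 50), (0.3, 200)]
print(select_exemplars(pool, budget=150))  # -> [1, 2]
```

Note how the budget forces a trade-off: the single best exemplar (index 0) loses to two cheaper ones whose combined relevance is higher.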
Many models produce outputs that are hard to verify for an end-user. Our new #emnlp2023 paper won an outstanding paper award! 🏆🎉
We show that providing a quality estimation model can make a user better at deciding when to rely on the model.
Paper: arxiv.org/pdf/2310.169...
December 12, 2023 at 4:54 AM
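Not the paper's code, but the interaction pattern is easy to picture: surface a quality estimate next to each output and let it gate the reliance decision. A toy sketch, with an invented threshold and scores:

```python
# Toy sketch of QE-gated reliance; threshold and scores are illustrative only.
def advise(output: str, quality_estimate: float, threshold: float = 0.7) -> str:
    """Suggest whether to rely on a model output, given a [0, 1] quality estimate."""
    verdict = "likely fine to rely on" if quality_estimate >= threshold else "verify before use"
    return f"[QE {quality_estimate:.2f}] {verdict}: {output}"

print(advise("The cat sat on the mat.", 0.91))
print(advise("The cat sat the on mat mat.", 0.35))
```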
thanks @annargrs for reminding me of this work in your great #EMNLP2023 talk at GenBench 🇸🇬
November 19, 2024 at 7:28 PM
A group photo from the poster presentation of »AmbiFC: Fact-Checking Ambiguous Claims with Evidence«, co-authored by our colleague Max Glockner together with Ieva Staliūnaitė, James Thorne, Gisela Vallejo, Andreas Vlachos and Iryna Gurevych. #EMNLP2023
December 11, 2023 at 10:39 AM
A successful EMNLP Meeting has come to an end! A group photo of our colleagues Yongxin Huang, @tongletj.bsky.social, Aniket Pramanick, Sukannya Purkayastha, Dominic Petrak and Max Glockner, who represented the UKP Lab in Singapore! #EMNLP2023
December 11, 2023 at 9:30 AM
Looking forward to the final day of EMNLP! Let me know if you want to chat about our Findings paper: “Emergent inabilities? Inverse scaling over the course of pretraining” arxiv.org/abs/2305.14681 #EMNLP #EMNLP2023
December 10, 2023 at 1:07 AM
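For context, "over the course of pretraining" means tracking the same task metric across intermediate checkpoints of one model rather than across model sizes. A minimal sketch of that setup, e.g. with EleutherAI's public Pythia checkpoints (my illustration, not the paper's evaluation code; the probe sentence is a stand-in for a real task item):

```python
# Sketch: probe the same input at successive pretraining checkpoints; a metric
# that *worsens* with more pretraining would be an instance of inverse scaling.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

CKPT = "EleutherAI/pythia-160m"  # Pythia publishes revisions like "step1000"
tok = AutoTokenizer.from_pretrained(CKPT)
probe = "The capital of France is Paris."  # stand-in for a real task item

for step in ["step1000", "step32000", "step143000"]:
    model = AutoModelForCausalLM.from_pretrained(CKPT, revision=step).eval()
    ids = tok(probe, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token NLL at this checkpoint
    print(step, f"nll={loss.item():.3f}")
```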
got a dataset and want to talk?
@LChoshen and I are in #EMNLP2023 🇸🇬
November 19, 2024 at 7:28 PM
You can find our paper here:
📃https://arxiv.org/abs/2311.00408
and our code here:
💻https://github.com/UKPLab/AdaSent
Check out the work of our authors Yongxin Huang, Kexin Wang, Sourav Dutta, Raj Nath Patel, Goran Glavaš and Iryna Gurevych! (7/🧵) #EMNLP2023
December 9, 2023 at 10:46 AM
If the base PLM is domain-adapted with a different loss, the adapter is no longer compatible, which shows up as a performance drop. (6/🧵) #EMNLP2023
December 9, 2023 at 10:44 AM
What makes the difference? 🧐
We attribute the effectiveness of the sentence encoding adapter to the consistency between the pre-training and DAPT objectives of the base PLM. (5/🧵) #EMNLP2023
December 9, 2023 at 10:44 AM
AdaSent decouples DAPT and SEPT by storing the sentence encoding abilities into an adapter, which is trained only once in the general domain and plugged into various DAPT-ed PLMs. It can match or surpass the performance of DAPT→SEPT, with more efficient training. (4/🧵) #EMNLP2023
December 9, 2023 at 10:43 AM
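A rough sketch of this plug-in idea, to make the decoupling concrete. This is not the authors' code: the paper uses bottleneck adapters, while LoRA via Hugging Face PEFT stands in for the adapter module here, and the model names are placeholders.

```python
# Sketch of the decoupling: train a sentence-encoding adapter once on a
# general-domain backbone, then attach it to any domain-adapted (DAPT-ed) copy.
# LoRA stands in for the paper's bottleneck adapters; names are placeholders.
from transformers import AutoModel
from peft import LoraConfig, PeftModel, get_peft_model

# 1) General-domain SEPT, done once: wrap the base PLM with a trainable adapter.
base = AutoModel.from_pretrained("distilbert-base-uncased")
sent_encoder = get_peft_model(base, LoraConfig(r=16, target_modules=["q_lin", "v_lin"]))
# ... contrastive sentence-embedding training (SEPT) would happen here ...
sent_encoder.save_pretrained("general-sentence-adapter")

# 2) Per domain: plug the frozen adapter into a DAPT-ed backbone, with no new SEPT run.
dapt_backbone = AutoModel.from_pretrained("distilbert-base-uncased")  # placeholder for a DAPT-ed PLM
domain_encoder = PeftModel.from_pretrained(dapt_backbone, "general-sentence-adapter")
```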
Domain-adapted sentence embeddings can be created by applying general-domain SEPT on top of a domain-adapted base PLM (DAPT→SEPT). But this requires the same SEPT procedure to be done on each DAPT-ed PLM for every domain, resulting in computational inefficiency. (3/🧵) #EMNLP2023
December 9, 2023 at 10:43 AM
In our #EMNLP2023 paper we demonstrate AdaSent's effectiveness in extensive experiments on 17 different few-shot sentence classification datasets! It matches or surpasses the performance of full SEPT on DAPT-ed PLM (DAPT→SEPT) while substantially reducing training costs. (2/🧵)
December 9, 2023 at 10:42 AM
Need a lightweight solution for few-shot domain-specific sentence classification?
We propose AdaSent!
🚀 Up to 7.2-point accuracy gain in 8-shot classification with 10K unlabeled examples
🪶 Small backbone with 82M parameters
🧩 Reusable general sentence adapter across domains
(1/🧵) #EMNLP2023
December 9, 2023 at 10:42 AM
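Downstream, the few-shot recipe on top of such a sentence encoder is pleasantly small: embed the handful of labeled sentences and fit a lightweight head. A generic sketch, with an off-the-shelf encoder standing in for an AdaSent-built one and invented data:

```python
# Few-shot sentence classification on top of a sentence encoder (a common recipe;
# the encoder and data here are illustrative, not AdaSent's checkpoints).
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for an AdaSent encoder
train_texts = ["great battery life", "screen cracked on day one",
               "fast shipping, works perfectly", "stopped charging after a week"]
train_labels = [1, 0, 1, 0]  # 2-shot per class for brevity; AdaSent evaluates 8-shot

clf = LogisticRegression().fit(encoder.encode(train_texts), train_labels)
print(clf.predict(encoder.encode(["battery died immediately"])))  # -> [0]
```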
Which factors shape #NLProc research over time? This was the topic of the talk by our colleague Aniket Pramanick at #EMNLP2023!
Learn more about the paper by him, Yufang Hou, Saif M. Mohammad & Iryna Gurevych here: 📄 arxiv.org/abs/2305.12920
December 9, 2023 at 10:12 AM
Some new theoretical and empirical results from Tiago, Clara Meister, @wegotlieb.bsky.social, me, and Ryan Cotterell on surprisal and word lengths. I was particularly intrigued to see that surprisal from better LMs correlates less with word length than surprisal from worse LMs. #EMNLP2023
December 8, 2023 at 5:49 PM
Are you interested in word lengths and natural language’s efficiency? If yes, check out our new #EMNLP2023 paper! It has everything you need: drama, suspense, a new derivation of Zipf’s law, an update to Piantadosi et al’s classic word length paper, transformers... 😄
arxiv.org/abs/2312.03897
December 8, 2023 at 5:46 PM
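If you want to poke at the length–surprisal relationship yourself, here is a minimal token-level sketch (my illustration, not the authors' code; the paper's analyses work at the word level and are far more careful):

```python
# Estimate the correlation between (sub)token length and LM surprisal.
# Token-level proxy for illustration; proper experiments aggregate
# subword surprisals into word-level values.
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

text = "Communication is efficient when predictable words are also short ones."
ids = tok(text, return_tensors="pt").input_ids
with torch.no_grad():
    logp = torch.log_softmax(lm(ids).logits[0, :-1], dim=-1)
# surprisal of token t in context: -log p(x_t | x_<t)
surprisal = -logp[torch.arange(ids.shape[1] - 1), ids[0, 1:]]

tokens = tok.convert_ids_to_tokens(ids[0].tolist())[1:]  # skip the context-free first token
lengths = [len(t.lstrip("Ġ")) for t in tokens]           # Ġ marks a leading space in GPT-2
print(spearmanr(lengths, surprisal.tolist()))
```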
If you are around at #EMNLP2023, look out for our colleague Sukannya Purkayastha, who today presented our paper on the use of Jiu-Jitsu argumentation in #PeerReview, authored by her, Anne Lauscher (Universität Hamburg) and Iryna Gurevych.
📑 arxiv.org/abs/2311.03998
December 8, 2023 at 11:07 AM
Check out the full paper on arXiv and the code on GitLab – we look forward to your thoughts and feedback! (9/9) #NLProc #eRisk #EMNLP2023
Paper 📄 arxiv.org/abs/2211.07624
Code ⌨️ gitlab.irlab.org/anxo.pvila/s...
December 8, 2023 at 10:26 AM
We also illustrate how our semantic retrieval pipeline provides interpretability for the symptom estimation by highlighting the most relevant sentences. (8/🧵) #EMNLP2023
December 8, 2023 at 10:25 AM
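Roughly, that retrieval step looks like the sketch below: rank a user's sentences by semantic similarity to a symptom description and surface the top hits as evidence. The encoder choice, symptom text, and sentences are all illustrative, not the authors' pipeline.

```python
# Rank sentences by semantic similarity to a symptom description, then show
# the top hits as evidence for the symptom estimate. All names illustrative.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")
symptom = "Loss of interest in activities I used to enjoy"  # e.g., a questionnaire item
sentences = [
    "Went hiking with friends this weekend, it was great.",
    "Lately nothing feels fun anymore, I mostly stay in bed.",
    "My new keyboard arrived and typing feels amazing.",
]
scores = util.cos_sim(encoder.encode(symptom), encoder.encode(sentences))[0]
for idx in scores.argsort(descending=True)[:2].tolist():
    print(f"{scores[idx]:.2f}  {sentences[idx]}")  # most relevant sentences, shown to the user
```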
Our approaches achieve good performance on two Reddit benchmark collections (DCHR metric). (7/🧵) #EMNLP2023
December 8, 2023 at 10:25 AM
With this aim, we introduce two data selection strategies to detect representative sentences: one unsupervised and one semi-supervised.
For the latter, we propose an annotation schema to obtain relevant training samples. (6/🧵) #EMNLP2023
December 8, 2023 at 10:25 AM