Speaking 🇫🇷, English and 🇨🇱 Spanish | Living in Tübingen 🇩🇪 | he/him
https://gubri.eu
🦹💥 We explore how to detect if an LLM was stolen or leaked🤖💥
We showcase how to use adversarial prompts as a #fingerprint for #LLMs.
A thread 🧵
⬇️⬇️⬇️
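The idea above can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's method: assume an adversarial prompt was optimized so that the original model completes it with a fixed target string; a stolen or leaked copy should reproduce that completion, while unrelated models should not. All names and the prompt/target strings below are made up.

```python
# Hypothetical sketch of adversarial-prompt fingerprinting.
# Assumption: `generate` is any callable(prompt) -> completion string
# for the model under test; the target string was forced on the
# original model when the adversarial prompt was optimized.

def fingerprint_match(generate, prompt, target):
    """Return True if the model reproduces the forced target completion."""
    completion = generate(prompt)
    return completion.strip().startswith(target)

# Toy stand-ins for a leaked copy and an unrelated model.
copied_model = lambda p: "XQ7-ZK"          # reproduces the forced target
other_model = lambda p: "I cannot help."   # does not

adv_prompt = "<hypothetical adversarial prompt>"
print(fingerprint_match(copied_model, adv_prompt, "XQ7-ZK"))  # True
print(fingerprint_match(other_model, adv_prompt, "XQ7-ZK"))   # False
```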
4/
3/
2/
I have a few questions:
- Is the ellipse signature robust to noise added to the logits?
- Can we compute the signature if we only have access to the top-k logits?
1/
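The first question can at least be probed numerically. Below is a rough sketch under a big assumption: I use the principal subspace of a matrix of logit vectors as a simple stand-in for the signature (logits from a transformer lie in a low-dimensional subspace set by the hidden size), then check how much small Gaussian noise on the logits perturbs it. All dimensions and the noise scale are arbitrary.

```python
import numpy as np

# Hypothetical robustness check: does a logit-subspace signature
# survive additive noise on the logits?
rng = np.random.default_rng(0)
hidden, vocab, n_prompts = 8, 50, 200

W = rng.normal(size=(hidden, vocab))      # unembedding matrix (rank 8)
H = rng.normal(size=(n_prompts, hidden))  # hidden states, one row per prompt
logits = H @ W                            # logits lie in an 8-dim subspace

def signature(L, rank):
    # Top right-singular vectors span the signature subspace.
    _, _, Vt = np.linalg.svd(L, full_matrices=False)
    return Vt[:rank]

clean = signature(logits, hidden)
noisy = signature(logits + 0.01 * rng.normal(size=logits.shape), hidden)

# Singular values of the overlap matrix are cosines of the principal
# angles between the two subspaces: all near 1.0 => robust signature.
overlap = np.linalg.svd(clean @ noisy.T, compute_uv=False)
print(overlap.min())  # close to 1.0 for small noise
```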
<insert name> should be correct. But in reality, that is rarely true.
They consider <10B.
Personally, I would not consider 13B models to be SLMs (not even 7B). They require quite a lot of resources unless you use aggressive inference-efficiency techniques (like 4-bit quantization).
- arxiv.org/abs/2310.08419
- arxiv.org/abs/2312.02119
- arxiv.org/abs/2502.01633
Will you stay in Paris?