Work done at @aiatmeta.bsky.social & @cmurobotics.bsky.social with @carohiguera.bsky.social, @mukadammh, @francois_hogan, and many others.
Code coming soon! Check out our paper: arxiv.org/abs/2505.11420 & website: akashsharma02.github.io/sparsh-skin-... for more details.
6/6
5/6
4/6
We decorrelate signals by tokenizing a 1 s window of tactile data, and we condition the encoder on the robot hand configuration by feeding sensor positions as input.
3/6
Sparsh-skin improves performance on tactile tasks by over 56% compared to end-to-end methods and by over 41% compared to prior work.
It is trained via self-supervision on Xela sensor data, so no labeled data is needed!
2/6
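Self-supervision on raw skin signal typically means a masked-prediction objective: hide some tokens, encode the rest, and regress the hidden ones, with no labels anywhere. A minimal sketch under that assumption (the actual Sparsh-skin objective may differ):

```python
import torch
import torch.nn as nn

def masked_ssl_loss(tokens, encoder, decoder, mask_ratio=0.5):
    """One self-supervised step: hide a random subset of tokens and
    reconstruct them from the visible ones. Purely illustrative."""
    B, N, D = tokens.shape
    mask = torch.rand(B, N, device=tokens.device) < mask_ratio  # True = hidden
    visible = tokens.masked_fill(mask.unsqueeze(-1), 0.0)       # zero out masked tokens
    pred = decoder(encoder(visible))                            # (B, N, D)
    # Regress only the masked positions -- no labeled data needed.
    return ((pred - tokens) ** 2)[mask].mean()

# toy usage with linear stand-ins for the transformer encoder/decoder:
enc, dec = nn.Linear(256, 256), nn.Linear(256, 256)
loss = masked_ssl_loss(torch.randn(2, 160, 256), enc, dec)
loss.backward()
```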
1. Sparsh: tactile reps for vision-based sensors sparsh-ssl.github.io
2. [Releasing soon] Sparsh-skin: tactile reps for full-hand magnetic skins
3. [Coming soon] Reps for multimodal touch, fusing tactile images, audio, motion, and pressure