More robustness and explainabilities 🧐 for Health AI.
shanchen.dev
However, I feel like the division goes even further… currently, it seems like RL is taking over LM post-training, while many NLProc folks are working on new applications enabled by language models.
Here, we found some early evidence that SAE features trained on language models are still meaningful to LLaVA.
More details are in the post, with more to come soon!
@JackGallifant
@oldbayes.bsky.social
@daniellebitterman.bsky.social
@jannahastings.bsky.social
@daniellebitterman.bsky.social
And all our awesome collaborators who are not on the right platform yet! 🦋
Happy Thanksgiving! 🍂
All our data can be downloaded from our website: crosscare.net