Neurosymbolic AI, Generative Modeling
https://sbadredd.github.io/
🔨 Rank bottlenecks in KGEs:
At Friday's "Salon des Refusés" I will present @sbadredd.bsky.social's new work on how rank bottlenecks limit knowledge graph embeddings.
arxiv.org/abs/2506.22271
We show how linearity prevents KGEs from scaling to larger graphs + propose a simple solution using a Mixture of Softmaxes (borrowed from the LLM literature) to break the limitation at a low parameter cost. 🔨
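For the curious, here is a minimal PyTorch sketch of what a mixture-of-softmaxes head over candidate tails can look like, assuming the usual (head, relation) → tail ranking setup. The class name, the per-component query projections, and the choice of k = 4 are illustrative assumptions, not details from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoSTailScorer(nn.Module):
    """Illustrative mixture-of-softmaxes (MoS) head for KGE tail ranking.

    A single softmax over <query, entity> dot products is rank-limited by the
    embedding dimension; mixing K softmaxes, each from its own query
    projection, lifts that limit at the cost of a few extra linear layers.
    """

    def __init__(self, num_entities: int, dim: int, k: int = 4):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.query_proj = nn.Linear(2 * dim, k * dim)  # K query vectors per (h, r)
        self.mix_proj = nn.Linear(2 * dim, k)          # mixture weights per (h, r)
        self.k, self.dim = k, dim

    def forward(self, head_emb: torch.Tensor, rel_emb: torch.Tensor) -> torch.Tensor:
        ctx = torch.cat([head_emb, rel_emb], dim=-1)               # (B, 2d)
        queries = self.query_proj(ctx).view(-1, self.k, self.dim)  # (B, K, d)
        logits = queries @ self.entity_emb.weight.T                # (B, K, E)
        components = F.softmax(logits, dim=-1)                     # K softmaxes over entities
        weights = F.softmax(self.mix_proj(ctx), dim=-1)            # (B, K) mixture weights
        return (weights.unsqueeze(-1) * components).sum(dim=1)     # (B, E) mixed distribution
```

The log of the mixed distribution is no longer a bilinear function of the embeddings, which is what lets it escape the rank limit of a single linear scoring layer.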
🧠 Neurosymbolic Diffusion Models: Thursday's poster session.
Going to NeurIPS? @edoardo-ponti.bsky.social and @nolovedeeplearning.bsky.social will present the paper in San Diego Thu 13:00
arxiv.org/abs/2505.13138
🏆 Rodrigo de Salvo Braz was here to accept the award.
This is groundwork for recent NeSy approaches like DeepSeaProbLog and the probabilistic algebraic layer.
We use a path-based RL model to generate explanations for predictions. We do so by rewarding path properties that make good explanations 〰️
Great work led by Susana Nunes & @catiapesquita.bsky.social 👏
👉 arxiv.org/pdf/2509.02276
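As a purely illustrative reading of "rewarding path properties", here is a hypothetical Python reward over a candidate explanation path. The individual terms, their weights, and the relation_weight callable are made-up stand-ins for the sketch, not the reward actually used in the paper:

```python
def path_reward(path, predicted_tail, relation_weight,
                w_target=1.0, w_info=0.5, w_length=0.1):
    """Hypothetical reward for a candidate explanation path.

    `path` is a list of (relation, entity) hops from the head entity;
    `relation_weight` is an assumed callable scoring how informative a
    relation is (e.g. from an ontology). All terms are illustrative.
    """
    if not path:
        return 0.0
    # 1. The path should actually reach the predicted entity.
    reaches_target = 1.0 if path[-1][1] == predicted_tail else 0.0
    # 2. Prefer hops over relations judged informative.
    informativeness = sum(relation_weight(rel) for rel, _ in path) / len(path)
    # 3. Penalise long paths: shorter chains read as better explanations.
    length_penalty = len(path) - 1
    return (w_target * reaches_target
            + w_info * informativeness
            - w_length * length_penalty)

# Example: a two-hop path that ends at the predicted entity.
r = path_reward([("treats", "DrugX"), ("interactsWith", "ProteinY")],
                predicted_tail="ProteinY",
                relation_weight=lambda rel: 1.0)
```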
If you are around and want to chat about these topics, let's meet for coffee!
Read more 👇
We are looking forward to your works on:
🔌 #circuits and #tensor #networks 🕸️
⏳ normalizing #flows 💨
⚖️ scaling #NeSy #AI 🦕
🚅 fast and #reliable inference 🔍
...& more!
please share 🙏
🙌 Our special track in the Journal of Artificial Intelligence Research (JAIR) on "Integration of Logical Constraints in Deep Learning" is the right venue for you! 🙌
CfP: www.jair.org/index.php/ja...
Deadline: May 31, 2025