Lionel Yelibi @ neurips2025
@spiindoctor.bsky.social
Research Scientist. Houston, TX.
Research interests: Complexity Sciences, Matrix Decomposition, Clustering, Manifold Learning, Networks, Synthetic (numerical) data, Portfolio optimization. 🇨🇮🇿🇦
Pinned
Weekend project on signal separation: How can we isolate a weak, nonlinear signal when it's mixed with a dominant, linear one?
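One plausible two-step approach to the question in the pinned post can be sketched in a few lines. This is my own toy illustration (synthetic data, a least-squares detrend, then a spectral look at the residual), not the author's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 500)
# Dominant linear component plus a weak nonlinear (sinusoidal) signal and noise.
y = 3.0 * x + 0.1 * np.sin(12.0 * np.pi * x) + 0.02 * rng.standard_normal(x.size)

# Step 1: fit and subtract the dominant linear trend via least squares.
coeffs = np.polyfit(x, y, deg=1)
residual = y - np.polyval(coeffs, x)

# Step 2: the residual now carries the weak nonlinear signal; its spectrum
# should peak near the sine's frequency (6 cycles over the unit interval).
spectrum = np.abs(np.fft.rfft(residual))
peak_cycles = int(np.argmax(spectrum[1:]) + 1)  # skip the DC bin
print(peak_cycles)  # expected to be close to 6
```

This only works cleanly when the linear part is well captured by the trend model; a nonlinear signal correlated with the trend would partially leak into the fit.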
Reposted by Lionel Yelibi @ neurips2025
The enshittification of scientific publishing marches on. First arXiv had to clamp down on submissions thanks to a flood of AI-written papers.

Now ICLR authors say their peer reviews were churned out by AI: reviews full of hallucinated content and useless, vague feedback.
Major AI conference flooded with peer reviews written fully by AI
Controversy has erupted after 21% of manuscript reviews for an international AI conference were found to be generated by artificial intelligence.
www.nature.com
November 30, 2025 at 4:38 PM
Reposted by Lionel Yelibi @ neurips2025
#NeurIPS2025 paper: Scalable Feature Learning on Huge Knowledge Graphs for Downstream Machine Learning

Combining contrastive learning with message passing markedly improves the features produced by graph embeddings, and the approach scales to huge graphs.
It taught us a lot about graph feature learning 👇
1/10
November 28, 2025 at 3:46 PM
I hope you guys did not forget this masterpiece. We have come so far!
November 28, 2025 at 10:23 PM
the chapters of a phd dissertation don't actually need to be strongly connected. It gives me peace of mind.
November 28, 2025 at 12:17 AM
Reposted by Lionel Yelibi @ neurips2025
Our Montefiore Science with AI lab will be at #NeurIPS2025 presenting 1 paper at the main conference and 3 papers at workshops. If you are attending, feel free to reach out to the crew to discuss science, AI, or just to say hi! (I won't attend this year unfortunately 🌱)
November 27, 2025 at 1:37 PM
okay so technically the sheer quantity of posters at this point requires you to use an LLM to sort out which ones might be interesting to check out? I spent 30 mins doing ctrl+F with a bunch of keywords. It's a bit tedious
#neurips2025
November 27, 2025 at 6:32 AM
each poster session at #neurips2025 has about 1000 posters. stupid question but isn't that too much?
November 27, 2025 at 5:11 AM
Reposted by Lionel Yelibi @ neurips2025
Competition between simple and complex contagion on temporal networks
Elsa Andres, Romualdo Pastor-Satorras, Michele Starnini, and Márton Karsai, Phys. Rev. Research 7, 043088. Behavioral adoptions of individuals are influenced by their peers in different ways. While in some cases an individual may change behavior after a single incoming influence, in other cases multiple cumulated attempts of social influence are necessary for the same outcome. These two mechanisms, known as simple and complex contagion, often occur together in social contagion phenomena, yet their distinguishability based on the observable contagion dynamics is challenging. In this paper we define a social contagion model evolving on temporal networks where individuals can switch between contagion mechanisms. We explore three spreading scenarios: predominated by simple or complex contagion, or where the dominant mechanism changes during the unfolding process. We propose analytical and numerical methods relying on global spreading observables to identify which of these three scenarios characterizes a social spreading outbreak. This work offers insights into social contagion dynamics on temporal networks, without assuming prior knowledge about the contagion mechanism driving the adoptions of individuals. Read the full article at: link.aps.org
sco.lt
November 26, 2025 at 11:03 PM
okay so clearly the better platform for neurips2025 is not going to be this one :(
November 26, 2025 at 4:01 PM
Reposted by Lionel Yelibi @ neurips2025
New study by @iaciac.bsky.social + co-authors and published on 𝘯𝘱𝘫 𝘴𝘤𝘪𝘦𝘯𝘤𝘦 𝘰𝘧 𝘧𝘰𝘰𝘥 models global cuisines as networks of ingredient pairings, revealing unique culinary signatures and patterns, with AI models able to identify a cuisine from just a few recipes.
www.nature.com/articles/s41...
The networks of ingredient combinations as culinary fingerprints of world cuisines - npj Science of Food
www.nature.com
November 26, 2025 at 12:12 AM
They were never gone.
November 26, 2025 at 2:21 AM
Reposted by Lionel Yelibi @ neurips2025
Yale University | Postdoc and PhD fellowships in linguistics, cognitive science, and AI - Applications open on a rolling basis
📆 Nov 25, 2025
Home
Overview of the Position: The Yale Department of Linguistics seeks candidates for a Postdoctoral Associate in Computational Linguistics, who would work under the guidance of Professor Tom McCoy. Applicants...
rtmccoy.com
November 24, 2025 at 2:55 PM
Reposted by Lionel Yelibi @ neurips2025
Our team just released a comprehensive and accessible review of Signed Networks — two years in the making! Theory, methods, applications, all in one place. Feedback welcome.
arxiv.org/abs/2511.17247
Signed Networks: theory, methods, and applications
Signed networks provide a principled framework for representing systems in which interactions are not merely present or absent but qualitatively distinct: friendly or antagonistic, supportive or confl...
arxiv.org
November 24, 2025 at 10:08 AM
Just made quiet posters my main tab, the discover tab sucks.
November 24, 2025 at 5:05 AM
Reposted by Lionel Yelibi @ neurips2025
Being Interdisciplinary feels like practicing non-attachment. Different disciplines come in & out of focus in waves, each a whole world. Engaging with philosophy gives access to different ontologies than engaging in neuroscience or AI. This helps us evaluate each field from within & outside itself.
November 23, 2025 at 11:35 PM
Actually no. You need confirmation. With Grok we have witnessed it. With other models you actually have to do this work instead of jumping to conclusions which may be baseless.
If one model is transparently manipulated, you should assume the others are manipulated — just more skillfully.

Grok is sloppy about it.
Other companies are subtle about it.
The only difference is competence, not intent.
November 24, 2025 at 1:56 AM
Totally random, but now that I'm touching on Sturm-Liouville theory, learning about operators, seeing the connection with the Schrödinger equation, and feeling much more mature about linear algebra than I was in my 2011 ugrad quantum mechanics, I realize they throw a lot at students in that course 🫠
November 23, 2025 at 6:17 PM
Reposted by Lionel Yelibi @ neurips2025
I had a popular account with a valuable audience and my Twitter payout was $80 a month-ish, to the point where I disabled monetization instead of uploading my ID. Payouts are only material if you live in a developing country, so “guy in Nigeria posting right-wing Amerislop” has taken over the site.
Twitter pays people based on engagement (views, retweets, comments, etc). It appears that many MAGA accounts are based abroad and they use AI technology to generate low-effort rage bait.

My guess is that this will get worse as AI tech improves. For instance, fake videos of minorities doing crime.
November 23, 2025 at 3:31 PM
The discover tab on this app has way too much politics
November 23, 2025 at 8:46 AM
During my MSc I experimented with genetic algos and simulated annealing. Genetic algos were so slow, which is why I am confused whenever they're mentioned alongside neural networks, given how scalable gradient-based optimization has been. I am also a bystander in that arena, so...
Evolutionary Algorithms for optimizing LLM weights

Gradient descent and backpropagation have a lot of problems, alignment becomes a nightmare. Evolutionary algos fix this, but they don’t scale

A recent paper, EGGROLL, makes it computationally feasible to do now

www.alphaxiv.org/abs/2511.16652
November 23, 2025 at 8:41 AM
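For context on why evolutionary methods sidestep backprop entirely, here is a minimal sketch of a generic evolution-strategy step on a toy objective. This is my own illustration of the general idea, not EGGROLL's algorithm:

```python
import numpy as np

def es_step(theta, fitness, pop=50, sigma=0.1, lr=0.05, rng=None):
    """One step of a simple evolution strategy: sample Gaussian
    perturbations, score them, and move along the fitness-weighted
    average of the noise. No gradients or backprop involved."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((pop, theta.size))           # noise population
    scores = np.array([fitness(theta + sigma * e) for e in eps])
    scores -= scores.mean()                                # baseline subtraction
    return theta + lr / (pop * sigma) * eps.T @ scores     # estimated ascent step

# Toy objective: maximize -||theta - 1||^2, optimum at all-ones.
fitness = lambda t: -np.sum((t - 1.0) ** 2)

theta = np.zeros(5)
rng = np.random.default_rng(1)
for _ in range(300):
    theta = es_step(theta, fitness, rng=rng)
print(np.round(theta, 2))  # converges near the optimum at all-ones
```

The scaling pain the thread mentions is visible here: each step costs `pop` full evaluations of the objective, which is why naive ES struggles at LLM scale.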
You don't have to worry about where I am tweeting from.
November 23, 2025 at 8:00 AM
Need more tech crypto and finance bros on this platform. Badly.
November 23, 2025 at 1:26 AM
Reposted by Lionel Yelibi @ neurips2025
Factor Learning Portfolio Optimization Informed by Continuous-Time Finance Models

Sinong Geng, Houssam Nassif, Zhaobin Kuang, Anders Max Reppen, K. Ronnie Sircar

Action editor: Reza Babanezhad Harikandeh

https://openreview.net/forum?id=KLOJUGusVE

#portfolio #finance #financial
November 21, 2025 at 5:18 AM
Reposted by Lionel Yelibi @ neurips2025
One thing about PCA/embeddings/political-leaning that should get more attention is the role of zero or “the origin”. It’s often special in a way that depends upon how you do the embedding.

This post is a good example of that.

Once you accept that the origin is special, then….
November 20, 2025 at 6:28 PM
Timeline full of people losing their minds because markets are tanking.
November 20, 2025 at 7:31 PM