Matteo Paloni
@betterwithchem.bsky.social
470 followers 400 following 12 posts
PDRA at MME-UCL, previously PD at CBS-Montpellier ( @cbsmontpellier.bsky.social). Interested in MD simulations of order and chaos. https://scholar.google.com/citations?user=fK55KfEAAAAJ&hl=en
Pinned
betterwithchem.bsky.social
New preprint by @lavagnae.bsky.social from Barducci lab (@cbsmontpellier.bsky.social): we show how energy-consuming enzymatic reactions, like phosphorylation, regulate biomolecular condensates with complex effects on condensate stability and reactive interfaces. Any feedback welcome!
biorxiv-biophys.bsky.social
Uncovering the thermodynamic principles of enzymatic regulation in biomolecular condensates with reactive simulations https://www.biorxiv.org/content/10.1101/2025.10.13.682073v1
Reposted by Matteo Paloni
carlzimmer.com
Today my @nytimes.com colleagues and I are launching a new series called Lost Science. We interview US scientists who can no longer discover something new about our world, thanks to this year's cuts. Here is my first interview with a scientist who studied bees and fires. Gift link: nyti.ms/3IWXbiE
nyti.ms
Reposted by Matteo Paloni
giuliotesei.bsky.social
I'm hiring for a PhD position at Malmö University, Sweden!

The project will focus on molecular modelling of proteins, lipids, and biomolecular condensates at cell membranes.

More details and application form: tinyurl.com/4zm92365

Please feel free to share!

@vetenskapsradet.bsky.social | @mau.se
Snapshot of a condensate near a lipid membrane with Swedish Research Council and Malmö University logos.
Reposted by Matteo Paloni
karl-jacoby.bsky.social
Only an administration intent on committing war crimes in the present and future would stoop to calling Wounded Knee a "battle" rather than what it truly was: a massacre of over 250 Lakotas, mainly women, children, and the elderly. 1/
Reposted by Matteo Paloni
cbsmontpellier.bsky.social
Launched today: the extension project for our laboratory. 1100 m² of new space and 280 m² refurbished.
Thanks to our supervising institutions and partners for their support.
@inserm.fr @cnrs.fr @umontpellier.bsky.social
@occitanie.bsky.social @villedemontpellier.bsky.social
& the State
Reposted by Matteo Paloni
drewharwell.com
The Kimmel video: "The MAGA gang desperately trying to characterize this kid who murdered Charlie Kirk as anything other than one of them, and doing everything they can to score political points from it."
Reposted by Matteo Paloni
chercheurjuteux.com
Fascism: yet another example.

Making everything that contradicts Führer Trump disappear.
Reposted by Matteo Paloni
chercheurjuteux.com
So, have you finally seen the signs of fascism now, or is it still not enough...
Reposted by Matteo Paloni
motomatters.com
This ChatGPT prompt could have been a Bash script
Reposted by Matteo Paloni
edzitron.com
At the heart of gen AI sits a massive economic problem: LLMs are too expensive to charge a monthly fee for, and that is what every single LLM company charges. This will never work.
www.wheresyoured.at/why-everybody-is-losing-money-on-ai/#what-if-it-isnt-possible-to-make-a-profitable-ai-company
Generative AI does not fit the classic software pricing model

One of the classic booster arguments is that we're in the "growth stage" of generative AI, where companies "charge a lower price to get people through the door" before cranking up prices — the so-called "profit lever" that every lazy journalist claims exists for any product economics they don't want to think about too hard.

The problem, I'm afraid, is that generative AI does not match the traditional pricing model for software, which is predominantly sold at a flat monthly price.

In traditional software, a "user" generally doesn't cost the developer that much money. One of the big benefits of selling software at scale is that the costs don't scale with you. A web-based application may have an associated cost, but it runs on significantly cheaper, widely available servers, with operations handled by less-demanding CPUs — beefier versions of the ones you'd find in your laptop or desktop computer.

Let me get specific. Microsoft Office 365 — one of Microsoft's most profitable business units — for the most part uses CPU-based architectures for its compute, because a Word user, even one using Microsoft's cloud-based apps, doesn't require a massive amount of power: they're effectively running cloud-based versions of consumer apps. The same goes for things like Google Workspace. Google makes billions of dollars selling access to software that effectively prints money, because the infrastructural burden is mostly "can I make sure this service is available all the time" rather than "do I have the specialized hardware to do so." Its costs — even with power users — are relatively standardized. Large Language Models are an entirely different beast for several reasons, chief among them that very few models actually fit on a single GPU; instances are "sharded" across multiple GPUs (such as eight H100s).

    Large Language Models require NVIDIA GPUs, meaning that any infrastructure provider must build specialized servers full of them to provide access to said model reliably regardless of a user's location.
    A Large Language Model user's infrastructural burden varies wildly between users and use cases. While somebody asking ChatGPT to summarize an email might not be much of a burden, somebody asking ChatGPT to review hundreds of pages of documents at once — a core feature of basically any $20-a-month subscription — could eat up eight GPUs at once.
        To be very clear, a user that pays $20-a-month could run multiple queries like this a month, and there's no real way to stop them.
    Unlike most software products, any errors in producing an output from a Large Language Model have a significant opportunity cost. When a user doesn't like an output, or the model gets something wrong, or the user realizes they forgot something, the model must make further generations, and even with caching (which Anthropic has added a toll to), there's a definitive cost attached to any mistake.
    Large Language Models, for the most part, lack definitive use cases, meaning that every user is (even with an idea of what they want to do) experimenting with every input and output. In doing so, they create the opportunity to burn more tokens, which in turn creates an infrastructural burn on GPUs, which cost a lot of money to run.
    The more specific the output, the more opportunities there are for monstrous token burn, and I'm specifically thinking about coding with Large Language Models. The token-heavy nature of generating code means that any mistakes, suboptimal generations or straight-up errors will guarantee further token burn. 
        Take a look at r/Cursor or a…

This is the core problem of "hallucinations" within any Large Language Model. While many (correctly) dislike LLMs for their propensity to authoritatively state things that aren't true, the real hallucination problem is models subtly misunderstanding what a user wants, then subtly misunderstanding how to do it. As the complexity of a request increases, so too do the opportunities for these subtle mistakes, a problem that only compounds with the use of the reasoning models that are a requirement to make any coding LLM function (as they hallucinate more).

Every little "mistake" creates the opportunity for errors, which in turn creates the opportunity for the model to waste tokens generating something the user doesn't want or that will require the user to prompt the model again. And because LLMs do not have "thoughts" and are not capable of learning, there is no way for them to catch these errors.

In simpler terms, it's impossible to guarantee that a model will do anything specific, and any failure of a model to provide exactly what a user wants all but guarantees the user will ask the model to burn more tokens.

While this might be something you can mitigate when charging users based on their actual token consumption, most generative AI companies are charging users by the month, and the majority of OpenAI's revenue comes from selling monthly subscriptions. While one can rate limit a user, these limits are hard to establish in a way that actually mitigates how much a user can burn.
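The mismatch the article describes can be sketched with back-of-the-envelope arithmetic. All the numbers below (subscription price aside) are illustrative assumptions for the sake of the sketch, not figures from the article:

```python
# Sketch of the flat-fee vs. usage-cost mismatch for an LLM subscription.
# The per-token cost and token volumes are assumed, illustrative values.

MONTHLY_FEE = 20.00              # flat subscription price, USD
COST_PER_MILLION_TOKENS = 5.00   # assumed provider-side inference cost, USD

def monthly_margin(tokens_used: int) -> float:
    """Profit (or loss) on one subscriber for a month, given tokens consumed."""
    inference_cost = tokens_used / 1_000_000 * COST_PER_MILLION_TOKENS
    return MONTHLY_FEE - inference_cost

# A light user: a few short chats per day.
light = monthly_margin(500_000)     # 0.5M tokens -> $2.50 cost -> $17.50 margin

# A heavy user: month-long document review and coding sessions.
heavy = monthly_margin(30_000_000)  # 30M tokens -> $150 cost -> -$130 margin

print(f"light user margin: ${light:+.2f}")
print(f"heavy user margin: ${heavy:+.2f}")
```

Under these assumptions the provider profits on the light user and loses $130 on the heavy one, which is the article's point: with a flat fee, a handful of heavy users can erase the margin earned from many light ones.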
Reposted by Matteo Paloni
raistolo.bsky.social
“Is AI good for teaching? Let’s ask an entrepreneur whose entire business is based on that, instead of actual experts”
nytimes.com
A.I. is changing classrooms. We spoke to the co-founder of Alpha Schools about how her private K-12 schools are using A.I. to generate personalized lesson plans and enabling teachers to spend their time motivating rather than teaching students. nyti.ms/3JHIZub
MacKenzie Price, co-founder of Alpha Schools, in a dark blue shirt and orange pants sits on a wooden desk in front of a window, smiling slightly at the camera. Quote reads: "Our kids are crushing their academics, and they're doing it in a fraction of the time."
Reposted by Matteo Paloni
koroeder.bsky.social
#Postdoc opportunity to join my group at KCL to work on RNA modelling for 2 years (+1 year pot. extension). #CompChem #ChemPostDoc #PostDocJobs #RNA Apply by 17th September!

www.kcl.ac.uk/jobs/123725-...
Research Associate in Molecular Modelling of RNAs
www.kcl.ac.uk
Reposted by Matteo Paloni
samratmukhopadhyay.bsky.social
Happy to share our community note published in Nature Communications @natcomms.nature.com. In this 14-page article (34 authors from 9 countries), we discuss current practices in phase separation. We hope it will be useful to the biomolecular condensate community.
www.nature.com/articles/s41...
Reposted by Matteo Paloni
lindorfflarsen.bsky.social
If you want to learn about AWSEM, CALVADOS, Mpipi or FINCHES, I am not sure this is the best paper to read

But, hey, at least I learnt about a paper we apparently wrote in 2014 (ref 80)

FINCHES: A Computational Framework for Predicting Intermolecular Interactions in IDPs
doi.org/10.3390/ijms...
Reposted by Matteo Paloni
chercheurjuteux.com
I'm so done with the stupid claims of *Ph.D-level expert* for AIs.

If it were really Ph.D level, it would regret its choices, want to open a café in the Alps or become a goat herder, and never go through that again — or the mental health that comes with a Ph.D.

(I do love what I do, though, just kidding.)
Reposted by Matteo Paloni
dereklowe.bsky.social
New computational methods conjure up ligands for the famously difficult class of disordered proteins - but how far can they go?
Disordered Proteins Brought Into Line
www.science.org
Reposted by Matteo Paloni
mikesenters.bsky.social
Divine Right of Kings bullshit. This is as Un-American as it gets.
atrupar.com
Mike Johnson: "God miraculously saved the president's life -- I think it's undeniable -- and he did it for an obvious purpose. His presidency and his life are the fruits of divine providence. He points that out all the time and he's right to do so."
Reposted by Matteo Paloni
raistolo.bsky.social
This is actually important. Even after the test, people had the IMPRESSION of having been helped, when they had actually been impaired.
metr.org
METR @metr.org · Jul 10
At the beginning of the study, developers forecasted that they would get sped up by 24%. After actually doing the work, they estimated that they had been sped up by 20%. But it turned out that they were actually slowed down by 19%.
Reposted by Matteo Paloni
davidho.bsky.social
If people think American scientists are somehow going to land in Europe, I've got news for you about the difference between millions and billions.

www.nature.com/articles/d41...
Bar chart titled "Matters of Scale" comparing proposed US research budget cuts to the European Union's €500-million (US$571-million) "Choose Europe" fund. The chart shows:

* National Institutes of Health (NIH): $8 billion in cancelled grants and $18 billion in proposed cuts by 2026 (long orange bar).
* National Science Foundation (NSF): $5.1 billion in proposed cuts by 2026 (shorter orange bar).
* EU's Choose Europe fund: $571 million (very short blue bar).

The graphic highlights that the EU fund is much smaller in scale compared to the US budget cuts. Text above the chart explains the EU’s intention to attract US researchers in response to policy decisions by the convicted felon and rapist Donald Trump.
Reposted by Matteo Paloni
subfossilguy.bsky.social
Magnitude of current heatwave is incredible!

The forecast for Tuesday shows an unprecedented area with temp. > 40°C 😱

Never before has it been so hot over such a large area (and so early!) 🔥

The vertigo of having to live like this for the rest of your life...

Data @meteofrance.com