gwencheni.bsky.social
Gwen Cheni
@gwencheni.bsky.social
33 followers 21 following 79 posts
Building stealth AI+bio. Prev @KhoslaVentures @indbio @sosv🧬💻 @ucsf🌉 @jpmorgan @GoldmanSachs @yale @UChicago @LMU_Muenchen
Reposted by Gwen Cheni
“The science of today is the technology of tomorrow.”
— Edward Teller
Emergent properties:

Thinking time steadily improved throughout the training process 😳
Uses Group Relative Policy Optimization (GRPO) instead of Proximal Policy Optimization (PPO): forgoes the critic model (which would be the same size as the policy model) and instead estimates the baseline from group scores, using the average reward of multiple samples, which reduces memory use.
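A minimal sketch of that baseline trick (naming and shapes are my own, not DeepSeek's code): sample G completions per prompt, score them, and normalize each reward by the group's mean and standard deviation instead of querying a critic.

```python
import torch

def grpo_advantages(group_rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """GRPO-style advantages (sketch): no critic network needed.

    group_rewards: [num_prompts, G] rewards for G sampled completions per
    prompt. Each sample's advantage is its reward normalized by the
    group's mean and std, which replaces the critic's value baseline.
    """
    mean = group_rewards.mean(dim=1, keepdim=True)
    std = group_rewards.std(dim=1, keepdim=True)
    return (group_rewards - mean) / (std + eps)

# Toy example: 2 prompts, 4 sampled completions each (0/1 rewards).
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
print(grpo_advantages(rewards))
```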
The secret sauce is the rewards: ground truth computed by hardcoded rules. A learned reward model can easily be hacked by RL.
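To make that concrete, a toy sketch (mine, not the paper's code) of a hardcoded reward: deterministic format and exact-match checks against a known answer leave no learned model for the policy to exploit. The 0.1 partial credit for right format but wrong answer is an illustrative choice.

```python
import re

def rule_based_reward(response: str, ground_truth: str) -> float:
    """Hardcoded reward: verifiable rules, no learned model to game."""
    # Format rule: the final answer must appear inside \boxed{...}.
    match = re.search(r"\\boxed\{([^}]*)\}", response)
    if match is None:
        return 0.0  # fails the format check
    # Accuracy rule: exact match against the known ground truth.
    return 1.0 if match.group(1).strip() == ground_truth else 0.1

print(rule_based_reward(r"... so the answer is \boxed{42}", "42"))  # 1.0
```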
In addition to being open source, DeepSeek-R1 is significant because it's pure reinforcement learning (RL), with no supervised fine-tuning (SFT) “cold start.” Reminiscent of AlphaZero (which mastered Go, Shogi, and Chess from scratch, without learning from human grandmasters' games).
DeepSeek-R1: pure reinforcement learning (RL), no supervised fine-tuning (SFT), no chain-of-thought (CoT) #1minPapers 🧵👇
13. Janet Woodcock (former FDA): potential to look at prospective studies for certain rare indications, instead of only randomized controlled trials.
12. Scott Gottlieb @scottgottliebmd.bsky.social: 50% of oncology INDs at the FDA are from China.
11. Bob Nelson: the market is provisionally open. If you already have a strong shareholder base and the book is ready, then the market's open. Biotech IPOs are funding events: ARCH doesn't view IPOs as exits, and will stay 3–4 yrs past the IPO until a clinical milestone.
10. Big pharmas acquiring AI model teams is rare (Prescient Design was a one-off); partnerships are more common, and pharmas claim to have their own teams.
9. Org structure matters in decision making. E.g., Merck is organized into Research vs Development; J&J is organized along indication areas. Do you invest on risk, or on inflection points?
8. Every pharma is interested in obesity, but also careful: there are already 3 players, and it's hard to differentiate.
7. Pharmas need to look over their shoulders before billion-dollar acquisitions, in case generics with the same MOA come out of China in a few years. One pharma CEO: “we have to get the cost of R&D down to be competitive.”
6. Large deals tend to result in cost cuts, not topline growth, and this industry trades on topline growth rate. Bolt-ons plus mega-billion-dollar deals (a barbell strategy) may be the pattern in 2025.
5. 2023 was a record M&A year at $130bn. 2024 was a digestion year: not horrible by number of deals, but mostly private deals because the capital markets were closed. Scale is important in pharma; it drives how much R&D is allocated. The previous admin was against large deals; the new admin is not.
4. The IRA shifted focus to bigger cancers, and that may be here to stay. Biologics and small-molecule timelines are not aligned: 13 vs 9 years before price negotiation. Small molecules have challenges with tox, and only have 9 yrs to recoup the investment. There could hopefully be bipartisan support to even out this 9 vs 13.
3. Saw a lot of fast-following the last few years: with 3–4 drugs on the same MOA, it's hard to get a return. Do VCs shift to lower-risk, lower-reward investments instead?
2. 80% of last year's IPOs are below water, so capitalize your company such that you aren't dependent on an IPO. Have optionality. Is M&A the goal? If you are taking a drug to market, you may not have options other than to IPO.
Takeaways from JPM Healthcare Conf 2025 #JPM2025 Having survived the past two years of biotech winter and the current political uncertainties, the crowd was pretty cautiously optimistic for dealflow to recover. And yes, the word “agentic” should have been a drinking game. 🧵👇
An SLM as a process preference model (PPM) predicts reward labels for each reasoning step. Q-values can reliably distinguish positive (correct) steps from negative ones. Using preference pairs and a pairwise ranking loss, instead of direct Q-values, eliminates the inherent noise. 6/n
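A hedged sketch of that objective (identifiers are mine): a Bradley–Terry-style loss only asks the PPM to rank a correct step above an incorrect one from the same partial solution, rather than to regress the noisy absolute Q-values.

```python
import torch
import torch.nn.functional as F

def ppm_pairwise_loss(score_pos: torch.Tensor, score_neg: torch.Tensor) -> torch.Tensor:
    """Pairwise ranking loss (sketch): only the ordering of steps matters.

    score_pos / score_neg: PPM scores for positive (correct) and negative
    (incorrect) steps from the same partial solution. Minimizing
    -log(sigmoid(pos - neg)) pushes correct steps above incorrect ones
    without trusting the noisy absolute Q-values.
    """
    return -F.logsigmoid(score_pos - score_neg).mean()

pos = torch.tensor([2.1, 0.7])   # scores for Q-ranked "good" steps
neg = torch.tensor([0.3, -0.5])  # scores for "bad" steps
print(ppm_pairwise_loss(pos, neg))
```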
The SLM samples candidate nodes, each generating a CoT and corresponding Python code. Only nodes whose code executes successfully are retained. MCTS automatically assigns (self-annotates) a Q-value to each intermediate step based on its contribution: more successful trajectories through a step = higher Q. 5/n
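A minimal sketch of that self-annotation (function and names are mine): estimate each step's Q from the outcomes of the rollouts that passed through it, so steps on more successful trajectories score higher.

```python
from collections import defaultdict

def self_annotate_q(rollouts):
    """Q-value self-annotation from MCTS rollouts (sketch).

    rollouts: list of (steps, success) pairs, where steps is the sequence
    of intermediate reasoning steps on one trajectory and success says
    whether it reached the correct final answer. A step shared by more
    successful trajectories gets a higher Q.
    """
    visits, wins = defaultdict(int), defaultdict(int)
    for steps, success in rollouts:
        for step in steps:
            visits[step] += 1
            wins[step] += int(success)
    return {step: wins[step] / visits[step] for step in visits}

rollouts = [(["s1", "s2_good"], True),
            (["s1", "s2_good"], True),
            (["s1", "s2_bad"], False)]
print(self_annotate_q(rollouts))  # s2_good -> 1.0, s2_bad -> 0.0, s1 -> ~0.67
```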
Process reward modeling (PRM) provides fine-grained feedback on intermediate steps because incorrect intermediate steps significantly decrease data quality in math. 4/n
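A toy contrast (my own, with hypothetical scorers): an outcome-level reward only checks the final answer, while a process-level reward scores every step, so a flawed derivation that luckily lands on the right answer gets filtered out of the training data.

```python
def outcome_reward(final_answer: str, truth: str) -> float:
    # Outcome-level feedback: only the end result is checked.
    return 1.0 if final_answer == truth else 0.0

def process_reward(step_scores: list[float], threshold: float = 0.5) -> float:
    # Process-level feedback: every intermediate step must look valid,
    # so bad-step trajectories are rejected even if the answer is right.
    if all(score >= threshold for score in step_scores):
        return min(step_scores)
    return 0.0

# Right answer reached through a flawed middle step:
print(outcome_reward("42", "42"))       # 1.0, kept by outcome reward
print(process_reward([0.9, 0.2, 0.8]))  # 0.0, pruned by process reward
```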
Result: “4 rounds of self-evolution with millions of synthesized solutions for 747k math problems … it improves Qwen2.5-Math-7B from 58.8% to 90.0% and Phi3-mini-3.8B from 41.4% to 86.4%, surpassing o1-preview by +4.5% and +0.9%.” 3/n