An LLM-powered plugin designed to simplify and accelerate workflow creation in ComfyUI, an open-source AI art platform, by providing intelligent node/model recommendations and automated workflow generation.
AReaL is an asynchronous reinforcement learning system that efficiently trains large language models for reasoning tasks by maximizing GPU usage and decoupling generation from training.
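A minimal sketch of the decoupling idea, using toy stand-ins rather than AReaL's actual API: rollout workers keep producing trajectories (possibly with slightly stale weights) while the trainer consumes whatever is ready from a queue, so neither side sits idle waiting for the other.

```python
import queue
import threading
import time

rollout_queue: "queue.Queue[str]" = queue.Queue(maxsize=64)

def generation_worker(n_rollouts: int) -> None:
    # Stand-in for the inference engine: in an async RL system this keeps
    # decoding with whatever policy weights it currently has, even mid-update.
    for i in range(n_rollouts):
        time.sleep(0.01)                      # pretend decoding takes a while
        rollout_queue.put(f"trajectory-{i}")  # hand finished rollouts to the trainer

def training_loop(n_steps: int, batch_size: int = 4) -> None:
    # Stand-in for the learner: it trains on whichever rollouts are ready,
    # so training GPUs never wait for the slowest generation to finish.
    for step in range(n_steps):
        batch = [rollout_queue.get() for _ in range(batch_size)]
        print(f"step {step}: updated policy on {len(batch)} rollouts")

producer = threading.Thread(target=generation_worker, args=(32,))
producer.start()
training_loop(n_steps=8)
producer.join()
```

Because generation runs ahead of training, the real system must correct for off-policy staleness (e.g., with an importance-weighted objective), which is where most of the algorithmic work lies.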
Pseudo-simulation is a new evaluation paradigm for autonomous vehicles that blends the realism of real-world data with the generalization power of simulation, enabling robust, scalable testing without the need for interactive environments.
SmolVLA is a compact, open-source vision-language-action (VLA) model built for low-cost training and real-world deployment on consumer hardware, enabling efficient language-driven robot control without sacrificing performance.
This paper introduces ProRL, a method that uses long-horizon reinforcement learning to unlock new reasoning strategies in LLMs—strategies that base models cannot access, even with extensive sampling.
This paper shows that punishing wrong answers, without explicitly rewarding correct ones, can be surprisingly effective for improving reasoning in large language models trained via reinforcement learning with verifiable rewards (RLVR).
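A rough sketch of the reward design as I read it (illustrative values, not the paper's code): verified-correct answers receive zero reward and verified-wrong ones a penalty, so the policy gradient only pushes probability mass away from failures.

```python
from typing import Callable

def negative_only_reward(completion: str, reference: str,
                         verify: Callable[[str, str], bool]) -> float:
    """Penalize verified-wrong answers; give no explicit bonus for correct ones.
    The 0 / -1 values are illustrative, not taken from the paper."""
    return 0.0 if verify(completion, reference) else -1.0

# Toy verifier: exact match on the final answer string.
exact_match = lambda out, ref: out.strip() == ref.strip()
print(negative_only_reward("42", "42", exact_match))  # 0.0  -> no gradient pressure
print(negative_only_reward("41", "42", exact_match))  # -1.0 -> suppress this sample
```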
This paper challenges the idea that Chain-of-Thought (CoT) prompting enables true reasoning in LLMs, arguing instead that CoT acts as a structural constraint that guides models to imitate the appearance of reasoning.
This note investigates a sudden rise in gradient norms during the late stages of LLM training and identifies a surprising cause: the interplay between weight decay, normalization layers, and scheduled learning rate decay.
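A sketch of the mechanism, as I reconstruct the standard scale-invariance argument (simplified to SGD with decoupled weight decay):

```latex
\begin{align*}
&\text{Normalization makes the loss scale-invariant in } w:\;
 L(cw) = L(w) \;\Rightarrow\; \nabla L(cw) = \tfrac{1}{c}\nabla L(w),\quad g \perp w.\\
&\text{Update: } w_{t+1} = w_t - \eta\,(g_t + \lambda w_t)
 \;\Rightarrow\; \|w_{t+1}\|^2 \approx (1-\eta\lambda)^2\|w_t\|^2 + \eta^2\|g_t\|^2
 \quad(\text{since } g_t \perp w_t).\\
&\text{Equilibrium: } 2\eta\lambda\|w\|^2 \approx \eta^2\|g\|^2,
 \;\text{and}\; \|g\| \propto 1/\|w\|
 \;\Rightarrow\; \|w\| \propto \eta^{1/4},\quad \|g\| \propto \eta^{-1/4}.
\end{align*}
```

Under this account, as the schedule anneals $\eta$, the equilibrium weight norm shrinks and the measured gradient norm climbs, with no pathology in the data or the optimizer state.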
This paper proves that any general agent capable of reliably completing diverse, goal-directed tasks must implicitly learn a predictive model of its environment—challenging the notion that model-free learning is sufficient for general intelligence.
This paper investigates how a small subset of high-entropy tokens—termed "forking tokens"—drives the performance of reinforcement learning with verifiable rewards (RLVR) in reasoning tasks for large language models.
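A rough sketch of the idea (illustrative, not the paper's implementation): compute each generated token's sampling entropy and confine the policy-gradient update to the highest-entropy minority, e.g. the ~20% echoed in the title.

```python
import torch
import torch.nn.functional as F

def forking_token_mask(logits: torch.Tensor, top_frac: float = 0.2) -> torch.Tensor:
    """Boolean mask over positions selecting the highest-entropy ("forking") tokens.

    logits: [seq_len, vocab] pre-softmax scores at each generated position.
    top_frac is illustrative; in an RLVR step the per-token loss would be
    multiplied by this mask so low-entropy tokens contribute no gradient.
    """
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)  # [seq_len]
    k = max(1, int(top_frac * logits.size(0)))
    threshold = entropy.topk(k).values.min()
    return entropy >= threshold

mask = forking_token_mask(torch.randn(100, 32_000))
print(mask.float().mean())  # ~0.2
```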
Check out the top 10 papers for the week👇
- Beyond the 80/20 Rule: High-Entropy Minority Tokens Drive Effective Reinforcement Learning for LLM Reasoning
HiDream-I1 is a 17B-parameter open-source image generation model using a novel sparse Diffusion Transformer (DiT) with dynamic Mixture-of-Experts (MoE) to deliver state-of-the-art image quality in seconds while reducing computation.
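For readers unfamiliar with the sparse-MoE part of that claim, here is a generic top-1 expert-routed feed-forward block (a minimal illustration of the technique, not HiDream-I1's actual architecture):

```python
import torch
import torch.nn as nn

class MoEFeedForward(nn.Module):
    """Top-1 Mixture-of-Experts FFN: every token activates one expert, so
    parameter count grows with n_experts while per-token compute does not."""
    def __init__(self, dim: int = 64, hidden: int = 256, n_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: [tokens, dim]
        gate = self.router(x).softmax(-1)   # routing probabilities per token
        idx = gate.argmax(-1)               # top-1 expert assignment
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = idx == e
            if sel.any():                   # only routed tokens pay this expert's FLOPs
                out[sel] = gate[sel, e, None] * expert(x[sel])
        return out

print(MoEFeedForward()(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```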
This work introduces the Catfish Agent, a specialized large language model designed to disrupt premature consensus—called Silent Agreement—in multi-agent clinical decision-making systems by injecting structured dissent to improve diagnostic accuracy.
WebDancer is an end-to-end autonomous web agent designed for complex, multi-step information seeking. It pairs a data-centric construction pipeline with a staged training recipe to enable robust reasoning and decision-making in real-world web environments.
The authors introduce AIOS 1.0, a platform that helps language models better understand and interact with computers by recasting the machine as a structured, contextual environment. Built on this, LiteCUA is a lightweight agent that uses this structured context to perform digital tasks.
The Darwin Gödel Machine (DGM) is a self-improving AI system that rewrites its own code to enhance coding performance. Inspired by Gödel machines and Darwinian evolution, it uses empirical validation and an archive of past agents to drive open-ended, recursive improvement.
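The outer loop is easy to picture; here is a toy skeleton of my reading of it (an archive-based evolutionary search, not the authors' code):

```python
import random

def dgm_loop(seed, evaluate, self_modify, steps: int = 50):
    """Keep an archive of every agent ever produced, branch from any archived
    parent (not just the current best, to stay open-ended), let the child
    rewrite its own code, and archive it only if it still runs and scores."""
    archive = [(seed, evaluate(seed))]
    for _ in range(steps):
        parent, _ = random.choice(archive)   # open-ended parent selection
        child = self_modify(parent)          # agent edits its own code
        score = evaluate(child)              # empirical validation on a benchmark
        if score is not None:                # discard children that break
            archive.append((child, score))
    return max(archive, key=lambda entry: entry[1])

# Toy stand-ins: an "agent" is just a string, and the benchmark rewards length.
best, score = dgm_loop("seed", evaluate=len,
                       self_modify=lambda a: a + random.choice("xyz"))
print(best, score)
```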
This paper analyzes a fundamental barrier in reinforcement learning (RL) for large language models (LLMs): the sharp early collapse of policy entropy, which limits exploration and caps downstream performance.
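For reference, the quantity that collapses is the average token-level entropy of the sampling distribution; a quick monitoring sketch (generic, not the paper's code):

```python
import torch
import torch.nn.functional as F

def mean_policy_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Average H(pi(. | s_t)) over a batch of generations.

    logits: [batch, seq_len, vocab]. Logged during RL training, this is the
    curve whose sharp early drop the paper identifies as the barrier.
    """
    logp = F.log_softmax(logits, dim=-1)
    entropy = -(logp.exp() * logp).sum(-1)  # [batch, seq_len]
    return entropy.mean()

print(mean_policy_entropy(torch.randn(2, 16, 32_000)))
```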
AgriFM is a multi-source temporal remote sensing foundation model tailored for crop mapping. It introduces a modified Video Swin Transformer backbone for unified spatiotemporal processing of satellite imagery from MODIS, Landsat-8/9, and Sentinel-2.
This paper introduces WorldEval, a real-to-video evaluation framework that uses world models to assess real-world robot manipulation policies in a scalable, safe, and reproducible way. It avoids costly real-world evaluations by simulating robot actions via generated videos.
This paper introduces RLIF, a paradigm where LLMs improve reasoning using intrinsic signals instead of external rewards. The authors propose INTUITOR, which uses a model’s self-confidence—measured as self-certainty—as the sole reward signal.
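One common formulation of self-certainty (my assumption about the measure, not INTUITOR's verbatim code) is the average KL divergence of the next-token distribution from uniform, so sharper, more confident distributions score higher:

```python
import math
import torch
import torch.nn.functional as F

def self_certainty(logits: torch.Tensor) -> torch.Tensor:
    """Mean KL(uniform || p_t) over generated positions; used as the entire
    reward signal, so no verifier or gold label is required."""
    logp = F.log_softmax(logits, dim=-1)       # [seq_len, vocab]
    vocab = logits.size(-1)
    # KL(U || p) = -log(V) - mean_v log p(v); equals 0 for a uniform p.
    kl = -math.log(vocab) - logp.mean(dim=-1)  # [seq_len]
    return kl.mean()

print(self_certainty(torch.randn(16, 32_000)))
```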
The paper introduces Paper2Poster, the first benchmark for automated academic poster generation, and PosterAgent, a visual-in-the-loop multi-agent system that converts research papers into high-quality posters using open-source models.
Check out the top 10 papers for the week👇
- Paper2Poster: Towards Multimodal Poster Automation from Scientific Papers
This paper shows that hidden states (embeddings) of large language models (LLMs) contain rich economic information that can be used to estimate and impute economic and financial statistics more accurately than the LLMs’ text outputs.
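A minimal sketch of the probing setup (assumed pipeline; the checkpoint and the toy numbers are placeholders): embed each entity with the LLM's hidden state, then fit a simple linear probe against known statistics and use it to impute missing ones.

```python
import numpy as np
import torch
from sklearn.linear_model import Ridge
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; any LLM whose hidden states are accessible would do.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

def embed(text: str) -> np.ndarray:
    """Last-layer hidden state of the final token, used as the text's embedding."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.last_hidden_state[0, -1].numpy()

# Toy training pairs (values in $1k, roughly indicative only).
train = [("GDP per capita of France", 44.0), ("GDP per capita of India", 2.4)]
X = np.stack([embed(text) for text, _ in train])
y = np.array([value for _, value in train])

probe = Ridge(alpha=1.0).fit(X, y)  # linear read-out of the hidden state
print(probe.predict(embed("GDP per capita of Spain")[None]))
```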
LightLab presents a diffusion-based method for precise, parametric control over light sources in a single image, enabling users to edit light intensity and color with photorealistic results. The approach fine-tunes a diffusion model on paired real photographs together with synthetically rendered images.
This paper introduces Maya, an open-source multilingual Vision-Language Model (VLM) designed to enhance performance on vision-language tasks across eight diverse languages. Maya addresses the underperformance of existing VLMs on low-resource languages and varied cultural contexts.