RoboPapers

Chris Paxton and Michael Cho

Chris Paxton & Michael Cho geek out over robotics papers with paper authors. robopapers.substack.com

  1. Ep#78: Three Eras of Robot Learning

    1 DAY AGO

    Ep#78: Three Eras of Robot Learning

    Robotics has changed dramatically over the last eight years. Ted has been at the cutting edge of robot learning throughout this period, spending those eight years at Google Brain/Google DeepMind, and he has identified three eras of robot learning:

    * The Era of Existence Proofs - trying different methods like QT-Opt and on-robot RL
    * The Era of Foundation Models - transitioning to data collection and clean objectives (i.e. supervised learning)
    * The Era of Scaling - orders of magnitude more data and larger models, enabling reasoning, long-horizon actions, and cross-embodiment transfer

    Something only succeeds when everything goes right. Behavior cloning, for example, seemed stuck at a 60-70% success rate on key tasks until his team rewrote their learning stack — at which point it hit 95-99%+ success rates. For most of those eight years, something was wrong: the stack wasn't quite right, the learning algorithms were wrong, the data didn't exist, and hardware and operations were not mature enough. But they kept working on these problems, over and over, until they finally arrived at remarkable breakthroughs.

    Some key trends now:

    * Reasoning models for robotics
    * Long-horizon, precision-oriented tasks, like making coffee from Physical Intelligence or GPU assembly from Skild
    * Cross-embodiment transfer
    * Hardware and model co-design
    * Results are nice, but capabilities matter even more — and academics are going to have trouble keeping up with the compute and resources available to companies

    Watch Episode 78 of RoboPapers, with Michael Cho and Jiafei Duan, to learn more!

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit robopapers.substack.com

    1hr 11min
  2. Ep#77: DreamDojo: A Generalist Robot World Model from Large-Scale Human Videos

    29 APR

    Ep#77: DreamDojo: A Generalist Robot World Model from Large-Scale Human Videos

    World models have many different uses, from evaluation to training data generation to robot planning. DreamDojo is a new foundation world model that allows for impressively general and long-horizon interaction, generating coherent videos for interaction sequences over a minute long. It works in a wide range of environments and even generalizes to previously-unseen ones. We talked to Shenyuan Gao and William Liang about how they built DreamDojo, and about what tricks were necessary to scale world model learning on data with sparse action labels: pretraining on 44,000 hours of human data and adapting to a wide variety of robots, environments, and skills. Watch Episode #77 of RoboPapers with Michael Cho and Chris Paxton now to learn more! A rough sketch of the latent-action trick follows this entry.

    Abstract

    Being able to simulate the outcomes of actions in varied environments will revolutionize the development of generalist agents at scale. However, modeling these world dynamics, especially for dexterous robotics tasks, poses significant challenges due to limited data coverage and scarce action labels. As an endeavor towards this end, we introduce DreamDojo, a foundation world model that learns diverse interactions and dexterous controls from 44k hours of egocentric human videos. Our data mixture represents the largest video dataset to date for world model pretraining, spanning a wide range of daily scenarios with diverse objects and skills. To address the scarcity of action labels, we introduce continuous latent actions as unified proxy actions, enhancing interaction knowledge transfer from unlabeled videos. After post-training on small-scale target robot data, DreamDojo demonstrates a strong understanding of physics and precise action controllability. We also devise a distillation pipeline that accelerates DreamDojo to a real-time speed of 10.81 FPS and further improves context consistency. Our work enables several important applications based on generative world models, including live teleoperation, policy evaluation, and model-based planning. Systematic evaluation on multiple challenging out-of-distribution (OOD) benchmarks verifies the significance of our method for simulating open-world, contact-rich tasks, paving the way for general-purpose robot world models.

    Learn More

    Project Page: https://dreamdojo-world.github.io/
    arXiv: https://arxiv.org/abs/2602.06949
    GitHub: https://github.com/NVIDIA/DreamDojo
    Original thread on X

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit robopapers.substack.com
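    As a quick illustration of the continuous latent action idea (a sketch of the general technique, not the authors' implementation): an inverse-dynamics encoder compresses each frame transition into a continuous latent, and the world model must predict the next frame from that latent, so the latent is forced to carry the interaction information even when no real action labels exist. All architectures and shapes below are illustrative assumptions.

    ```python
    # Minimal sketch: continuous latent actions as proxy action labels for
    # unlabeled video. Module shapes here are illustrative assumptions.
    import torch
    import torch.nn as nn

    class LatentActionWorldModel(nn.Module):
        def __init__(self, frame_dim=512, latent_dim=32):
            super().__init__()
            # Inverse-dynamics encoder: (o_t, o_{t+1}) -> continuous latent action
            self.encoder = nn.Sequential(
                nn.Linear(2 * frame_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
            # Forward model: (o_t, latent action) -> predicted o_{t+1}
            self.predictor = nn.Sequential(
                nn.Linear(frame_dim + latent_dim, 256), nn.ReLU(), nn.Linear(256, frame_dim))

        def forward(self, obs_t, obs_next):
            z = self.encoder(torch.cat([obs_t, obs_next], dim=-1))    # proxy "action"
            pred_next = self.predictor(torch.cat([obs_t, z], dim=-1))
            return pred_next, z

    # Training on unlabeled video: reconstructing the next frame through the
    # latent bottleneck forces z to encode what happened between frames.
    model = LatentActionWorldModel()
    obs_t, obs_next = torch.randn(8, 512), torch.randn(8, 512)
    pred, z = model(obs_t, obs_next)
    loss = nn.functional.mse_loss(pred, obs_next)
    loss.backward()
    ```

    Per the abstract, the same interface can then be post-trained on small-scale robot data, where real action labels replace the inferred latents.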

    1hr 3min
  3. Ep#76: OmniXtreme: Breaking the Generality Barrier in High-Dynamic Humanoid Control

    27 APR

    Ep#76: OmniXtreme: Breaking the Generality Barrier in High-Dynamic Humanoid Control

    We’ve seen lots of incredible videos of humanoid robots dancing, doing martial arts, and running up walls — but these extreme behaviors usually come from individual, highly specialized policies. Now OmniXtreme shows us how to achieve behaviors that push the limits of humanoid motion with a single unified policy, by (1) training a flow-based motion generative model, and (2) doing residual RL post-training to handle complex real-world dynamics. Yunsheng Wang and Shaohang Zhu join us to talk about their work towards general-purpose, high-performance humanoid robot control. Watch Episode #76 of RoboPapers, with Michael Cho and Jiafei Duan, now! A rough sketch of the residual post-training idea follows this entry.

    Abstract

    High-fidelity motion tracking serves as the ultimate litmus test for generalizable, human-level motor skills. However, current policies often hit a "generality barrier": as motion libraries scale in diversity, tracking fidelity inevitably collapses - especially for real-world deployment of high-dynamic motions. We identify this failure as the result of two compounding factors: the learning bottleneck in scaling multi-motion optimization and the physical executability constraints that arise in real-world actuation. To overcome these challenges, we introduce OmniXtreme, a scalable framework that decouples general motor skill learning from sim-to-real physical skill refinement. Our approach uses a flow-matching policy with high-capacity architectures to scale representation capacity without interference-intensive multi-motion RL optimization, followed by an actuation-aware refinement phase that ensures robust performance on physical hardware. Extensive experiments demonstrate that OmniXtreme maintains high-fidelity tracking across diverse, high-difficulty datasets. On real robots, the unified policy successfully executes multiple extreme motions, effectively breaking the long-standing fidelity-scalability trade-off in high-dynamic humanoid control.

    Learn More

    Project Page: https://extreme-humanoid.github.io/
    GitHub: https://github.com/Perkins729/OmniXtreme
    arXiv: https://arxiv.org/abs/2602.23843
    Original thread on X

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit robopapers.substack.com
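    As a rough illustration of residual post-training (the general pattern, not the OmniXtreme code): freeze a pretrained base policy and train a small correction network with RL against real dynamics. All names, dimensions, and the correction scale below are assumptions.

    ```python
    # Sketch of residual post-training: frozen base policy plus a small
    # learned correction. Shapes and the residual scale are assumptions.
    import torch
    import torch.nn as nn

    class ResidualPolicy(nn.Module):
        def __init__(self, base_policy: nn.Module, obs_dim=64, act_dim=29, scale=0.1):
            super().__init__()
            self.base = base_policy.eval()             # pretrained motion policy, frozen
            for p in self.base.parameters():
                p.requires_grad_(False)
            self.residual = nn.Sequential(             # small trainable correction head
                nn.Linear(obs_dim, 128), nn.Tanh(), nn.Linear(128, act_dim))
            self.scale = scale                         # keep corrections small

        def forward(self, obs):
            with torch.no_grad():
                a_base = self.base(obs)                # generalist motion proposal
            return a_base + self.scale * self.residual(obs)
    ```

    Because only the residual head is updated during refinement, the RL phase can adapt to real actuation without destroying the motion priors learned in pretraining.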

    48 min
  4. Ep#75: TOPReward: Token Probabilities as Hidden Zero-Shot Rewards for Robotics

    23 APR

    Ep#75: TOPReward: Token Probabilities as Hidden Zero-Shot Rewards for Robotics

    Reinforcement learning on robots is highly limited by our ability to design good reward functions; this means that designing strong, generalizable reward functions is a key enabler of progress on real-world reinforcement learning. But we already have a very general class of models: VLMs. Wouldn’t it be great, then, if you could just use a VLM to generate rewards? TOPReward generates rewards directly from the probability of the “True” token in a VLM question-answering response; this makes it easy to implement, incredibly general, and surprisingly powerful (see the sketch at the end of this entry). We talked to Shirui Chen and Cole Harrison to learn more. Watch Episode #75 of RoboPapers, with Chris Paxton and Jiafei Duan, now!

    Abstract

    While Vision-Language-Action (VLA) models have seen rapid progress in pretraining, their advancement in Reinforcement Learning (RL) remains hampered by low sample efficiency and sparse rewards in real-world settings. Developing generalizable process reward models is essential for providing the fine-grained feedback necessary to bridge this gap, yet existing temporal value functions often fail to generalize beyond their training domains. We introduce TOPReward, a novel, probabilistically grounded temporal value function that leverages the latent world knowledge of pretrained video Vision-Language Models (VLMs) to estimate robotic task progress. Unlike prior methods that prompt VLMs to directly output progress values, which are prone to numerical misrepresentation, TOPReward extracts task progress directly from the VLM's internal token logits. In zero-shot evaluations across 130+ distinct real-world tasks and multiple robot platforms (e.g., Franka, YAM, SO-100/101), TOPReward achieves 0.947 mean Value-Order Correlation (VOC) on Qwen3-VL, dramatically outperforming the state-of-the-art GVL baseline, which achieves near-zero correlation on the same open-source model. We further demonstrate that TOPReward serves as a versatile tool for downstream applications, including success detection and reward-aligned behavior cloning.

    Learn More

    Project Page: https://topreward.github.io/webpage/
    arXiv: https://arxiv.org/abs/2602.19313

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit robopapers.substack.com
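    To make the trick concrete, here is a minimal sketch of reading P("True") out of a model's next-token logits, assuming a HuggingFace-style interface. The prompt wording and the `encode_frame_prompt` helper are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch: use P("True") from a VLM's next-token logits as a
    # zero-shot progress reward. `encode_frame_prompt` is a hypothetical
    # helper that packs the image and question into model inputs.
    import torch
    import torch.nn.functional as F

    def top_style_reward(model, tokenizer, frame, task: str) -> float:
        question = f'Has the task "{task}" been completed? Answer True or False.'
        inputs = encode_frame_prompt(tokenizer, frame, question)  # hypothetical
        with torch.no_grad():
            logits = model(**inputs).logits[0, -1]     # logits for the next token
        probs = F.softmax(logits, dim=-1)
        true_id = tokenizer.convert_tokens_to_ids("True")
        return probs[true_id].item()                   # reward in [0, 1]
    ```

    Reading the probability off the logits, rather than asking the model to write a progress number as text, is what sidesteps the numerical misrepresentation issue the abstract describes.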

    1hr 1min
  5. Ep#73: VideoManip: Dexterous Manipulation Policies from RGB Human Videos via 3D Hand-Object Trajectory Reconstruction

    18 APR

    Ep#73: VideoManip: Dexterous Manipulation Policies from RGB Human Videos via 3D Hand-Object Trajectory Reconstruction

    Teaching robots to perform dexterous manipulation tasks currently requires teleoperation, which limits demonstration quality, speed, and scalability. Instead, why not use human videos? The problem is that a human hand isn’t a robot hand, so data must be retargeted, using simulation to resolve issues like collisions and interpenetration when controlling the hand. In VideoManip, Hongyi Chen and co-authors built a system to solve this problem, taking in RGB videos of humans performing manipulation tasks and using them to create accurate simulations with which to learn robot policies. Watch Episode #73 of RoboPapers, hosted by Michael Cho and Chris Paxton, now to learn more! A rough sketch of the retargeting step follows this entry.

    Abstract

    Multi-finger robotic hand manipulation and grasping are challenging due to the high-dimensional action space and the difficulty of acquiring large-scale training data. Existing approaches largely rely on human teleoperation with wearable devices or specialized sensing equipment to capture hand-object interactions, which limits scalability. In this work, we propose VIDEOMANIP, a device-free framework that learns dexterous manipulation directly from RGB human videos. Leveraging recent advances in computer vision, VIDEOMANIP reconstructs explicit 3D robot-object trajectories from monocular videos by estimating human hand poses and object meshes, and retargets the reconstructed human motions to robotic hands for manipulation learning. To make the reconstructed robot data suitable for dexterous manipulation training, we introduce hand-object contact optimization with interaction-centric grasp modeling, as well as a demonstration synthesis strategy that generates diverse training trajectories from a single video, enabling generalizable policy learning without additional robot demonstrations. In simulation, the learned grasping model achieves a 70.25% success rate across 20 diverse objects using the Inspire Hand. In the real world, manipulation policies trained from RGB videos achieve an average 62.86% success rate across seven tasks using the LEAP Hand, outperforming retargeting-based methods by 15.87%. Project videos are available at this http URL.

    Learn More

    Project page: https://videomanip.github.io/
    arXiv: https://arxiv.org/abs/2602.09013

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit robopapers.substack.com
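    As a rough illustration of the retargeting step (the general keypoint-matching technique, not the paper's optimizer), the sketch below fits robot hand joint angles so that the robot's fingertips, computed through a differentiable forward-kinematics function, match fingertip positions estimated from video. The `robot_fk` helper, joint count, and optimizer settings are all illustrative assumptions.

    ```python
    # Sketch of keypoint-based hand retargeting via gradient descent.
    # `robot_fk` is a hypothetical differentiable FK: joint angles -> (5, 3)
    # fingertip positions. Joint count and step sizes are assumptions.
    import torch

    def retarget_frame(human_fingertips, robot_fk, n_joints=16, steps=200, lr=0.05):
        q = torch.zeros(n_joints, requires_grad=True)   # robot hand joint angles
        opt = torch.optim.Adam([q], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            robot_tips = robot_fk(q)                    # (5, 3) fingertip positions
            loss = ((robot_tips - human_fingertips) ** 2).sum()
            loss.backward()
            opt.step()
        return q.detach()
    ```

    Pure keypoint matching can leave the retargeted hand colliding with or floating off the object, which is why the paper layers hand-object contact optimization on top.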

    44 min
  6. Ep#72: SONIC: Supersizing Motion Tracking for Natural Humanoid Whole-Body Control

    15 APR

    Ep#72: SONIC: Supersizing Motion Tracking for Natural Humanoid Whole-Body Control

    How can we build a general-purpose “foundation model” for robot motion? Zhengyi Luo joins us to talk about SONIC, which uses motion tracking as a foundational task for humanoid robot control, and scales humanoid control training to 21k GPU hours and over 100 million frames of data. The result: a model with a generally-useful embedding space that can be controlled by a VLA, or from human video, to perform a wide variety of humanoid whole-body-control tasks, including zero-shot transfer to previously unseen motions. Watch Episode 72 of RoboPapers, with Michael Cho and Jiafei Duan, now! A rough sketch of a motion-tracking objective follows this entry.

    Abstract

    Despite the rise of billion-parameter foundation models trained across thousands of GPUs, similar scaling gains have not been shown for humanoid control. Current neural controllers for humanoids remain modest in size, target a limited set of behaviors, and are trained on a handful of GPUs. We show that scaling model capacity, data, and compute yields a generalist humanoid controller capable of natural, robust whole-body movements. We position motion tracking as a scalable task for humanoid control, leveraging dense supervision from diverse motion-capture data to acquire human motion priors without manual reward engineering. We build a foundation model for motion tracking by scaling along three axes: network size (1.2M to 42M parameters), dataset volume (100M+ frames from 700 hours of motion capture), and compute (21k GPU hours). Beyond demonstrating the benefits of scale, we further show downstream utility through: (1) a real-time kinematic planner bridging motion tracking to tasks such as navigation, enabling natural and interactive control, and (2) a unified token space supporting VR teleoperation and vision-language-action (VLA) models with a single policy. Through this interface, we demonstrate autonomous VLA-driven whole-body loco-manipulation requiring coordinated hand and foot placement. Scaling motion tracking exhibits favorable properties: performance improves steadily with compute and data diversity, and learned policies generalize to unseen motions, establishing motion tracking at scale as a practical foundation for humanoid control.

    Learn More

    Project Page: https://nvlabs.github.io/GEAR-SONIC/
    arXiv: https://arxiv.org/abs/2511.07820
    Paper PDF: https://nvlabs.github.io/GEAR-SONIC/static/pdf/sonic_paper.pdf

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit robopapers.substack.com
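    To ground what "dense supervision from motion-capture data" looks like, here is a minimal sketch of a generic motion-tracking objective of the kind used across this literature; the exact terms and weights in SONIC are the paper's, and everything below is an assumption for illustration.

    ```python
    # Sketch of a generic dense motion-tracking reward: exponentiated
    # pose/velocity errors against a mocap reference frame. Weights and
    # scales are illustrative assumptions, not SONIC's actual objective.
    import torch

    def tracking_reward(robot_pose, ref_pose, robot_vel, ref_vel,
                        w_pose=0.7, w_vel=0.3):
        pose_err = torch.sum((robot_pose - ref_pose) ** 2)   # joint-position error
        vel_err = torch.sum((robot_vel - ref_vel) ** 2)      # joint-velocity error
        return (w_pose * torch.exp(-2.0 * pose_err)
                + w_vel * torch.exp(-0.1 * vel_err))         # dense, in (0, 1]
    ```

    Because every mocap frame supplies a target, the signal is dense at every timestep, which is what makes tracking scale so well compared to hand-engineered task rewards.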

    1 hr
