Human motion is instinctual. We know how to interact with the world around us almost without thinking about it. Ziwen and Shaoting joined us on RoboPapers to talk about their ambitious Project Instinct, which provides the tools, algorithms, and environments needed to build humanoid whole-body control that can handle contact with the environment. Watch Episode #64 of RoboPapers with Michael Cho and Jiafei Duan now!

Abstract: We present a unified framework spanning algorithms, environments, dataset curation, and deployment for instinct-level intelligence on humanoid robots.

Project Site: https://project-instinct.github.io/
GitHub for InstinctLab: https://github.com/project-instinct/instinctlab

Embrace Collisions

Perform contact-rich humanoid robot tasks, such as getting up from the ground.

Abstract: Previous humanoid robot research treats the robot as a bipedal mobile manipulation platform, where only the feet and hands contact the environment. However, humans use all body parts to interact with the world: we sit in chairs, get up from the ground, and roll on the floor. Making contact with the environment using body parts other than the feet and hands poses significant challenges for both model-predictive control and reinforcement learning methods. An unpredictable contact sequence makes it nearly impossible for model-predictive control to plan ahead in real time. The success of zero-shot sim-to-real reinforcement learning for humanoids depends heavily on GPU-accelerated rigid-body simulation and simplified collision detection. Because prior humanoid research lacks extreme torso movement, all the other components become non-trivial to design, such as termination conditions, motion commands, and reward functions. To address these challenges, we propose a general humanoid motion framework that takes discrete motion commands and controls the robot's motor actions in real time.
Using a GPU-accelerated rigid-body simulator, we train a humanoid whole-body control policy that follows high-level motion commands in the real world in real time, even under stochastic contacts, extreme robot base rotations, and not-entirely-feasible motion commands.

Project Site: https://project-instinct.github.io/embrace-collisions/
arXiv: https://arxiv.org/abs/2502.01465

Deep Whole-Body Parkour

Abstract: Current approaches to humanoid control generally fall into two paradigms: perceptive locomotion, which handles terrain well but is limited to pedal gaits, and general motion tracking, which reproduces complex skills but ignores environmental capabilities. This work unites these paradigms to achieve perceptive general motion control. We present a framework in which exteroceptive sensing is integrated into whole-body motion tracking, permitting a humanoid to perform highly dynamic, non-locomotion tasks on uneven terrain. By training a single policy to perform multiple distinct motions across varied terrain features, we demonstrate the non-trivial benefit of integrating perception into the control loop. Our results show that this framework enables robust, highly dynamic multi-contact motions, such as vaulting and dive-rolling, on unstructured terrain, significantly expanding the robot's traversability beyond simple walking or running.

Project Site: https://project-instinct.github.io/deep-whole-body-parkour/
arXiv: https://arxiv.org/abs/2601.07701

Hiking in the Wild

Abstract: Achieving robust humanoid hiking in complex, unstructured environments requires moving from reactive proprioception to proactive perception. However, integrating exteroception remains a significant challenge: mapping-based methods suffer from state-estimation drift (for instance, LiDAR-based methods handle torso jitter poorly), and existing end-to-end approaches often struggle with scalability and training complexity.
Specifically, some previous works using virtual obstacles are implemented case by case. In this work, we present Hiking in the Wild, a scalable, end-to-end perceptive parkour framework designed for robust humanoid hiking. To ensure safety and training stability, we introduce two key mechanisms: a foothold safety mechanism that combines scalable Terrain Edge Detection with Foot Volume Points to prevent catastrophic slippage on edges, and a Flat Patch Sampling strategy that eliminates reward hacking by generating feasible navigation targets. Our approach uses a single-stage reinforcement learning scheme, mapping raw depth inputs and proprioception directly to joint actions without relying on external state estimation. Extensive field experiments on a full-size humanoid demonstrate that our policy enables robust traversal of complex terrain at speeds up to 2.5 m/s. The training and deployment code is open-sourced to facilitate reproducible research and deployment on real robots with minimal hardware modifications.

Project Site: https://project-instinct.github.io/hiking-in-the-wild/
arXiv: https://arxiv.org/abs/2601.07718

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit robopapers.substack.com
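A common thread across the three papers above is a single neural policy that maps raw sensing (proprioception, plus raw depth images in the later work) and a motion command directly to joint-level actions, trained with GPU-accelerated reinforcement learning. Here is a minimal sketch of that policy interface in Python/NumPy; every dimension, name, and the randomly initialized network are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Illustrative dimensions (assumptions, not taken from the papers)
N_JOINTS = 29                    # full-size humanoid actuator count
PROPRIO_DIM = 3 * N_JOINTS + 6   # joint pos/vel/torque + base angular velocity & gravity direction
DEPTH_SHAPE = (32, 32)           # downsampled raw depth image
CMD_DIM = 4                      # motion command (e.g., one-hot discrete command)

rng = np.random.default_rng(0)

def mlp_params(sizes):
    """Randomly initialized small MLP (a stand-in for a trained policy network)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def policy(params, depth, proprio, command):
    """Map raw depth + proprioception + command to bounded joint position targets."""
    x = np.concatenate([depth.ravel(), proprio, command])
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)   # ReLU hidden layers
    return np.tanh(x)                # in [-1, 1]; scaled to joint limits downstream

obs_dim = DEPTH_SHAPE[0] * DEPTH_SHAPE[1] + PROPRIO_DIM + CMD_DIM
params = mlp_params([obs_dim, 256, 128, N_JOINTS])

action = policy(params,
                depth=rng.random(DEPTH_SHAPE),
                proprio=rng.standard_normal(PROPRIO_DIM),
                command=np.eye(CMD_DIM)[0])
print(action.shape)  # (29,)
```

In deployment, this forward pass runs in the robot's control loop at a fixed frequency, with the output fed to low-level joint PD controllers; the "single-stage, no external state estimation" point from Hiking in the Wild corresponds to everything the policy needs being in its raw inputs.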