Job Description
Qualifications
• PhD in a relevant STEM field, or a Master's degree with equivalent industry experience in robotics, robot learning, or embodied AI.
• Proven experience building and deploying machine learning models on robotic systems, including training, evaluation, and execution on real hardware or in simulation.
• Deep understanding of modern AI architectures, including Transformers, diffusion models, vision-language and vision-language-action models (VLMs/VLAs), and CNNs, with strong experience training models at scale.
• Strong PyTorch implementation skills, including authoring custom modules, batching, debugging, and performance optimization.
• Practical experience with ROS/ROS 2 and with integrating learned policies into manipulation or motion-control workflows.
• Demonstrated impact via robot learning publications, open-source contributions, or production robotics deployments.
Roles & Responsibilities
• Design and implement advanced robot learning architectures (e.g., diffusion policies, Action Chunking with Transformers (ACT), VLM/VLA-guided agents, imitation learning) to support dexterous manipulation, path planning, and autonomous task sequencing.
• Develop end-to-end policy training pipelines that integrate multimodal sensory data (RGB, depth, proprioception, force/torque, LiDAR, tactile) with control outputs.
• Build policy-inference and closed-loop control systems that connect perception, planning, and execution on physical robotic platforms.
• Apply and extend large-scale architectures (LLMs, VLMs/VLAs, diffusion models) to embodied tasks, grounding, and sim-to-real adaptation.
• Collaborate with cross-functional teams to deploy robot policies on hardware, ensuring robustness, repeatability, and safety.
• Lead the data strategy for manipulation policies, spanning demonstration collection, teleoperation, simulation pipelines, and evaluation frameworks.
• Stay current with embodied AI research and share insights internally through discussions, mentorship, and technical presentations.