NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It's a unique legacy of innovation that's fueled by great technology and amazing people. Today, we're tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what's never been done before takes vision, innovation, and the world's best talent. As an NVIDIAN, you'll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.
NVIDIA is building the next generation of AI systems that can perceive, reason about, and generate dynamic worlds. Our team advances world foundation models to enable high-fidelity, temporally stable video and world generation for Physical AI, simulation, and interactive experiences. This role operates at the applied-research boundary: developing and validating model improvements, then hardening them into production-grade checkpoints and recipes that teams can reliably build on. The technical focus is human appearance, motion, and interaction, where identity drift, temporal artifacts, and inconsistent contact dynamics often limit real-world usability. Progress is measured through disciplined experimentation, robust diagnostics, and repeatable side-by-side evaluation. Work is delivered in close partnership with data, platform, and product engineering to ensure improvements translate into real-time performance and user-visible quality.
What you'll be doing:
- Research, implement, and validate model architecture and algorithm changes that improve video generation fidelity, with emphasis on human-centric quality (identity preservation, anatomy, motion coherence, and interaction realism).
- Explore and prototype improvements across spatial multimodal modeling, modality alignment, flow-based or diffusion-based video generation, and neural rendering-inspired representations to improve controllability and long-horizon consistency.
- Improve training and inference efficiency through architectural and post-training techniques (compute/memory optimizations, distillation, pruning, and compression).
- Define model training objectives that improve sim-to-real and real-to-sim generalization, especially for human motion, contact, and interaction dynamics across real-world and synthetic/simulation data.
- Develop detailed, domain-specific benchmarks for evaluating world foundation models, especially generative and understanding models that reason about video, simulation, and physical environments.
- Translate research results into robust implementations such as training code, production-grade checkpoints, model integrations, and demos that clearly showcase capability gains across teams.
What we need to see:
- PhD in Computer Science, Graphics, Computer Engineering, or a closely related field (or equivalent experience).
- 8+ years of applied research and/or industry experience in vision, graphics, or adjacent ML domains (or equivalent experience).
- 4+ years of direct experience designing, training, and evaluating generative models for image/video/audio, with strong fundamentals in modern deep learning.
- Hands-on experience improving generative models with a focus on perceptual quality and temporal stability, especially for generating humans.
- Advanced proficiency in Python, PyTorch, C++, and CUDA with strong research-engineering practices (reproducibility, testing, profiling, experiment tracking).
- Experience training and debugging large models in multi-GPU and/or multi-node distributed training workflows.
- Practical knowledge of inference/runtime bottlenecks and optimization techniques (e.g., batching, parallelism strategies, low-precision/quantization awareness, attention/KV-cache efficiency).
- Strong "eye for quality" and interest in diagnosing visual artifacts (sharpness, texture detail, temporal stability, etc.) using perceptual metrics, human preference signals, or learned evaluators.
Ways to stand out from the crowd:
- Proven track record in related research, including publications in top conferences (e.g., NeurIPS, CVPR, ICLR), with clear evidence of impact on model quality or robustness.
- Exposure to closed-loop training setups (e.g., reinforcement learning or preference-based optimization) for improving controllability, stability, and interaction quality in generated sequences.
Widely considered to be one of the technology world's most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. As you plan your future, see what we can offer you and your family.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5.
You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until January 28, 2026.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.