Overview
Remote
On Site
USD 184,000.00 - 287,500.00 per year
Full Time
Skills
Generative Artificial Intelligence (AI)
Benchmarking
Build Tools
Quality Assurance
Collaboration
User Experience
Usability
Optimization
Publications
Computer Science
Software Engineering
Software Development
Python
C++
Deep Learning
Algorithms
Parallel Computing
High Performance Computing
Communication
GPU
Problem Solving
Debugging
Machine Learning (ML)
Torch
Cloud Computing
Amazon Web Services
Google Cloud Platform
Microsoft Azure
Docker
Orchestration
DevOps
Continuous Integration
Continuous Delivery
Open Source
GitHub
Research and Development
Performance Tuning
Leadership
Artificial Intelligence
Research
PyTorch
CUDA
Training
Kubernetes
Job Details
We are seeking highly skilled and motivated software engineers to join our vLLM & MLPerf team. You will define and build benchmarks for MLPerf Inference, the industry-leading benchmark suite for inference system-level performance, as well as contribute to vLLM and optimize its performance to the extreme for those benchmarks on NVIDIA's latest GPUs.
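For context, here is a minimal sketch of vLLM's offline-inference Python API, the kind of code path this role centers on; the model id and sampling settings below are placeholders chosen for the example, not part of the posting:

```python
# Minimal vLLM offline-inference sketch (illustrative only; the model id and
# sampling parameters are placeholders).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")                      # any supported model id
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

outputs = llm.generate(["The key to fast LLM inference is"], params)
for out in outputs:
    print(out.prompt, "->", out.outputs[0].text)
```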
What you'll be doing:
- Design and implement highly efficient inference systems for large-scale deployments of generative AI models.
- Define inference benchmarking methodologies and build tools that will be embraced across the industry.
- Develop, profile, debug, and optimize low-level system components and algorithms to improve throughput and reduce latency for the MLPerf Inference benchmarks on the newest NVIDIA GPUs (a rough timing sketch follows this list).
- Productionize inference systems without compromising software quality.
- Collaborate with researchers and engineers to bring trending model architectures, inference techniques, and quantization methods to production.
- Contribute to the design of APIs, abstractions, and UX that make it easier to scale model deployment while maintaining usability and flexibility.
- Participate in design discussions, code reviews, and technical planning to ensure the product aligns with the business goals.
- Stay up to date with the latest advances in inference system-level optimization, come up with novel research ideas, and translate them into practical, robust systems. Exploration and academic publication are encouraged.
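As a rough illustration of the throughput and latency measurements mentioned above, here is a simple offline timing sketch around vLLM. It is not the MLPerf Inference methodology (which drives the system through its LoadGen harness and formally defined scenarios); the model, prompt set, and batch size are placeholders:

```python
# Rough offline throughput/latency measurement around vLLM (illustrative only;
# MLPerf Inference instead uses the LoadGen harness and defined scenarios).
import time
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")                 # placeholder model id
params = SamplingParams(max_tokens=128)
prompts = ["Summarize the history of GPUs."] * 64    # small synthetic batch

start = time.perf_counter()
outputs = llm.generate(prompts, params)
elapsed = time.perf_counter() - start

gen_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"end-to-end time: {elapsed:.2f} s")
print(f"throughput: {gen_tokens / elapsed:.1f} generated tokens/s")
```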
What we need to see:
- Bachelor's, Master's, or PhD degree in Computer Science/Engineering, Software Engineering, a related field, or equivalent experience.
- 5+ years of experience in software development, preferably with Python and C++.
- Deep understanding of deep learning algorithms, distributed systems, parallel computing, and high-performance computing principles.
- Hands-on experience with ML frameworks (e.g., PyTorch) and inference engines (e.g., vLLM and SGLang).
- Experience optimizing compute, memory, and communication performance for the deployments of large models.
- Familiarity with GPU programming, CUDA, NCCL, and performance profiling tools (a minimal profiling sketch follows this list).
- Ability to work closely with both research and engineering teams, translating pioneering research ideas into concrete designs and robust code, and contributing novel research ideas of your own.
- Excellent problem-solving skills, with the ability to debug sophisticated systems.
- A passion for building high-impact software that pushes the boundaries of what's possible with large-scale AI.
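For the profiling item above, one familiar (and purely illustrative) approach is PyTorch's built-in profiler; NVIDIA's Nsight tools are the usual companion for deeper kernel-level analysis. The layer shape and iteration count below are arbitrary:

```python
# Minimal GPU profiling sketch with torch.profiler (requires a CUDA GPU).
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(4096, 4096).cuda().half()
x = torch.randn(8, 4096, device="cuda", dtype=torch.float16)

with torch.no_grad(), profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    for _ in range(10):
        model(x)

# Print the most expensive GPU operations.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```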
Ways to stand out from the crowd:
- Background in building and optimizing LLM inference engines such as vLLM and SGLang.
- Experience building ML compilers such as Triton or Torch Dynamo/Inductor (a small torch.compile sketch follows this list).
- Experience working with cloud platforms (e.g., AWS, Google Cloud Platform, or Azure), containerization tools (e.g., Docker), and orchestration infrastructures (e.g., Kubernetes, Slurm).
- Exposure to DevOps practices, CI/CD pipelines, and infrastructure as code.
- Contributions to open-source projects (please provide a list of the GitHub PRs you submitted).
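For readers less familiar with the Dynamo/Inductor stack named above, here is a tiny user-side sketch; building or extending these compilers is the actual skill the bullet asks for, and the function and tensor shapes are placeholders:

```python
# torch.compile: Dynamo captures the function as a graph, Inductor lowers it
# to fused GPU kernels. Illustrative user-side usage only.
import torch

def mlp(x, w1, w2):
    return torch.relu(x @ w1) @ w2

compiled_mlp = torch.compile(mlp)

x = torch.randn(8, 1024, device="cuda")
w1 = torch.randn(1024, 4096, device="cuda")
w2 = torch.randn(4096, 1024, device="cuda")
out = compiled_mlp(x, w1, w2)   # first call compiles; later calls reuse the kernels
print(out.shape)
```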
At NVIDIA, we believe artificial intelligence (AI) will fundamentally transform how people live and work. Our mission is to advance AI research and development to create groundbreaking technologies that enable anyone to harness the power of AI and benefit from its potential. Our team consists of experts in AI, systems and performance optimization. Our leadership includes world-renowned experts in AI systems who have received multiple academic and industry research awards.
If you've hacked the inner workings of PyTorch, or if you've written many CUDA/HIP kernels, or if you've developed and optimized inference services or training workloads, or if you've built and maintained large-scale Kubernetes clusters, or if you simply enjoy solving hard problems, feel free to drop an application!
#LI-Hybrid
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5.
You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until October 12, 2025.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.