Overview
On Site
USD 184,000.00 per year
Full Time
Skills
Research
GPU
Cost Reduction
Build Tools
Productivity
Robotics
Computer Hardware
Computer Science
Optimization
Debugging
Training
Scripting
Python
Bash
Cloud Computing
Amazon Web Services
Google Cloud Platform
Microsoft Azure
Parallel Computing
Communication
Collaboration
CUDA
Benchmarking
Machine Learning (ML)
Algorithms
InfiniBand
Remote Direct Memory Access
Data Storage
IBM GPFS
Artificial Intelligence
HPC
Deep Learning
PyTorch
TensorFlow
Job Details
We are seeking a Senior AI/ML Performance and Efficiency Engineer, GPU Clusters to join NVIDIA's AI Efficiency efforts. In this role, you will play a pivotal part in making our researchers more productive by driving improvements across the entire stack. Your primary focus will be working closely with customers to identify and resolve infrastructure and application bottlenecks, enabling groundbreaking AI and ML research on GPU clusters. Together, we can build powerful, efficient, and scalable solutions as we shape the future of AI/ML technology!
What you will be doing:
- Collaborate closely with our AI/ML researchers to make their ML models more efficient, leading to significant productivity improvements and cost savings
- Build tools and frameworks, and apply ML techniques, to detect and analyze efficiency bottlenecks and deliver productivity improvements for our researchers
- Work with researchers on a variety of innovative ML workloads across robotics, autonomous vehicles, LLMs, video, and more
- Collaborate across engineering organizations to improve the efficiency of our hardware, software, and infrastructure usage
- Proactively monitor fleet-wide utilization, analyze known inefficiency patterns or discover new ones, and deliver scalable solutions to address them
- Keep up to date with the latest developments in AI/ML technologies, frameworks, and best practices, and advocate for their adoption within the organization
What we need to see:
- BS or similar background in Computer Science or related area (or equivalent experience)
- 8+ years of experience designing and operating large-scale compute infrastructure
- Strong understanding of modern ML techniques and tools
- Experience investigating and resolving training and inference performance issues end to end
- Debugging and optimization experience with Nsight Systems and Nsight Compute
- Experience with debugging large-scale distributed training using NCCL
- Proficiency in programming and scripting languages such as Python, Go, and Bash; familiarity with cloud computing platforms (e.g., AWS, Google Cloud Platform, Azure); and experience with parallel computing frameworks and paradigms
- Dedication to continuous learning and staying current with new technologies and methods in the AI/ML infrastructure space
- Excellent communication and collaboration skills, with the ability to work effectively with teams and individuals from diverse backgrounds
Ways to stand out from the crowd:
- Background with NVIDIA GPUs, CUDA Programming, NCCL and MLPerf benchmarking
- Experience with Machine Learning and Deep Learning concepts, algorithms and models
- Familiarity with InfiniBand, IBOP, and RDMA
- Understanding of fast, distributed storage systems like Lustre and GPFS for AI/HPC workloads
- Familiarity with deep learning frameworks like PyTorch and TensorFlow
NVIDIA offers competitive salaries and a comprehensive benefits package. Our engineering teams are growing rapidly as the company expands. If you're a passionate and independent engineer with a love for technology, we want to hear from you.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5.
You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until November 8, 2025.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.