Overview
Remote
On Site
USD 120,000.00 per year
Full Time
Skills
Computer Graphics
High Performance Computing
Generative Artificial Intelligence (AI)
Innovation
Collaboration
Problem Solving
Conflict Resolution
Linux Administration
Computer Networking
Storage
Research
Incident Management
Cloud Computing
Performance Analysis
GPU
SLA
Root Cause Analysis
Corrective And Preventive Action
Computer Science
Electrical Engineering
Management
LSF
CentOS
Red Hat Enterprise Linux
Ubuntu
Linux
Configuration Management
Ansible
Puppet
Docker
Python
Bash
Scripting
Emerging Technologies
CUDA
Benchmarking
Machine Learning (ML)
Algorithms
PyTorch
TensorFlow
InfiniBand
Remote Direct Memory Access
Data Storage
IBM GPFS
Artificial Intelligence
HPC
Workflow
MPI
Job Details
NVIDIA is a pioneer in accelerated computing, known for inventing the GPU and driving breakthroughs in gaming, computer graphics, high-performance computing, and artificial intelligence. Our technology powers everything from generative AI to autonomous systems, and we continue to shape the future of computing through innovation and collaboration. Within this mission, our team, Managed AI Superclusters (MARS), builds and scales the infrastructure, platforms, and tools that enable researchers and engineers to develop the next generation of AI/ML systems. By joining us, you'll help design solutions that power some of the world's most advanced computing workloads.
NVIDIA is looking for an AI/ML HPC Cluster Engineer to join our MARS team. You will provide technical engagement and problem solving in the management of large-scale HPC systems, including the deployment of compute, networking, and storage. You will work with a team of passionate and skilled engineers across NVIDIA who are continuously working to provide better tools to build and manage this infrastructure. The ideal candidate is strong in Linux administration, networking, storage, and job schedulers, drives improvements, and understands researcher computing needs.
What you'll be doing:
- Support day-to-day operations of production on-premises and multi-cloud AI/HPC clusters, ensuring system health, user satisfaction, and efficient resource utilization.
- Directly administer internal research clusters, conducting upgrades, incident response, and reliability improvements.
- Develop and improve our ecosystem around GPU-accelerated computing, including building scalable automation solutions.
- Maintain heterogeneous AI/ML clusters on-premises and in the cloud.
- Support our researchers in running their workloads, including performance analysis and optimization.
- Analyze and optimize cluster efficiency, job fragmentation, and GPU waste to meet internal SLA targets (a rough illustration of this kind of analysis follows this list).
- Support root cause analysis and suggest corrective and preventive actions. Proactively find and fix issues before they impact users.
- Triage and support postmortems for reliability incidents affecting users or infrastructure.
- Participate in a shared on-call rotation supported by strong automation, clear paths for responding to critical issues, and well-defined incident workflows.
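The GPU-waste analysis mentioned above is the kind of task typically driven by scheduler data. As a rough, hypothetical sketch only (not NVIDIA's actual tooling), and assuming a Slurm cluster whose nodes expose a `gpu` GRES, a small Python script could compare configured versus in-use GPUs per node:

```python
#!/usr/bin/env python3
"""Hypothetical sketch: estimate idle GPUs per node on a Slurm cluster by
comparing configured GRES against GRES currently in use. Assumes `sinfo`
is on PATH and nodes are configured with a gpu GRES."""
import subprocess


def gpu_count(gres_field: str) -> int:
    # GRES strings look like "gpu:8" or "gpu:a100:8(S:0-1)"; sum the gpu counts.
    total = 0
    for entry in gres_field.split(","):
        if entry.startswith("gpu"):
            count = entry.split("(")[0].split(":")[-1]
            if count.isdigit():
                total += int(count)
    return total


def idle_gpus_by_node() -> dict:
    # Fields: node name, configured GRES, GRES currently in use.
    out = subprocess.run(
        ["sinfo", "--Node", "--noheader",
         "--Format=NodeList:40,Gres:80,GresUsed:80"],
        capture_output=True, text=True, check=True,
    ).stdout
    idle = {}
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 3:
            node, configured, used = parts[0], parts[1], parts[2]
            idle[node] = gpu_count(configured) - gpu_count(used)
    return idle


if __name__ == "__main__":
    for node, free in sorted(idle_gpus_by_node().items()):
        print(f"{node}: {free} idle GPU(s)")
```

In practice a signal like this would feed dashboards or alerting rather than a one-off script, but it conveys the flavor of the work.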
What we need to see:
- Bachelor's degree in Computer Science, Electrical Engineering, or a related field, or equivalent experience
- Minimum 2 years of experience administering multi-node compute infrastructure
- Background in managing AI/HPC job schedulers such as Slurm, Kubernetes, PBS, RTDA, BCM (formerly known as Bright), or LSF
- Proficient in administering CentOS/RHEL and/or Ubuntu Linux distributions
- Proven understanding of cluster configuration management tools (Ansible, Puppet, Salt, etc.), container technologies (Docker, Singularity, Podman, Shifter, Charliecloud), Python programming, and Bash scripting (an illustrative sketch follows this list).
- Passion for continual learning and staying ahead of emerging technologies and effective approaches in the HPC and AI/ML infrastructure fields.
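As a hedged illustration of the Linux administration and Python scripting called out above (a hypothetical sketch, not part of the role's actual tooling), a minimal node "preflight" check might verify that the GPU driver and a container runtime respond before a node is returned to the scheduler. The `nvidia-smi` and `docker` commands are assumed to be installed:

```python
#!/usr/bin/env python3
"""Hypothetical sketch: minimal node preflight check. Each check is a
command that should exit 0 on a healthy node."""
import shutil
import subprocess
import sys

CHECKS = {
    "gpu_driver": ["nvidia-smi", "--query-gpu=index,name", "--format=csv,noheader"],
    "container_runtime": ["docker", "info", "--format", "{{.ServerVersion}}"],
}


def run_check(cmd):
    # Report a failure if the binary is missing or the command exits non-zero.
    if shutil.which(cmd[0]) is None:
        return False, f"{cmd[0]} not found on PATH"
    proc = subprocess.run(cmd, capture_output=True, text=True)
    detail = (proc.stdout or proc.stderr).strip().splitlines()
    return proc.returncode == 0, detail[0] if detail else ""


def main():
    failures = 0
    for name, cmd in CHECKS.items():
        ok, detail = run_check(cmd)
        print(f"[{'OK' if ok else 'FAIL'}] {name}: {detail}")
        failures += 0 if ok else 1
    sys.exit(1 if failures else 0)


if __name__ == "__main__":
    main()
```

A real deployment would hang such checks off a configuration management or automation framework; the sketch only shows the scripting style involved.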
Ways to stand out from the crowd:
- Background with NVIDIA GPUs, CUDA programming, NCCL, and MLPerf benchmarking (a minimal sanity-check sketch follows this list)
- Experience with AI/ML concepts, algorithms, models, and frameworks (PyTorch, TensorFlow)
- Experience with InfiniBand, including IPoIB and RDMA
- Understanding of fast, distributed storage systems such as Lustre and GPFS for AI/HPC workloads
- Applied knowledge in AI/HPC workflows that involve MPI
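To illustrate the NCCL, PyTorch, and InfiniBand/RDMA items above, here is a minimal sanity-check sketch, assuming PyTorch built with CUDA and NCCL support is installed (it is not an official NVIDIA or MLPerf benchmark). A check like this is commonly used to verify GPU-to-GPU communication before handing nodes back to researchers, launched for example with `torchrun --nproc_per_node=<gpus> allreduce_check.py`:

```python
#!/usr/bin/env python3
"""Hypothetical sketch: a tiny NCCL all-reduce sanity check for verifying
GPU-to-GPU communication (e.g. over NVLink or InfiniBand/RDMA). Launch with
torchrun so the rank/world-size environment variables are set."""
import os

import torch
import torch.distributed as dist


def main():
    # torchrun exports RANK, WORLD_SIZE, LOCAL_RANK, MASTER_ADDR/PORT.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    world = dist.get_world_size()
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # Each rank contributes its own rank; after all_reduce(SUM) every rank
    # should hold 0 + 1 + ... + (world - 1).
    x = torch.full((1,), float(rank), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    expected = world * (world - 1) / 2
    print(f"rank {rank}: got {x.item()}, expected {expected}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```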
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 120,000 USD - 189,750 USD.
You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until January 6, 2026.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.