Senior AI-HPC Storage Engineer

Overview

On Site
USD 184,000.00 per year
Full Time

Skills

Computer Graphics
Parallel Computing
GPU
Leadership
High Performance Computing
IaaS
Modeling
Research
Management
Collaboration
Testing
Performance Analysis
Root Cause Analysis
Corrective And Preventive Action
Computer Science
Electrical Engineering
Performance Tuning
Distributed File System
IBM GPFS
CentOS
Red Hat Enterprise Linux
Ubuntu
Linux
Python
Bash
Scripting
Storage
Cloud Computing
Amazon Web Services
Microsoft Azure
Google Cloud
Google Cloud Platform
LSF
Docker
Workflow
MPI
CUDA
Benchmarking
Machine Learning (ML)
Algorithms
InfiniBand
Remote Direct Memory Access
Artificial Intelligence
HPC
Computer Networking
Deep Learning
PyTorch
TensorFlow

Job Details

NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI - the next era of computing. NVIDIA is a "learning machine" that constantly evolves by adapting to new opportunities that are hard to solve, that only we can address, and that matter to the world. This is our life's work, to amplify human creativity and intelligence. Make the choice to join us today!

As a member of the GPU AI/HPC Infrastructure team, you will provide leadership in the design and implementation of groundbreaking, fast storage solutions that enable demanding deep learning, high performance computing, and computationally intensive workloads. We seek an expert to identify architectural changes across file, block, and object storage to meet the requirements of an expanding cloud infrastructure. You will help us tackle the strategic challenges we encounter in next-gen storage design for large-scale, high-performance workloads, evolving our private/public cloud strategy, and capacity modeling and growth planning across our global computing environment.

What you'll be doing:
  • Research and implement distributed storage services.
  • Design and implement an on-prem AI/HPC infrastructure, supplemented with cloud computing, to support the growing needs of NVIDIA.
  • Design and implement scalable and efficient next-gen storage solutions tailored for data-intensive applications, optimizing performance and cost-effectiveness.
  • Develop tooling to automate management of large-scale infrastructure environments, to automate operational monitoring and alerting, and to enable self-service consumption of resources.
  • Document general procedures and practices related to distributed file systems, and perform technology evaluations.
  • Collaborate across teams to better understand developers' workflows and gather their infrastructure requirements.
  • Influence and guide methodologies for building, testing, and deploying applications to ensure optimal performance and resource utilization.
  • Support our researchers in running their workflows on our clusters, including performance analysis and optimization of deep learning workflows.
  • Perform root cause analysis and suggest corrective actions for problems at both large and small scales.

What we need to see:
  • Bachelor's degree in Computer Science, Electrical Engineering, or a related field, or equivalent experience.
  • 8+ years of experience designing and operating large scale storage infrastructure.
  • Experience analyzing and tuning performance for a variety of AI/HPC workloads.
  • Experience with one or more parallel or distributed filesystems such as Lustre or GPFS is a must.
  • Proficiency with CentOS/RHEL and/or Ubuntu Linux distributions, including Python programming and Bash scripting.
  • Experience with the architecture, design, and operation of storage solutions on any of the leading cloud environments (AWS, Azure, or Google Cloud Platform).
  • Experience with AI/HPC cluster job schedulers such as SLURM or LSF.
  • In-depth understanding of container technologies such as Docker and Enroot.
  • Experience with AI/HPC workflows that use MPI.

Ways to stand out from the crowd:
  • Experience with NVIDIA GPUs, CUDA programming, NCCL, and MLPerf benchmarking.
  • Experience with Machine Learning and Deep Learning concepts, algorithms, and models.
  • Familiarity with InfiniBand, including IPoIB and RDMA.
  • Background in Software-Defined Networking and AI/HPC cluster networking.
  • Familiarity with deep learning frameworks such as PyTorch and TensorFlow.

NVIDIA offers highly competitive salaries and a comprehensive benefits package. We have some of the most resourceful and talented people in the world working for us and, due to unprecedented growth, our extraordinary engineering teams are growing fast. If you're a creative and autonomous engineer with real passion for technology, we want to hear from you.

The base salary range is 184,000 USD - 356,500 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.