Senior Deep Learning Software Engineer, Inference and Model Optimization

Overview

Remote
On Site
USD 180,000.00 per year
Full Time

Skills

Large Language Models (LLMs)
Software architecture
Generative Artificial Intelligence (AI)
Torch
Performance tuning
Caching
Collaboration
Computer hardware
Code optimization
Leadership
User experience
Computer science
Mathematics
Research
Software design
Performance analysis
Quality assurance
Python
Algorithms
Communication
PyTorch
JAX
Debugging
Deep learning
Writing
GPU
Machine Learning (ML)
CUDA
Artificial intelligence
Optimization
Real-time

Job Details

NVIDIA is at the forefront of the generative AI revolution! The Algorithmic Model Optimization Team focuses on optimizing generative AI models such as large language models (LLMs) and diffusion models for maximal inference efficiency, using techniques ranging from neural architecture search and pruning to sparsity, quantization, and automated deployment strategies. Our work includes conducting applied research to improve model efficiency as well as developing an innovative software platform (TRT Model Optimizer). Our software is used both internally across NVIDIA and externally by research and engineering teams developing best-in-class AI models.

We are now looking for a Senior Deep Learning Software Engineer to develop and scale up our automated inference and deployment solution. As part of the team, you will be instrumental in pushing the limits of inference efficiency and large-scale, automated deployment. Your work will span the machine learning stack, ranging from working in high-level frameworks like PyTorch and HuggingFace to developing and improving high-performance kernel implementations in CUDA, TRT-LLM, and Triton. This is an exceptional opportunity for passionate software engineers who straddle the boundary between research and engineering and bring a strong background in both machine learning fundamentals and software architecture and engineering.

What you'll be doing:
  • Train, develop, and deploy state-of-the-art generative AI models like LLMs and diffusion models using NVIDIA's AI software stack.
  • Leverage and build upon the torch 2.0 ecosystem (TorchDynamo, torch.export, torch.compile, etc.) to analyze and extract a standardized graph representation from arbitrary torch models for our automated deployment solution (a minimal sketch follows this list).
  • Develop high-performance optimization techniques for inference, such as automated model sharding techniques (e.g. tensor parallelism, sequence parallelism), efficient attention kernels with KV caching (see the decode-step sketch below), and more.
  • Collaborate with teams across NVIDIA to use performant kernel implementations within our automated deployment solution.
  • Analyze and profile GPU kernel-level performance to identify hardware and software optimization opportunities (see the profiling sketch below).
  • Continuously improve inference performance so that NVIDIA's inference software solutions (TRT, TRT-LLM, TRT Model Optimizer) maintain and extend their market leadership.
  • Play a pivotal role in architecting and designing a modular, scalable software platform that delivers an excellent user experience, broad model support, and a rich set of optimization techniques to drive adoption.
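
For illustration, here is a minimal sketch of the graph extraction described above, using the public torch.export API on a toy model. The module and input shapes are placeholders for illustration, not team code:

    import torch
    from torch.export import export

    class TinyModel(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(16, 16)

        def forward(self, x):
            return torch.relu(self.linear(x))

    # torch.export traces the module into a standardized, backend-agnostic
    # FX graph (an ExportedProgram) that downstream tooling can consume.
    example_inputs = (torch.randn(2, 16),)
    program = export(TinyModel(), example_inputs)
    print(program.graph_module.graph)  # inspect the captured ops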
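
Similarly, a bare-bones view of decode-time attention with a KV cache, built on PyTorch's scaled_dot_product_attention. The dict-based cache layout is a simplifying assumption for illustration, not the production implementation:

    import torch
    import torch.nn.functional as F

    def decode_step(q_new, k_new, v_new, cache):
        # Append this step's keys/values to the cache, then let the new
        # query attend over the full history. Shapes: (batch, heads, seq, dim).
        if cache["k"] is None:
            cache["k"], cache["v"] = k_new, v_new
        else:
            cache["k"] = torch.cat([cache["k"], k_new], dim=-2)
            cache["v"] = torch.cat([cache["v"], v_new], dim=-2)
        # No causal mask needed: the single new query may see all past tokens.
        return F.scaled_dot_product_attention(q_new, cache["k"], cache["v"])

    cache = {"k": None, "v": None}
    for _ in range(4):  # toy decode loop, one token per step
        q = k = v = torch.randn(1, 8, 1, 64)
        out = decode_step(q, k, v, cache)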
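
And a standard starting point for the kernel-level profiling mentioned above is torch.profiler; the model here is a stand-in:

    import torch
    from torch.profiler import profile, ProfilerActivity

    model = torch.nn.Linear(1024, 1024).cuda()
    x = torch.randn(64, 1024, device="cuda")

    # Record CPU-side ops and the CUDA kernels they launch.
    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
        model(x)
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))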

What we need to see:
  • Master's degree, PhD, or equivalent experience in Computer Science, AI, Applied Math, or a related field.
  • 5+ years of relevant work or research experience in Deep Learning.
  • Excellent software design skills, including debugging, performance analysis, and test design.
  • Strong proficiency in Python, PyTorch, and related ML tools (e.g. HuggingFace).
  • Strong algorithms and programming fundamentals.
  • Good written and verbal communication skills and the ability to work independently and collaboratively in a fast-paced environment.

Ways to stand out from the crowd:
  • Contributions to PyTorch, JAX, or other Machine Learning Frameworks.
  • Knowledge of GPU architecture and the compilation stack, and the ability to understand and debug end-to-end performance.
  • Familiarity with NVIDIA's deep learning SDKs such as TensorRT.
  • Prior experience writing high-performance GPU kernels for machine learning workloads in CUDA, CUTLASS, or Triton (a minimal Triton sketch follows this list).
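
For context, the canonical entry point for Triton is an elementwise kernel. This is the standard tutorial-style vector add, a far simpler case than the fused kernels this role involves:

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
        # Each program instance handles one BLOCK-sized tile of the vectors.
        offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
        mask = offs < n
        x = tl.load(x_ptr + offs, mask=mask)
        y = tl.load(y_ptr + offs, mask=mask)
        tl.store(out_ptr + offs, x + y, mask=mask)

    x = torch.randn(4096, device="cuda")
    y = torch.randn(4096, device="cuda")
    out = torch.empty_like(x)
    add_kernel[(triton.cdiv(x.numel(), 1024),)](x, y, out, x.numel(), BLOCK=1024)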

Increasingly known as "the AI computing company" and widely considered to be one of the technology world's most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. Are you creative and motivated, and do you love a challenge? If so, we want to hear from you! Come join our model optimization group, where you can help build real-time, cost-effective computing platforms driving our success in this exciting and rapidly growing field.

The base salary range is 180,000 USD - 339,250 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.