Machine Learning Engineer (with CUDA)

Overview

Remote
$85 - $95
Accepts corp to corp applications
Contract - W2
Contract - Independent
Contract - 12 Month(s)
10% Travel

Skills

Python
Machine Learning
CUDA
ML
GPU
PTX
SASS
warps
cooperative groups
Tensor Cores
CUDA-GDB
Nsight Systems
Nsight Compute
Triton
CUTLASS
CUB
Thrust
cuDNN
cuBLAS
CUDA graph launch
tensor core arithmetic
warp-level synchronization
InfiniBand
RoCE
GPUDirect
PXN
rail optimization
NVLink
NCCL
MPI
Research
Presentations

Job Details

Machine Learning Engineer with CUDA Python
Remote with Travel
Contract position
 
About this position:
  • We need a very strong CUDA Python ML engineer. Candidates can be based anywhere in the US but must be willing to travel 30% of the time. The role is highly client-facing, so professionalism and presentation skills are essential.
  • Your role is to optimize the performance of our models across both training and inference. We care about efficient large-scale training, low-latency inference in real-time systems, and high-throughput inference in research. Part of this is improving straightforward CUDA, but the interesting work requires a whole-systems approach spanning storage, networking, and host- and GPU-level considerations. Zooming in, we also want to be sure the platform makes sense at the lowest level: is all that throughput actually goodput? Does loading that vector from L2 cache really take that long?

Requirements:

  • An understanding of modern ML techniques and toolsets
  • The experience and systems knowledge required to debug a training run's performance end-to-end
  • Low-level GPU knowledge of PTX, SASS, warps, cooperative groups, Tensor Cores, and the memory hierarchy
  • Debugging and optimization experience using tools like CUDA-GDB, Nsight Systems, and Nsight Compute
  • Library knowledge of Triton, CUTLASS, CUB, Thrust, cuDNN, and cuBLAS
  • Intuition about the latency and throughput characteristics of CUDA graph launch, tensor core arithmetic, warp-level synchronization, and asynchronous memory loads
  • Background in InfiniBand, RoCE, GPUDirect, PXN, rail optimization, and NVLink, and how to use these networking technologies to link GPU clusters
  • An understanding of the collective algorithms supporting distributed GPU training in NCCL or MPI
  • An inventive approach and the willingness to ask hard questions about whether we're taking the right approaches and using the right tools