Fellow GPU Performance Optimization Engineer

San Jose, CA, US • Posted 1 day ago • Updated 31 minutes ago
Full Time
On-site
USD 220,500.00 per year
Job Details

Skills

  • Data Centers
  • Embedded Systems
  • Innovation
  • Management
  • Generative Artificial Intelligence (AI)
  • Performance Analysis
  • Technical Direction
  • Network
  • Scalability
  • Benchmarking
  • Modeling
  • Collaboration
  • Computer Hardware
  • Open Source
  • GPU
  • PCI Express
  • Remote Direct Memory Access
  • Training
  • Log Management
  • Communication
  • Machine Learning (ML)
  • PyTorch
  • JAX
  • TensorFlow
  • Performance Tuning
  • Python
  • C++
  • CUDA
  • Debugging
  • Software Stacks
  • Optimization
  • IT Management
  • Computer Science
  • Computer Engineering
  • Machine Vision
  • Artificial Intelligence

Summary

WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity, and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:

We are seeking a Fellow GPU Performance Optimization Engineer to join our Models and Applications team. This role focuses on maximizing performance and efficiency of large-scale AI training workloads on AMD GPU platforms. You will drive innovations across the full software-hardware stack, optimizing distributed training at scale and pushing the limits of system throughput, scalability, and utilization for generative AI workloads.

This position requires deep expertise in GPU performance analysis, distributed systems, and ML workloads, along with the ability to influence architecture, software ecosystems, and best practices across the organization.

THE PERSON:

The ideal candidate is a recognized technical leader with deep expertise in GPU performance optimization, large-scale distributed training, and system-level bottleneck analysis. You have a strong understanding of GPU architecture, interconnects, memory hierarchies, and communication patterns, and can translate this knowledge into measurable improvements in training efficiency at scale.

You are comfortable operating across layers, from kernels and runtimes to frameworks and distributed strategies, and have a track record of driving impactful optimizations and influencing technical direction.

KEY RESPONSIBILITIES:

- Lead performance optimization of large-scale AI training workloads on AMD GPU platforms across single-node and multi-node environments.

- Identify and eliminate system bottlenecks across compute, memory, and communication (e.g., kernel efficiency, memory bandwidth, network utilization).

- Optimize distributed training strategies (data, tensor, and pipeline parallelism, ZeRO, etc.) for scalability and efficiency on AMD hardware.

- Drive cross-stack optimizations spanning kernels, compilers, runtimes, communication libraries, and ML frameworks.

- Develop and apply advanced profiling, benchmarking, and performance modeling methodologies.

- Collaborate with hardware, compiler, and framework teams to influence next-generation GPU architecture and software stack design.

- Contribute to and lead open-source efforts to improve ecosystem performance on AMD platforms.

- Define best practices and guide teams on performance tuning for large-scale training workloads.

- Stay at the forefront of advancements in large-scale training systems and performance optimization techniques.

PREFERRED EXPERIENCE:

- Deep expertise in GPU architecture and performance characteristics (compute units, memory hierarchy, interconnects such as PCIe/Infinity Fabric/RDMA).

- Strong experience with performance profiling tools (e.g., ROCm tools, Nsight-like systems, custom profilers) and bottleneck analysis.

- Proven experience optimizing large-scale distributed training workloads across thousands of GPUs.

- Experience with distributed training frameworks such as Megatron-LM, Torchtitan, MaxText, or equivalent.

- Strong understanding of communication libraries and patterns (e.g., NCCL/RCCL, collective ops, overlap of compute and communication).

- Expertise in ML frameworks (PyTorch, JAX, TensorFlow) with a focus on performance tuning.

- Proficiency in Python and at least one systems language (C++/CUDA/HIP), including debugging and low-level optimization.

- Experience with compiler stacks, kernel optimization, or graph-level optimization is a strong plus.

- Demonstrated technical leadership and ability to influence cross-functional teams.

ACADEMIC CREDENTIALS:

- Ph.D. in Computer Science, Computer Engineering, or a related field preferred, or equivalent industry experience with significant technical impact.

LOCATION:

- San Jose, CA

This role is not eligible for visa sponsorship.

#LI-MV1

#HYBRID

Benefits offered are described in AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's "Responsible AI Policy" is available here.

This posting is for an existing vacancy.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: 10127278
  • Position Id: b3385605a43ab7e6f21f22aaf4ff9df1