Principal Software Engineer - PyTorch Training Frameworks

San Jose, CA, US • Posted 28 days ago • Updated 9 hours ago
Full Time
On-site
USD 198,100.00 per year

Job Details

Skills

  • Data Centers
  • Embedded Systems
  • Collaboration
  • Innovation
  • Management
  • Software Development
  • IT Management
  • Open Source
  • Scalability
  • Communication
  • Workflow
  • Regression Analysis
  • Root Cause Analysis
  • Data Loading
  • Training
  • Debugging
  • Performance Engineering
  • Analysis Of Algorithms
  • Optimization
  • Python
  • C
  • C++
  • PyTorch
  • CUDA
  • Computer Hardware
  • Linux
  • Continuous Integration
  • Technical Communication
  • Team Leadership
  • Mentorship
  • Decision-making
  • Computer Science
  • Computer Engineering
  • Electrical Engineering
  • Machine Vision
  • Military
  • Law
  • Recruiting
  • Artificial Intelligence

Summary

WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming, and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

PRINCIPAL SOFTWARE DEVELOPMENT ENGINEER - PYTORCH TRAINING FRAMEWORKS

THE ROLE:

AMD is looking for a Principal-level PyTorch training framework expert to help drive performance, scalability, and correctness of large-scale AI training on AMD Instinct accelerators. You will work at the intersection of PyTorch internals, distributed training, and hardware-aware optimization, partnering closely with compiler, kernel, driver, and architecture teams to deliver industry-leading training performance and developer experience.

THE PERSON:

The ideal candidate is deeply hands-on with PyTorch training and thrives on solving complex systems problems (performance, scaling, memory efficiency, distributed communication). You bring strong technical leadership, can influence architecture across teams, and are comfortable turning ambiguity into crisp execution. You communicate clearly with both engineers and stakeholders and can represent AMD credibly in upstream/open-source discussions.

KEY RESPONSIBILITIES:
  • Act as a technical authority for PyTorch training at AMD, setting direction for performance, scalability, and reliability
  • Drive optimization of key PyTorch training workloads (LLMs/foundation models) across single-node and multi-node systems
  • Improve and debug training performance in areas such as DDP/FSDP, gradient checkpointing, mixed precision, memory planning, and communication/computation overlap
  • Partner with ROCm compiler/runtime, kernel, and driver teams to resolve performance bottlenecks and correctness issues across the full stack
  • Contribute to and influence upstream PyTorch (design discussions, code contributions, performance fixes, CI/debug)
  • Develop and maintain representative training benchmarks, profiling workflows, and performance regression detection for key models
  • Lead deep-dive investigations of performance regressions and hard correctness issues; drive cross-team resolution to closure
  • Mentor engineers and raise the bar on framework-quality code, performance engineering practices, and technical rigor
  • Engage with strategic customers/partners on training enablement, root-cause analysis, and best practices for AMD platforms

PREFERRED EXPERIENCE:
  • Deep experience with PyTorch internals and training systems (Autograd, optimizers, data loading, compilation paths, runtime behavior)
  • Strong distributed training expertise: DDP, FSDP, tensor/pipeline parallel concepts, collectives (NCCL/RCCL), multi-node debugging
  • Proven track record in performance engineering (profiling, tracing, kernel/runtime analysis, memory optimization, scaling studies)
  • Strong programming skills in Python and C/C++ (ability to land clean, maintainable changes in large codebases)
  • Familiarity with PyTorch ecosystem components such as TorchInductor / torch.compile, Triton, CUDA/HIP-style programming models, and performance tooling
  • Experience working across OS/hardware boundaries in Linux-based environments (containers, CI, drivers/runtimes are a plus)
  • Clear technical communication: design docs, code reviews, stakeholder updates, and cross-team coordination
  • Demonstrated ability to lead through influence (principal-level impact, mentoring, and architectural decision-making)

ACADEMIC CREDENTIALS:
  • Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent

LOCATION:

San Jose, Seattle, or Austin are the preferred US locations (hybrid); other US locations near AMD offices may be considered.

#LI-MV1

#HYBRID

Benefits offered are described in AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's "Responsible AI Policy" is available here.

This posting is for an existing vacancy.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: 10127278
  • Position Id: 6a04d75d9c82a9b9ea984891dab125cf
