Overview
On Site
USD 181,100.00 - 318,400.00 per year
Full Time
Skills
Software Architecture
Distribution
Cloud Computing
Algorithms
Real-time
IT Management
Artificial Intelligence
IaaS
Large Language Models (LLMs)
Server Farms
Roadmaps
FOCUS
Load Balancing
Collaboration
Computer Hardware
Performance Analysis
Optimization
Network
GPU
CUDA
High Performance Computing
C
C++
Python
Parallel Computing
Communication
InfiniBand
Remote Direct Memory Access
PyTorch
JAX
TensorFlow
Training
Machine Learning (ML)
Payments
Job Details
The Apple Silicon GPU SW architecture team is seeking a senior/principal engineer to lead server-side ML acceleration and multi-node distribution initiatives. You will help define and shape our future GPU compute infrastructure on Private Cloud Compute that enables Apple Intelligence.
Description
In this role, you'll be at the forefront of architecting and building our next-generation distributed ML infrastructure, tackling the complex challenge of orchestrating massive network models across server clusters to power Apple Intelligence at unprecedented scale. You will design sophisticated parallelization strategies that split models across many GPUs, optimizing every layer of the stack, from low-level memory access patterns to high-level distributed algorithms, to achieve maximum hardware utilization while minimizing latency for real-time user experiences. You'll work at the intersection of cutting-edge ML systems and hardware acceleration, collaborating directly with silicon architects to influence future GPU designs based on your deep understanding of inference workload characteristics, while simultaneously building the production systems that will serve billions of requests daily. This is a hands-on technical leadership position: you'll not only architect these systems but also dive deep into performance profiling, implement novel optimization techniques, and solve unprecedented scaling challenges as you help define the future of AI experiences delivered through Apple's secure cloud infrastructure.
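To make "parallelization strategies that split models across many GPUs" concrete, here is a plain-Python sketch of column-wise tensor parallelism for a single linear layer. The function names and shard layout are illustrative only (not any Apple or framework API); lists stand in for device-resident tensors, and concatenation stands in for the all-gather a real implementation would perform:

```python
def matmul(a, b):
    """Plain-Python matrix multiply: a is m x k, b is k x n."""
    return [[sum(a[i][p] * b[p][j] for p in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def split_columns(w, num_shards):
    """Shard the weight matrix w column-wise, one shard per simulated device."""
    n = len(w[0])
    bounds = [n * s // num_shards for s in range(num_shards + 1)]
    return [[row[bounds[s]:bounds[s + 1]] for row in w]
            for s in range(num_shards)]

def column_parallel_matmul(x, w, num_shards):
    """Each simulated device multiplies x by its own column shard of w;
    concatenating the partial outputs plays the role of the all-gather."""
    partials = [matmul(x, shard) for shard in split_columns(w, num_shards)]
    return [sum((p[i] for p in partials), []) for i in range(len(x))]
```

Because each shard's matmul is independent, the devices need no communication until the final gather, which is what makes this layout attractive for wide layers.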
Responsibilities
- Design and implement tensor/data/expert parallelism strategies for large language model inference across distributed server cluster environments
- Drive hardware and software roadmap decisions for ML acceleration
- Design architectures that achieve peak compute utilization and optimal memory throughput
- Develop and optimize distributed inference systems with focus on latency, throughput, and resource efficiency across multiple nodes
- Architect scalable ML serving infrastructure supporting dynamic model sharding, load balancing, and fault tolerance
- Collaborate with hardware teams on next-generation accelerator requirements and software teams on framework integration
- Lead performance analysis and optimization of ML workloads, identifying bottlenecks in compute, memory, and network subsystems
- Drive adoption of advanced parallelization techniques including pipeline parallelism, expert parallelism, and various other emerging approaches
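One way to see why the pipeline-parallelism bullet matters: the pipeline "bubble" shrinks as the microbatch count grows relative to the stage count. A toy model of a GPipe-style forward schedule (the one-tick-per-stage assumption and function names are mine, not from the posting):

```python
def gpipe_schedule(num_stages, num_microbatches):
    """GPipe-style forward schedule: stage s cannot start microbatch m
    before tick s + m (it needs the previous stage's output and must
    finish its own earlier microbatches first).
    Returns a dict mapping (stage, microbatch) -> start tick."""
    return {(s, m): s + m
            for s in range(num_stages)
            for m in range(num_microbatches)}

def pipeline_utilization(num_stages, num_microbatches):
    """Fraction of stage-ticks doing useful work. The pipeline runs for
    num_stages + num_microbatches - 1 ticks, but each stage is busy for
    only num_microbatches of them; the rest is the pipeline 'bubble'."""
    total_ticks = num_stages + num_microbatches - 1
    return num_microbatches / total_ticks
```

With 4 stages and 8 microbatches utilization is 8/11 (about 73%); doubling to 16 microbatches raises it to 16/19 (about 84%), which is why serving systems batch aggressively before pushing work through a pipeline.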
Minimum Qualifications
- Strong knowledge of GPU programming (CUDA, ROCm) and high-performance computing
- Excellent systems programming skills in C/C++; Python is a plus
- Deep understanding of distributed systems and parallel computing architectures
- Experience with inter-node communication technologies (InfiniBand, RDMA, NCCL) in the context of ML training/inference
- Understanding of how tensor frameworks (PyTorch, JAX, TensorFlow) are used in distributed training/inference
- Technical BS/MS degree
Preferred Qualifications
- Familiarity with the model development lifecycle, from trained model to large-scale production inference deployment
- Proven track record in ML infrastructure at scale
Pay & Benefits At Apple, base pay is one part of our total compensation package and is determined within a range. This provides the opportunity to progress as you grow and develop within a role. The base pay range for this role is between $181,100 and $318,400, and your base pay will depend on your skills, qualifications, experience, and location.
Apple employees also have the opportunity to become an Apple shareholder through participation in Apple's discretionary employee stock programs. Apple employees are eligible for discretionary restricted stock unit awards, and can purchase Apple stock at a discount if voluntarily participating in Apple's Employee Stock Purchase Plan. You'll also receive benefits including comprehensive medical and dental coverage, retirement benefits, a range of discounted products and free services, and, for formal education related to advancing your career at Apple, reimbursement for certain educational expenses, including tuition. Additionally, this role might be eligible for discretionary bonuses or commission payments as well as relocation. Learn more about Apple Benefits.
Note: Apple benefit, compensation and employee stock programs are subject to eligibility requirements and other terms of the applicable plan or program.
Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.