Platform ML Engineering Manager, Inference

  • OpenAI
  • San Francisco, CA

Overview

On Site
USD 440,000.00 - 530,000.00 per year
Full Time

Skills

Engineering management
Artificial intelligence
Machine Learning (ML)
Training
Interfaces
Collaboration
Leadership
Research
Recruiting
Software deployment

Job Details

About the Team

The Platform ML team builds the ML side of our state-of-the-art internal training framework used to train our cutting-edge models. We work on distributed model execution as well as the interfaces and implementation for model code, training, and inference.

Our priorities are to maximize training throughput (how quickly we can train a new model) and researcher throughput (how quickly we can develop new models) with the goal of accelerating progress towards AGI. We frequently collaborate with other teams to speed up the development of new capabilities.

About the Role

We are looking for an experienced engineering manager to help lead critical work on our shared internal inference stack and grow the team. Our inference stack is primarily built by the Applied AI engineering team, and we will improve and extend it for research use cases.

In this role, you will:
  • Achieve state-of-the-art throughput for our most important research models.
  • Reduce the time it takes to get efficient inference for new model architectures.
  • Collaborate closely with Applied AI engineering to maximize the benefits of our shared internal inference stack.
  • Hire world-class AI systems engineers in one of the most competitive hiring markets.
  • Coordinate the inference needs of OpenAI's research teams.
  • Create a diverse, equitable, and inclusive culture that makes everyone feel welcome while enabling radical candor and the challenging of groupthink.

You might thrive in this role if you:
  • Have 3+ years of experience in engineering management and 7+ years as an IC working with high-scale distributed systems.
  • Have experience with ML systems, particularly high-scale distributed training or inference for modern LLMs.
  • Have familiarity with the latest AI research and working knowledge of how these systems are efficiently implemented.
  • Care deeply about diversity, equity, and inclusion, and have a track record of building inclusive teams.


About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or any other legally protected status.

For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Annual Salary Range

$440K - $530K USD