Staff Machine Learning Engineer - Applied AI

  • San Francisco, CA

Overview

On Site
Full Time

Skills

Product Development
Innovation
Optimization
Training
Management
Analytics
Performance Metrics
Product Design
Generative Artificial Intelligence (AI)
Mentorship
Software Engineering
Shipping
Large Language Models (LLMs)
Prompt Engineering
Python
Java
Extract, Transform, Load (ETL)
Apache Spark
Apache Flink
Product Optimization
Cloud Computing
Amazon Web Services
Google Cloud Platform (GCP)
Machine Learning (ML)
PyTorch
TensorFlow
JAX
Communication
Computer Science
Data Science
Scalability
Evaluation
Orchestration
LangChain
Semantics
Artificial Intelligence
Workflow
Collaboration

Job Details

About the role:

Applied AI at Uber builds intelligent systems that power next-generation product experiences for riders, drivers, merchants, and couriers. As a Staff AI Engineer, you will work end-to-end across product development - from data pipelines and backend integration to real-world AI deployment - building scalable, intelligent, and user-centric experiences.

You will leverage large language models (LLMs) and multimodal AI systems to create production-ready applications, integrating APIs from OpenAI, Anthropic Claude, Google Gemini, and other emerging models. You'll also pioneer LLM-based evaluation methods, including LLM-as-a-judge frameworks that automate assessment of model outputs and enhance product quality.
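To make the evaluation idea above concrete, here is a minimal LLM-as-a-judge sketch in Python. It is an illustration only, not Uber's internal tooling: it assumes the OpenAI Python SDK with an OPENAI_API_KEY in the environment, and the grader model, rubric, and 1-5 scale are arbitrary choices for the example.

```python
# Minimal LLM-as-a-judge sketch: a grader model scores a candidate answer
# against a rubric and returns a structured verdict. Assumes the OpenAI
# Python SDK and OPENAI_API_KEY; model name and rubric are illustrative.
import json
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "Score the ASSISTANT ANSWER from 1 (poor) to 5 (excellent) on factual "
    "accuracy and helpfulness for the USER QUESTION. Respond with JSON: "
    '{"score": <int>, "rationale": "<one sentence>"}'
)

def judge(question: str, answer: str) -> dict:
    """Ask a grader model to evaluate one (question, answer) pair."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,  # keep grading as deterministic as possible
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user",
             "content": f"USER QUESTION:\n{question}\n\nASSISTANT ANSWER:\n{answer}"},
        ],
    )
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    verdict = judge(
        "How do I change my pickup location?",
        "Tap the pickup pin on the map before requesting the ride and drag it.",
    )
    print(verdict["score"], verdict["rationale"])
```

In practice, a harness along these lines would run over a labeled benchmark set, with the judge's scores tracked across model and prompt versions to benchmark quality and catch regressions.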

This is an opportunity for a technical leader who thrives at the intersection of AI, engineering, and product, driving innovation and measurable impact across Uber's ecosystem.

What You'll Do:

- Build end-to-end AI products - from prototype to scalable production deployment - integrating LLMs and multimodal AI into Uber's consumer, earner, and enterprise experiences.
- Implement automated evaluation systems that use LLM-as-a-judge techniques to benchmark model quality, ensure consistency, and accelerate experimentation.
- Design and implement services and APIs that connect to leading AI models (e.g., OpenAI, Claude, Gemini, Mistral), ensuring reliability, low latency, and cost efficiency.
- Develop pipelines for training, fine-tuning, and evaluating AI models; manage data ingestion, cleaning, labeling, and experimentation workflows.
- Perform data science and analytics work to understand performance metrics, user behavior, and model outcomes, ensuring responsible and measurable AI impact.
- Collaborate across disciplines (engineering, product, design, and data science) to define user problems and translate them into AI-powered solutions.
- Champion best practices in AI model evaluation, safety, observability, and responsible use of generative AI.
- Mentor engineers and data scientists, fostering a culture of technical excellence and cross-functional learning.

Basic Qualifications:

- 10+ years of experience in software engineering, data science, or machine learning, including a track record of shipping production AI systems.
- Deep understanding of large language models, including fine-tuning, prompt engineering, embeddings, and retrieval-augmented generation (RAG) - a minimal RAG sketch follows this list.
- Strong backend engineering skills in Python, Go, or Java, with experience integrating third-party APIs.
- Hands-on experience building data pipelines and ETL systems (e.g., Spark, Airflow, Flink, or similar).
- Ability to analyze data, run experiments, and derive insights for model and product improvement.
- Familiarity with cloud environments (AWS, Google Cloud Platform, or similar) and ML frameworks (PyTorch, TensorFlow, or JAX).
- Excellent communication and collaboration skills across technical and non-technical teams.
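As referenced in the RAG bullet above, the sketch below shows the retrieval-augmented generation loop at its simplest: embed a small corpus, retrieve the closest documents by cosine similarity, and ground the model's answer in them. It assumes the OpenAI Python SDK and an OPENAI_API_KEY; the corpus, model names, and helper functions are hypothetical, chosen only for illustration.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the documents and model names below are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

DOCS = [
    "Drivers can check earnings in the Uber app under the Earnings tab.",
    "Riders can schedule trips up to 30 days in advance.",
    "Merchants manage menus through the Uber Eats Manager dashboard.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts and return an (n, d) array of unit vectors."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    vecs = np.array([d.embedding for d in resp.data])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def answer(question: str, k: int = 2) -> str:
    doc_vecs = embed(DOCS)                        # in production: precomputed and indexed
    q_vec = embed([question])[0]
    top = np.argsort(doc_vecs @ q_vec)[::-1][:k]  # cosine similarity on unit vectors
    context = "\n".join(DOCS[i] for i in top)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(answer("Where do drivers see their earnings?"))
```

A production system would swap the in-memory corpus for a vector index and add caching, batching, and retries around the API calls, which is where the reliability, latency, and cost concerns listed above come in.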

Preferred Qualifications:

- Master's or Ph.D. in Computer Science, Data Science, or related field.
- Experience integrating foundation model APIs (OpenAI, Claude, Gemini, Cohere, etc.) into production-grade systems.
- Proven ability to architect AI-powered backend services, optimizing for scalability, latency, and cost efficiency.
- Background in LLM evaluation systems or AI agent orchestration frameworks (LangChain, Semantic Kernel, etc.).
- Demonstrated success leading cross-functional projects that deliver measurable user or business impact.
- Familiarity with multimodal AI (text, speech, and image models) and data-centric development workflows.

Uber's mission is to reimagine the way the world moves for the better. Here, bold ideas create real-world impact, challenges drive growth, and speed fuels progress. What moves us, moves the world - let's move it forward, together.

Uber is proud to be an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires accommodation, please let us know by completing this form.

Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office. For certain roles, such as those based at Greenlight Hubs, employees are expected to be in-office 100% of the time. Please speak with your recruiter to better understand in-office expectations for this role.