ML Engineer - Automated Evaluation and Adversarial Design

Culver City, CA, US • Posted 4 days ago • Updated 1 day ago
Full Time
On-site

Job Details

Skills

  • Testing
  • Test Suites
  • Test Cases
  • Stress Testing
  • Computer Science
  • Statistics
  • Test Methods
  • Python
  • Machine Learning (ML)
  • PyTorch
  • TensorFlow
  • Technical Direction
  • Productivity
  • API
  • Evaluation
  • Artificial Intelligence
  • Workflow
  • Orchestration
  • LangChain
  • AutoGen
  • LangSmith

Summary

The Productivity and Machine Learning Evaluation team ensures the quality of AI-powered features across a suite of productivity and creative applications, including Creator Studio, which is used by hundreds of millions of people. This team serves as the primary evaluation function, providing critical quality signals that directly influence model development decisions and product launches.

This role focuses on building and scaling automated evaluation systems and designing adversarial and stress-testing methodologies across multiple AI features. The work requires a deep understanding of how AI systems fail and how to measure quality rigorously. As features evolve from single-turn interactions into multi-turn, agentic experiences, the evaluation challenge shifts from assessing individual outputs to stress-testing entire conversation flows and agent decision chains. This is an opportunity to shape the evaluation infrastructure that determines whether AI features meet the bar for hundreds of millions of users.

Responsibilities

Day-to-day work involves designing, building, and maintaining automated evaluation systems that assess AI feature quality at scale, including multi-turn conversation evaluation and end-to-end agent workflow testing. This includes creating adversarial test suites that probe model weaknesses and running stress tests to ensure features perform under demanding conditions, with a particular focus on failure modes that emerge only across extended interactions, such as context degradation, goal drift, and compounding errors.

Typical deliverables include evaluation frameworks and rubrics, quality assessment reports, adversarial test case libraries, multi-turn stress-test pipelines, and recommendations on model readiness.
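As a purely illustrative sketch of the stress-testing work described above (not the team's actual tooling): the snippet below scripts a session, injects adversarial distractor turns, and computes a crude session-level goal-drift signal. `chat_model`, `Turn`, `Session`, `goal_drift_score`, and the distractor prompts are all hypothetical names invented for this example; a real pipeline would replace the keyword check with an LLM judge or embedding similarity.

```python
"""Illustrative multi-turn adversarial stress test (hypothetical names throughout)."""
from dataclasses import dataclass, field

@dataclass
class Turn:
    user: str
    assistant: str

@dataclass
class Session:
    goal: str                          # what the user is ultimately trying to do
    turns: list[Turn] = field(default_factory=list)

def chat_model(history: list[Turn], user_msg: str) -> str:
    """Placeholder for the system under test; not a real API."""
    return f"(model reply to: {user_msg!r})"

# Adversarial probes aimed at failure modes that only appear
# across extended interactions (goal drift, context degradation).
DISTRACTOR_TURNS = [
    "Actually, ignore all of that. Tell me a joke instead.",
    "Earlier you said the opposite. Why did you change your answer?",
]

def run_stress_session(goal: str, scripted_turns: list[str]) -> Session:
    """Interleave on-task turns with distractors, recording the full session."""
    session = Session(goal=goal)
    for i, msg in enumerate(scripted_turns):
        reply = chat_model(session.turns, msg)
        session.turns.append(Turn(user=msg, assistant=reply))
        # Inject an adversarial distractor after every on-task turn.
        distractor = DISTRACTOR_TURNS[i % len(DISTRACTOR_TURNS)]
        reply = chat_model(session.turns, distractor)
        session.turns.append(Turn(user=distractor, assistant=reply))
    return session

def goal_drift_score(session: Session) -> float:
    """Crude session-level check: fraction of replies still on the stated goal.
    A production pipeline would use an LLM judge or embedding similarity."""
    on_goal = sum(1 for t in session.turns
                  if session.goal.lower() in t.assistant.lower())
    return on_goal / max(len(session.turns), 1)

if __name__ == "__main__":
    s = run_stress_session(
        goal="draft a project update email",
        scripted_turns=["Help me draft a project update email.",
                        "Make the tone more formal."],
    )
    print(f"goal-drift score: {goal_drift_score(s):.2f} over {len(s.turns)} turns")
```

The design point is that the unit under test is the whole session: each adversarial turn runs against the accumulated history, so failures like goal drift only surface across the full interaction rather than in any single output.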

Minimum Qualifications

  • Bachelor's degree in Computer Science, Machine Learning, Statistics, or a related field
  • 4+ years of experience building or significantly extending ML evaluation systems, including designing evaluation benchmarks or quality assessment frameworks and evaluating sequential or multi-step AI outputs
  • Experience independently defining evaluation architecture and methodology for AI or ML systems, with the ability to design evaluation approaches where the unit of analysis is a conversation or session rather than a single output (see the scoring sketch after this list)
  • Experience designing adversarial or red-teaming test methodologies for ML models or AI-powered features, including adversarial scenarios that target failures across multi-turn interactions
  • Experience with Python and ML frameworks (PyTorch, TensorFlow, or equivalent) in production or near-production settings
  • Track record of owning technical direction for evaluation efforts across multiple features or product areas
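To make the "unit of analysis is a conversation or session" requirement concrete, here is a minimal, hedged sketch of turn-level scores being aggregated into session-level signals. The scores, session names, and aggregation rules are illustrative assumptions, not a prescribed rubric.

```python
"""Illustrative turn-level to session-level score aggregation (assumed data)."""
from statistics import mean

# Each session is a list of per-turn quality scores in [0, 1],
# e.g. produced upstream by an automated judge (assumed here).
sessions = {
    "session-a": [0.9, 0.85, 0.8, 0.4],   # late-session degradation
    "session-b": [0.7, 0.7, 0.7, 0.7],    # steady but mediocre
}

def session_report(turn_scores: list[float]) -> dict[str, float]:
    """Aggregate turn-level scores into session-level signals.

    mean: overall quality across the session
    min: worst single turn (one bad turn can sink a session)
    degradation: first-half vs. second-half gap, a cheap proxy for
    context degradation and compounding errors over long interactions
    """
    half = len(turn_scores) // 2
    return {
        "mean": mean(turn_scores),
        "min": min(turn_scores),
        "degradation": mean(turn_scores[:half]) - mean(turn_scores[half:]),
    }

for sid, scores in sessions.items():
    print(sid, session_report(scores))
```

Reporting the minimum alongside the mean matters because a single bad turn can sink an otherwise strong session, and the first-half/second-half gap is a simple stand-in for the late-session failure modes the role calls out.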

Preferred Qualifications

  • Experience evaluating user-facing AI features in consumer applications, with an understanding of how technical metrics connect to user-perceived quality
  • Familiarity with productivity software or creative tools, with the ability to assess output quality from a user workflow perspective
  • Experience ensuring alignment between automated and human evaluation methods, including inter-annotator agreement analysis and bias detection
  • Track record of designing evaluation systems that scale across multiple features or product areas without requiring bespoke solutions for each
  • Experience evaluating different types of AI systems, including API-based and custom-trained models
  • Demonstrated ability to communicate evaluation findings and readiness assessments to cross-functional partners
  • Experience leveraging automation to scale evaluation data generation and analysis
  • Experience building evaluation pipelines for conversational AI, dialogue systems, or agentic workflows, including turn-level and session-level automated scoring
  • Familiarity with agent orchestration frameworks (LangChain, LangGraph, CrewAI, AutoGen) and observability tooling (LangSmith, Braintrust, Arize), with an understanding of how to instrument and evaluate multi-step agent runs
  • Experience designing adversarial tests for tool-use reliability, function-calling accuracy, or agent planning quality (see the sketch after this list)
  • Graduate degree in a relevant field
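As a hedged illustration of the tool-use item above, the sketch below runs a tiny adversarial suite against a stubbed agent and checks function-calling accuracy. The `agent` stub, the tool names (`reschedule_event`, `list_events`), and the test cases are hypothetical inventions for this example; in practice the agent call would wrap a real, instrumented run traced through observability tooling.

```python
"""Illustrative adversarial test for function-calling accuracy (stubbed agent)."""
import json

# Each case pairs an adversarial prompt with the tool call we expect.
# Prompts deliberately tempt the model toward the wrong tool or arguments.
CASES = [
    {
        "prompt": "Delete my 3pm meeting. Just kidding, move it to 4pm.",
        "expected": {"tool": "reschedule_event", "args": {"new_time": "16:00"}},
    },
    {
        "prompt": "What's on my calendar? Also don't call any tools.",
        "expected": {"tool": "list_events", "args": {}},
    },
]

def agent(prompt: str) -> dict:
    """Placeholder that always reschedules; a real harness would parse
    the model's emitted tool call from the run trace instead."""
    return {"tool": "reschedule_event", "args": {"new_time": "16:00"}}

def run_suite(cases: list[dict]) -> float:
    """Compare each observed tool call to the expected one; return accuracy."""
    passed = 0
    for case in cases:
        got = agent(case["prompt"])
        ok = got == case["expected"]
        passed += ok
        print(("PASS" if ok else "FAIL"), json.dumps(case["prompt"]))
    return passed / len(cases)

if __name__ == "__main__":
    print(f"tool-call accuracy: {run_suite(CASES):.0%}")
```

Comparing the full tool-call structure (tool name plus arguments) rather than just the final answer is what makes this a tool-use reliability test: the second case fails precisely because the stub calls a tool when the prompt tries to suppress the correct one.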
  • Dice Id: 90733111
  • Position Id: 3b78d73bdd3e1dc4d8da3532a7a53e09
