ML Engineer - Evaluation Analysis, Metric and Data Strategy

Culver City, CA, US • Posted 1 day ago • Updated 10 hours ago
Full Time
On-site

Job Details

Skills

  • Leadership
  • Dashboard
  • Applied Mathematics
  • Computer Science
  • Science
  • Data Science
  • Research
  • FOCUS
  • Statistics
  • Testing
  • Estimating
  • Design Of Experiments
  • Python
  • Pandas
  • scikit-learn
  • R
  • Data Analysis
  • Visualization
  • Productivity
  • Data Collection
  • Analytical Skill
  • Artificial Intelligence
  • Auditing
  • Orchestration
  • LangChain
  • AutoGen
  • MCP (Model Context Protocol)
  • Machine Learning (ML)
  • Management
  • Evaluation

Summary

The Productivity and Machine Learning Evaluation team ensures the quality of AI-powered features across a suite of productivity and creative applications, including Creator Studio, which is used by hundreds of millions of people. This team serves as the primary evaluation function, and its analysis directly informs decisions about model development, feature launches, and product direction.

This role is the analytical core of the team, responsible for making sense of evaluation signals and real-world user behavior. The work involves designing feature-level quality metrics, collaborating with partner teams on data collection strategies, and translating evaluation data into concise, actionable insights that drive decisions. This is an opportunity to define how AI feature quality is measured and to directly shape what gets shipped. As AI features evolve into multi-turn, agentic experiences, this role will define what "quality" means when the unit of evaluation is a conversation, not a single response.

Day-to-day work involves analyzing evaluation results and identifying trends, regressions, and segment-level patterns across multiple AI features. This includes collaborating with partner teams on data collection strategies, ensuring evaluation data is representative of real-world usage, and designing the metrics framework that leadership uses to make decisions on AI features.

Typical deliverables include: feature-level quality metrics and dashboards, evaluation analysis reports, data collection requirements, dataset representativeness audits, multi-turn evaluation frameworks and session-level scoring rubrics, and concise metric summaries for decision-makers.
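As a concrete illustration of the session-level scoring deliverable described above, here is a minimal sketch of rolling turn-level rubric scores up to session-level metrics with pandas. The column names, 1-5 score scale, and aggregation choices are hypothetical assumptions, not a prescribed rubric.

```python
import pandas as pd

# Hypothetical turn-level evaluation scores; column names are illustrative.
turns = pd.DataFrame({
    "session_id":  ["s1", "s1", "s1", "s2", "s2"],
    "turn":        [1, 2, 3, 1, 2],
    "helpfulness": [4, 3, 5, 2, 4],   # rubric scores on an assumed 1-5 scale
    "grounding":   [5, 4, 4, 3, 3],
})

# Roll turn-level rubric scores up to the session level: here the session
# score is the mean of its turns, with a flag if any turn fell below a
# quality floor (both choices are assumptions made for this sketch).
sessions = turns.groupby("session_id").agg(
    n_turns=("turn", "size"),
    helpfulness=("helpfulness", "mean"),
    grounding=("grounding", "mean"),
    any_low_turn=("helpfulness", lambda s: bool((s < 3).any())),
)
print(sessions)
```

Treating the session, rather than the turn, as the unit of analysis is exactly the shift the role description emphasizes for multi-turn, agentic experiences.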

Required Qualifications

  • Bachelor's degree in Statistics, Data Science, Applied Mathematics, Computer Science, or a related quantitative field
  • 5+ years of experience in applied science, data science, or evaluation research, with a focus on defining and operationalizing quality metrics
  • Experience with statistical analysis methods, including significance testing, sampling design, effect size estimation, and experimental design (a sketch of this kind of analysis follows this list)
  • Experience working with production user data and understanding its biases and limitations compared to controlled evaluation data, including familiarity with sequential interaction data where context and turn order affect quality assessment
  • Ability to design evaluation approaches where the unit of analysis is a session or conversation rather than a single model output
  • Track record of independently designing metrics frameworks and driving data-informed decisions across cross-functional teams
  • Proficiency in Python (pandas, scipy, scikit-learn) or R for data analysis and visualization
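To illustrate the statistical toolkit named in the list above, here is a minimal sketch of a significance test plus an effect-size estimate for comparing quality scores between two model variants, using scipy. The scores are synthetic and the variant names are assumptions for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-example quality scores for two model variants.
baseline  = rng.normal(3.6, 0.8, size=400)
candidate = rng.normal(3.8, 0.8, size=400)

# Welch's t-test: does the candidate's mean quality score differ?
t_stat, p_value = stats.ttest_ind(candidate, baseline, equal_var=False)

# Cohen's d as a simple effect-size estimate (pooled standard deviation).
pooled_sd = np.sqrt((baseline.var(ddof=1) + candidate.var(ddof=1)) / 2)
cohens_d = (candidate.mean() - baseline.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```

Reporting the effect size alongside the p-value matters in this kind of work: with large evaluation sets, tiny quality differences can be statistically significant yet too small to justify a launch decision.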

Preferred Qualifications

  • Experience designing evaluation or quality metrics for AI-powered or ML-driven features in consumer-facing products
  • Familiarity with productivity software or creative applications, with an ability to distinguish between technically correct and genuinely useful AI outputs
  • Experience partnering with engineering or data teams to define data collection requirements and schemas
  • Track record of translating complex analytical findings into concise recommendations for non-technical decision-makers
  • Experience evaluating tool-use accuracy, retrieval quality, or function-calling reliability within AI systems
  • Experience with evaluation methodology, including inter-annotator agreement, evaluation bias detection, and dataset representativeness auditing (see the sketch after this list)
  • Familiarity with agentic orchestration frameworks (LangChain, LangGraph, CrewAI, AutoGen) and emerging agent interoperability protocols (A2A, MCP), with an understanding of how architectural choices in agent design affect evaluability
  • Understanding of ML model development processes, with the ability to specify what evaluation signals are useful for model improvement
  • Experience managing evaluation across multiple features or product areas simultaneously, with systematic rather than ad-hoc approaches
  • Graduate degree in a relevant quantitative field
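As a small illustration of the inter-annotator agreement methodology mentioned in the list above, here is a sketch computing Cohen's kappa with scikit-learn. The annotator labels are fabricated for the example.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two annotators rating the same 10 AI outputs
# as "good" / "bad"; the data here is made up for illustration.
annotator_a = ["good", "good", "bad", "good", "bad",
               "good", "bad", "bad", "good", "good"]
annotator_b = ["good", "bad", "bad", "good", "bad",
               "good", "good", "bad", "good", "good"]

# Cohen's kappa corrects raw agreement for chance agreement;
# values near 0 suggest the rubric or annotation guidelines need tightening.
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")
```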
  • Dice Id: 90733111
  • Position Id: 54ca211f01aa413d10fba111b3be3763