Apple Services Engineering (ASE) powers AI and LLM features across App Store, Music, Video, and more. As these systems increasingly rely on LLM judges and automated evaluators to score model performance at scale, the trustworthiness of those evaluation signals becomes mission-critical. We believe that to build exceptional LLMs, you need exceptional mechanisms to validate the signals used to train and evaluate them.
As a Principal Applied Scientist on the Human Centered AI team, you will be the technical engine behind our Data Quality Validation framework. This is a high-impact individual contributor role for a scientist who wants to architect and build, not just advise. You will own the data science methodology underpinning our data quality validation models, design the statistical frameworks that govern judge reliability, and work hands-on to close the loop between automated evaluation and human ground truth.

You will be the person who answers the hardest question in our stack: "Can we trust the evaluators that are evaluating our models?"
- Master's degree in Statistics, Data Science, Machine Learning, Computer Science, or a related quantitative field
- 8+ years of hands-on experience in applied data science, ML research, or evaluation science
- Deep expertise in uncertainty quantification and model calibration, including entropy modeling and Bayesian approaches
- Demonstrated experience building disagreement detection or anomaly detection models in production ML systems
- Strong command of statistical measurement frameworks: inter-rater reliability, correlation analysis, and statistical process control
- Proven experience designing or contributing to Human-in-the-Loop (HITL) or active learning pipelines
- Proficiency in Python for statistical modeling, ML experimentation, and data pipeline development
- Exceptional ability to translate rigorous statistical methodology into clear, actionable guidance for engineering and product partners
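As a minimal illustration of the inter-rater reliability work named above, the sketch below computes Cohen's kappa, a standard chance-corrected agreement statistic, between hypothetical LLM-judge verdicts and human ground-truth labels. The data and labels are invented for illustration and are not from this posting.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired, non-empty labels"
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters were independent, from their label marginals
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Toy example: LLM-judge verdicts vs. human ground-truth labels
judge = ["pass", "pass", "fail", "pass", "fail", "pass"]
human = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(judge, human), 3))  # → 0.667
```

A kappa near 1 indicates the judge tracks human labels well beyond chance; values near 0 suggest the evaluation signal is no better than guessing and should not gate model decisions.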
- PhD in Statistics, Computer Science, Machine Learning, or a related field
- Experience specifically in LLM evaluation science, including autograder validation, judge-as-a-model frameworks, or RLHF data quality
- Hands-on experience with large-scale reasoning models (e.g., 70B+ parameter models) used in chain-of-thought evaluation or meta-reasoning contexts
- Experience defining governance gates or certification pipelines for AI systems in a CI/CD context
- Familiarity with out-of-distribution detection techniques for identifying input drift in live production systems
- Track record of publishing or presenting evaluation methodology work internally or externally