Would you like to play a part in building the next generation of generative AI applications at Apple? We're looking for Machine Learning Engineers to work on ambitious projects that will shape the future of Apple, our products, and the broader world. This role focuses on assessing, quantifying, and improving the safety and inclusivity of Apple's generative-AI-powered features and products.

In this role you'll have the opportunity to tackle innovative problems in machine learning, particularly large language models for text generation, diffusion models for image generation, and mixed-model systems for multimodal applications. As a member of Apple's Responsible AI group, you will work on a wide array of new features and research in the generative AI space. Our team is currently interested in large generative models for vision and language, with particular emphasis on Responsible AI: safety, fairness, robustness, explainability, and uncertainty in models.
This role focuses on developing, carrying out, interpreting, and communicating pre- and post-ship evaluations of the safety of Apple Intelligence features. Both human grading and model-based auto-grading are thoughtfully leveraged to power these evaluations. Additionally, this role researches and develops auto-grading methodology and infrastructure to benefit ongoing and future Apple Intelligence safety evaluations.

Producing safety evaluations that uphold Apple's Responsible AI values requires thoughtful data sampling, creation, and curation for evaluation datasets; high-quality, detailed annotations and careful auto-grading to assess feature performance; and mindful analysis to understand what the evaluation means for the user experience.

This role draws heavily on applied data science, scientific investigation and interpretation, cross-functional communication and collaboration, and metrics reporting and presentation.
- MS or PhD in Computer Science, Machine Learning, Statistics, or a related field; or an equivalent qualification acquired through other avenues.
- Experience working with generative models for evaluation and/or product development, and up-to-date knowledge of common challenges and failure modes.
- Strong engineering skills and experience writing production-quality code in Python.
- Deep experience in foundation-model-based AI programming (e.g., using DSPy to optimize foundation model prompts) and a drive to innovate in this space.
- Experience working with noisy, crowd-sourced data labels and human evaluations.
- Experience working in the Responsible AI space.
- Prior scientific research and publication experience.
- Strong organizational and operational skills working with large, multi-functional, and diverse teams.
- Curiosity about fairness and bias in generative AI systems, and a strong desire to help make the technology more equitable.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
- Dice Id: 90733111
- Position Id: fd05cde6ae2198b41473d6976b95024b
- Posted 12 hours ago