Job Description
Interns at Uber don't just observe - they build. In this PhD-level internship on the Risk Engineering team, you will tackle some of the most complex and high-stakes challenges in fraud and abuse detection, protecting the integrity of Uber's global marketplace.
You will work at the frontier of applied AI, developing and deploying advanced machine learning systems that detect anomalous behavior, prevent abuse, and safeguard millions of real-time transactions across mobility and delivery. From training foundation models to building autonomous multi-agent systems capable of reasoning and collaboration, you'll move beyond research prototypes to deliver scalable, production-ready solutions that operate at global scale.
Embedded within a high-performing engineering team, you will collaborate closely with engineers, data scientists, and product partners. Under the guidance of experienced mentors, you'll be trusted to own ambitious projects - navigating ambiguity, balancing precision and recall trade-offs, and delivering measurable impact in a fast-moving, adversarial environment.
What you'll do
- Design and develop novel machine learning algorithms to detect anomalous behavior, coordinated abuse, and emerging fraud patterns in large-scale, high-dimensional data.
- Train and adapt foundation models (including LLMs and vision models) for risk-specific use cases such as identity verification, document understanding, behavioral reasoning, and agent-based decision systems.
- Leverage techniques such as knowledge graphs, similarity search, reinforcement learning, and multi-agent architectures to build intelligent, autonomous risk detection systems.
- Navigate ambiguous and adversarial environments, making thoughtful technical trade-offs between model performance, latency, explainability, and operational impact.
- Collaborate cross-functionally with engineering, product, operations, and data science teams to translate technical innovation into measurable business outcomes.
Basic Qualifications
- Currently enrolled in a Ph.D. program in Computer Science, Machine Learning, Statistics, Artificial Intelligence, or a related quantitative field.
- Must have at least one semester or quarter of education remaining following the completion of the 12-week internship.
Preferred Qualifications
- Strong Python coding skills, with the ability to write clean, production-quality code.
- Deep expertise in one or more areas such as machine learning, anomaly detection, graph learning, reinforcement learning, large language models, computer vision, or multi-agent systems.
- Experience building or researching fraud detection, trust & safety, adversarial ML, or large-scale risk modeling systems.
- Demonstrated ability to deploy models in production environments and work with scalable systems and tools (e.g., Spark, Ray, distributed training frameworks, AWS/Google Cloud Platform).
- Experience designing systems that balance model performance with interpretability, fairness, and operational constraints.
- A track record of impactful research contributions or substantial technical projects in your field.
- A resilient mindset with the curiosity and persistence required to stay ahead of evolving adversarial threats.
For New York, NY-based roles: The base hourly rate amount for this role is USD$67.00 per hour.
For Sunnyvale, CA-based roles: The base hourly rate amount for this role is USD$67.00 per hour.
For all US locations, you will also be eligible for various benefits.