About the Role
A1 is building a proactive AI system that understands context across conversations, plans actions, and carries work forward over time.
As Technical Lead, Machine Learning, you own the execution layer of A1's intelligence. You translate research direction into reliable, scalable, production-grade ML systems.
This role sits at the intersection of research, infrastructure, and product. You are responsible for making models trainable, deployable, observable, and performant under real-world constraints.
What You'll Do
- Own end-to-end ML system execution: data pipelines, training workflows, evaluation systems, inference architecture, and deployment.
- Fine-tune and adapt models using state-of-the-art methods such as LoRA, QLoRA, SFT, DPO, and distillation.
- Architect and operate scalable inference systems, balancing latency, cost, and reliability.
- Design and maintain data systems for high-quality synthetic and real-world training data.
- Implement evaluation pipelines covering performance, robustness, safety, and bias, in partnership with research leadership.
- Own production deployment, including GPU optimization, memory efficiency, latency reduction, and scaling policies.
- Collaborate closely with application engineering to integrate ML systems cleanly into backend, mobile, and desktop products.
- Make pragmatic trade-offs and ship improvements quickly, learning from real usage.
- Work under real production constraints: latency, cost, reliability, and safety.
Outcomes
- Research and models reliably translate into production-ready solutions with clear performance and quality targets.
- ML pipelines, training loops, and inference systems are stable, efficient, and maintainable.
- Production issues are detected, debugged, and resolved quickly, minimizing user impact.
- Team members are supported, aligned, and able to deliver high-impact ML work with minimal friction.
- Iterations on models and systems are measurable, safe, and improve user experience over time.
Tech Stack
- Python
- PyTorch / JAX
- GPU-based training and inference systems
Ideal Experience
- You have built or shipped real ML systems used by people, not just demos.
- You are comfortable working with large models and understanding their failure modes.
- You write strong, production-grade code and care about system correctness.
- You are self-directed, pragmatic, and take full ownership of outcomes.
- You communicate clearly and collaborate well in small, high-trust teams.
How We Work
The best products in the world today were built by small, world-class teams. We are a high-talent-density, hands-on team. We make decisions collectively and move at rapid speed, striking a balance between shipping high-quality work and learning. Joining our team requires the ability to bring structure, exercise judgment, and execute independently. Our goal is to put a truly magical product in the hands of our users.
Interview Process
If there appears to be a fit, we'll reach out to schedule three, but no more than four, interviews.
Applications are evaluated by our technical team members. Interviews will be conducted via virtual meetings and/or onsite.
We value transparency and efficiency, so expect a prompt decision. If you've demonstrated the exceptional skills and mindset we're looking for, we'll extend an offer to join us. This isn't just a job offer; it's an invitation to join a team that's bringing the practical benefits of AI to billions of people globally.