Job Details
Position: AI/ML Engineer
Location: Stanford University (Hybrid)
Position Overview
The AI/ML Engineer is a key technical contributor driving CGOE's AI transformation initiatives. This role focuses on building and deploying intelligent, cloud-native applications, ranging from GenAI-powered systems and retrieval-augmented assistants to data-driven automation workflows.
Working at the intersection of machine learning, cloud engineering, and educational innovation, the engineer translates complex needs into scalable, secure, and maintainable AWS-native AI systems that enhance teaching, learning, and operations across CGOE's global online programs.
Key Responsibilities
AI Application & Systems Development
- Own the design and end-to-end implementation of AI systems combining GenAI, narrow AI, and traditional ML models (e.g., regression, classification).
- Implement retrieval-augmented generation (RAG), multi-agent, and protocol-based AI systems (e.g., the Model Context Protocol, MCP).
- Integrate AI capabilities into production-grade applications using serverless and containerized architectures (AWS Lambda, Fargate, ECS).
- Fine-tune and optimize existing models for specific educational and administrative use cases, focusing on performance, latency, and reliability.
- Build and maintain data pipelines for model training, evaluation, and monitoring using AWS services such as Glue, S3, Step Functions, and Kinesis.
Cloud & Infrastructure Engineering
- Architect and manage scalable AI workloads on AWS, leveraging services such as SageMaker, Bedrock, API Gateway, EventBridge, and IAM-based security.
- Build microservices and APIs to integrate AI models into applications and backend systems.
- Develop automated CI/CD pipelines ensuring continuous delivery, observability, and monitoring of deployed workloads.
- Apply containerization best practices using Docker and manage workloads through Amazon ECS on AWS Fargate for scalable, serverless orchestration and reproducibility.
- Ensure compliance with Stanford and regulatory standards (FERPA, GDPR) for secure data handling and governance.
Collaboration, Culture & Continuous Improvement
- Collaborate closely with cross-functional teams to deliver integrated and impactful AI solutions.
- Use Git-based version control and code review best practices as part of a collaborative, agile workflow.
- Operate within an agile, iterative development culture, participating in sprints, retrospectives, and planning sessions.
- Continuously learn and adapt to emerging AI frameworks, AWS tools, and cloud technologies.
- Contribute to documentation, internal knowledge sharing, and mentoring as the team scales.
Experience
- 3+ years of experience developing and deploying AI/ML-driven applications in production.
- 2+ years of hands-on experience with AWS-based architectures (serverless, microservices, CI/CD, IAM).
- Proven ability to design, automate, and maintain data pipelines for model inference, evaluation, and monitoring.
- Experience with both GenAI and traditional ML techniques in applied, production settings.
Education & Certifications
Bachelor's degree in Computer Science, AI/ML, Data Engineering, or a related field (Master's preferred).
AWS certification preferred (Solutions Architect, Developer, or equivalent); Professional-level certification a plus.