Designed and built enterprise-grade tooling and self-service capabilities to accelerate the deployment of Generative AI solutions across the organization. Developed reusable platform components and a scalable architecture to support AI/LLM applications and improve developer experience.
Key Responsibilities
- Built scalable, secure and reusable platform components for deploying AI/GenAI applications using Kubernetes/OpenShift and GitOps.
- Designed and implemented containerized workloads packaged with Docker, templated with Helm and Kustomize, and distributed through JFrog Artifactory.
- Developed RESTful services and backend components in Python (FastAPI/Flask) for large-scale enterprise AI solutions (see the FastAPI sketch below).
- Implemented AI platform integrations, including vector databases, embedding pipelines, and LLM models and frameworks (GPT, Llama, Hugging Face, LangChain); see the retrieval sketch below.
- Led design decisions for authentication (OAuth2), service-to-service communication, deployment strategies (blue/green), monitoring, and automation (see the token-validation sketch below).
- Implemented observability using OpenTelemetry, Grafana, Loki, and Prometheus (see the tracing sketch below).
- Collaborated with cross-functional teams to translate requirements into architecture and technical specifications.
- Contributed to best practices for GenAI, agentic orchestration, and AI agent frameworks.
- Supported CI/CD pipelines (Jenkins) and DevOps processes to ensure the reliability, scalability, and security of AI platforms.
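Illustrative Snippets
A minimal sketch of the kind of FastAPI service referenced above; the route, request/response models, and echo logic are hypothetical placeholders rather than the production API.
```python
# Illustrative FastAPI service sketch; route and model names are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="GenAI Platform API")

class CompletionRequest(BaseModel):
    prompt: str
    max_tokens: int = 256

class CompletionResponse(BaseModel):
    text: str

@app.post("/v1/completions", response_model=CompletionResponse)
async def create_completion(req: CompletionRequest) -> CompletionResponse:
    # Placeholder: a real deployment would call the configured LLM backend here.
    return CompletionResponse(text=f"echo: {req.prompt[:req.max_tokens]}")

@app.get("/healthz")
async def healthz() -> dict:
    # Liveness probe endpoint used by Kubernetes/OpenShift.
    return {"status": "ok"}
```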
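A sketch of a vector-search integration under simplifying assumptions: the embed() function and the in-memory index are stand-ins for a real embedding model (e.g., a Hugging Face model) and for the production vector database.
```python
# Illustrative vector-search sketch; embed() and the index are hypothetical stand-ins.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical embedding: hash bytes into a fixed-size, normalized vector.
    vec = np.zeros(64)
    for i, ch in enumerate(text.encode("utf-8")):
        vec[i % 64] += ch
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class InMemoryVectorIndex:
    """Minimal cosine-similarity index; a real deployment would use a vector DB."""
    def __init__(self) -> None:
        self._texts: list[str] = []
        self._vectors: list[np.ndarray] = []

    def add(self, text: str) -> None:
        self._texts.append(text)
        self._vectors.append(embed(text))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        scores = [float(v @ q) for v in self._vectors]
        top = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
        return [self._texts[i] for i in top]

index = InMemoryVectorIndex()
for doc in ["deployment runbook", "GitOps workflow guide", "LLM prompt catalog"]:
    index.add(doc)
print(index.search("how do we deploy?", k=2))
```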
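A sketch of OAuth2 bearer-token protection on a FastAPI route; the token check is a placeholder for real validation against the identity provider, and the endpoint name is illustrative.
```python
# Illustrative token-validation sketch; the hard-coded token is a placeholder.
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import OAuth2PasswordBearer

app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

def get_current_subject(token: str = Depends(oauth2_scheme)) -> str:
    # Placeholder validation: a real deployment would verify the token with the IdP.
    if token != "valid-example-token":
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid token"
        )
    return "service-account"

@app.get("/v1/protected")
async def protected(subject: str = Depends(get_current_subject)) -> dict:
    # Returns the authenticated subject resolved from the bearer token.
    return {"subject": subject}
```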
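A sketch of OpenTelemetry tracing around an inference path; the console exporter stands in for the OTLP exporter that would feed the Grafana/Loki/Prometheus stack, and the span and attribute names are illustrative.
```python
# Illustrative OpenTelemetry tracing setup; exporter and span names are examples.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# In production the ConsoleSpanExporter would be replaced by an OTLP exporter.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("genai.platform")  # hypothetical instrumentation name

def handle_inference(prompt: str) -> str:
    with tracer.start_as_current_span("inference") as span:
        span.set_attribute("prompt.length", len(prompt))
        # Placeholder for the actual model call.
        return prompt.upper()

print(handle_inference("trace me"))
```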
Environment / Tools
Python, FastAPI, Flask, Kubernetes/OpenShift, Docker, Helm, Kustomize, JFrog Artifactory, GitOps, Jenkins, Kafka, Redis, SQL/NoSQL, OAuth2, OpenTelemetry, Grafana, Loki, Prometheus, TensorFlow/PyTorch, LangChain, Hugging Face, GPT/Llama.