Job Title: AI Solution Lead Engineer – Generative AI & LLM Applications
Type: Open to both Full-Time and Contract
Location: Remote
Role Overview
We are seeking an AI Solution Lead Engineer – Generative AI & LLM Applications to design, architect, and build production-grade GenAI solutions for enterprise clients.
This role combines hands-on engineering, solution architecture, and technical leadership, with responsibility for defining best practices, building reusable accelerators, and guiding delivery teams across complex GenAI initiatives.
The first flagship product will focus on Conversational Analytics / GenBI within the Snowflake ecosystem, leveraging managed AI services and native LLM capabilities to enable natural language analytics at scale.
Key Responsibilities
· Lead architecture and development of the initial Conversational Analytics / GenBI product on Snowflake, leveraging Snowflake Cortex AI, semantic models, and managed AI services.
· Design end-to-end GenAI architectures including RAG, Agentic RAG, GraphRAG, Agents, and Multi-Agent Systems for enterprise use cases.
· Define reference architectures, reusable accelerators, and solution blueprints to standardize GenAI delivery.
· Build production-grade Python applications with strong emphasis on code quality, testing, and maintainability.
· Implement microservices architectures using modern design patterns and frameworks such as FastAPI and Redis.
· Lead development of LLM applications using Agents, MCP, Agentic RAG, GraphRAG, and multi-agent orchestration patterns.
· Define and implement LLM evaluation frameworks, including RAG evaluation, prompt evaluation, latency, cost, and quality metrics.
· Apply prompt management best practices and lifecycle governance to improve accuracy and reliability.
· Oversee integration with enterprise cloud and AI platforms including OpenAI, Anthropic Claude, Azure OpenAI, AWS Bedrock, Google Vertex AI, and Snowflake Cortex AI.
· Design and manage containerized deployments using Docker and Kubernetes.
· Apply LLMOps practices including monitoring, observability, prompt/version management, and cost optimization for production systems.
· Lead technical discovery sessions and provide hands-on guidance to engineering teams.
· Collaborate with cross-functional product, data, and platform teams in a client-facing environment.
· Mentor engineers and contribute to knowledge sharing and architectural best practices.
· Drive continuous improvement in system scalability, reliability, and maintainability.
Required Qualifications
· 8–15 years of experience in AI/ML or software engineering, with 2+ years in Generative AI and LLM applications.
· Expert-level Python programming skills with proven production-grade code quality.
· Strong experience with microservices architectures, modern design patterns, FastAPI, and Redis.
· Extensive hands-on experience building GenAI applications, including Agents, MCP ecosystems, RAG, Agentic RAG, and GraphRAG.
· Deep practical knowledge of LangGraph, LangChain, and LLM orchestration frameworks.
· Proven experience integrating OpenAI, Anthropic Claude, Azure OpenAI, AWS Bedrock, Google Vertex AI, and Snowflake Cortex AI.
· Strong experience deploying GenAI solutions on Azure, AWS, Google Cloud Platform, and Snowflake platforms.
· Hands-on experience with vector databases such as Pinecone, Weaviate, Qdrant, or Chroma.
· Solid understanding of Docker and Kubernetes for containerization and orchestration.
· Practical experience with LLM evaluation, RAG evaluation, prompt management, and LLMOps practices.
· Demonstrated ability to deliver scalable, production-ready GenAI systems.
· Strong leadership skills with the ability to guide teams and engage directly with clients.
Nice to Have
· Background in traditional machine learning, including feature engineering, model training, and evaluation.
· Experience with advanced multi-agent systems, Agent-to-Agent (A2A) communication, and MCP-based ecosystems.
· Hands-on experience with LLMOps and observability platforms such as LangSmith, Opik, or Azure AI Foundry.
· Experience with knowledge graphs, hybrid symbolic–LLM systems, or fine-tuning techniques.
· Prior consulting or enterprise client-facing experience.