AI Solution Lead Engineer

Remote • Posted 30+ days ago • Updated 5 hours ago
Contract W2 • Contract Independent
No Travel Required
Compensation: Depends on Experience


Job Details

Skills

  • Gen AI
  • Microservices
  • LLM
  • Vertex AI
  • API
  • GCP
  • AWS
  • ML

Summary

Job Title: AI Solution Lead Engineer – Generative AI & LLM Applications

Type: Open to both full-time and contract

Location: Remote

Role Overview

We are seeking an AI Solution Lead Engineer – Generative AI & LLM Applications to design, architect, and build production-grade GenAI solutions for enterprise clients.

This role combines hands-on engineering, solution architecture, and technical leadership, with responsibility for defining best practices, building reusable accelerators, and guiding delivery teams across complex GenAI initiatives.

The first flagship product will focus on Conversational Analytics / GenBI within the Snowflake ecosystem, leveraging AI managed services and native LLM capabilities to enable natural language analytics at scale.

Key Responsibilities

· Lead architecture and development of the initial Conversational Analytics / GenBI product on Snowflake, leveraging Snowflake Cortex AI, semantic models, and AI managed services.

· Design end-to-end GenAI architectures including RAG, Agentic RAG, GraphRAG, Agents, and Multi-Agent Systems for enterprise use cases.

· Define reference architectures, reusable accelerators, and solution blueprints to standardize GenAI delivery.

· Build production-grade Python applications with strong emphasis on code quality, testing, and maintainability.

· Implement microservices architectures with established design patterns, using modern frameworks and tooling such as FastAPI and Redis.

· Lead development of LLM applications using Agents, MCP, Agentic RAG, GraphRAG, and multi-agent orchestration patterns.

· Define and implement LLM evaluation frameworks, including RAG evaluation, prompt evaluation, latency, cost, and quality metrics.

· Apply prompt management best practices and lifecycle governance to improve accuracy and reliability.

· Oversee integration with enterprise cloud and AI platforms including OpenAI, Anthropic Claude, Azure OpenAI, AWS Bedrock, Google Vertex AI, and Snowflake Cortex AI.

· Design and manage containerized deployments using Docker and Kubernetes.

· Apply LLMOps practices including monitoring, observability, prompt/version management, and cost optimization for production systems.

· Lead technical discovery sessions and provide hands-on guidance to engineering teams.

· Collaborate with cross-functional product, data, and platform teams in a client-facing environment.

· Mentor engineers and contribute to knowledge sharing and architectural best practices.

· Drive continuous improvement in system scalability, reliability, and maintainability.

Required Qualifications

· 8–15 years of experience in AI/ML or software engineering, with 2+ years in Generative AI and LLM applications.

· Expert-level Python programming skills with proven production-grade code quality.

· Strong experience with microservices architectures and modern design patterns, including hands-on work with FastAPI and Redis.

· Extensive hands-on experience building GenAI applications, including Agents, MCP ecosystems, RAG, Agentic RAG, and GraphRAG.

· Deep practical knowledge of LangGraph, LangChain, and LLM orchestration frameworks.

· Proven experience integrating OpenAI, Anthropic Claude, Azure OpenAI, AWS Bedrock, Google Vertex AI, and Snowflake Cortex AI.

· Strong experience deploying GenAI solutions on Azure, AWS, Google Cloud Platform, and Snowflake platforms.

· Hands-on experience with vector databases such as Pinecone, Weaviate, Qdrant, or Chroma.

· Solid understanding of Docker and Kubernetes for containerization and orchestration.

· Practical experience with LLM evaluation, RAG evaluation, prompt management, and LLMOps practices.

· Demonstrated ability to deliver scalable, production-ready GenAI systems.

· Strong leadership skills with the ability to guide teams and engage directly with clients.

Nice to Have

· Background in traditional machine learning, including feature engineering, model training, and evaluation.

· Experience with advanced multi-agent systems, Agent-to-Agent (A2A) communication, and MCP-based ecosystems.

· Hands-on experience with LLMOps and observability platforms such as LangSmith, Opik, or Azure AI Foundry.

· Experience with knowledge graphs, hybrid symbolic–LLM systems, or fine-tuning techniques.

· Prior consulting or enterprise client-facing experience.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: 91005834
  • Position Id: 8862462