Candidates must be authorized to work in the U.S. without sponsorship
Position: Gen AI Agentic
Location: Dallas, TX or Charlotte, NC
Must Have Skills:
Gen AI
Agentic AI
MLOps
Python
ML
Data Science
RAG
LLM
Nice to Have Skills:
Google Cloud Platform
Prompt Engineering
Detailed Job Description:
Key Responsibilities:
Design and implement Generative AI models for text, image, or multimodal applications.
Develop prompt engineering strategies and embedding-based retrieval systems (see the retrieval sketch after this list).
Integrate Gen AI capabilities into web applications and enterprise workflows.
Build agentic AI applications with context engineering and MCP tools.
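For context on the embedding-based retrieval work described above, here is a minimal sketch of the pattern (illustrative only, not part of the requirements; `embed` is a hypothetical placeholder for whatever embedding model the team uses, e.g. a Hugging Face or cloud-hosted encoder):

```python
# Minimal sketch of embedding-based retrieval for a RAG pipeline.
# `embed` is a hypothetical placeholder; a real pipeline would call an
# embedding model (Hugging Face, Vertex AI, SageMaker endpoint, etc.).
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: pseudo-embedding so the sketch runs standalone.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    # Rank documents by cosine similarity to the query embedding.
    query_vec = embed(query)
    ranked = sorted(documents,
                    key=lambda d: cosine_similarity(query_vec, embed(d)),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    # Assemble the retrieved context into a prompt for the generator LLM.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```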
Required Skills & Qualifications:
10+ years of hands-on experience in AI, data science, ML, and Gen AI.
Strong hands-on experience designing and deploying Retrieval-Augmented Generation (RAG) pipelines.
Strong MLOps/LLMOps experience, including CI/CD automation.
Extensive experience with LangChain, LangGraph, and agentic AI patterns, including routing, memory, multi-agent orchestration, guardrails, and failure recovery (a framework-agnostic orchestration sketch appears after this list).
Experience in cloud-native engineering across AWS (SageMaker, Lambda, ECS/Fargate, S3, API Gateway, Step Functions) and Google Cloud Platform (Vertex AI) for scalable AI delivery.
Experience developing microservices and APIs using FastAPI, REST, Pydantic/JSON schemas, Docker, and Kubernetes for low-latency serving (see the FastAPI sketch after this list).
Strong hands-on experience with vector databases and semantic search technologies, including Pinecone, FAISS, ChromaDB, and embedding lifecycle management.
Strong proficiency in Python and AI/ML frameworks (PyTorch, TensorFlow).
Hands-on experience using session state and memory to build multi-agent systems, along with MCP tools.
Hands-on experience with LLMs, transformers, and Hugging Face ecosystem.
Knowledge of and experience with vector databases and RAG techniques for semantic search.
Familiarity with cloud AI services (AWS SageMaker, Azure OpenAI, Google Cloud Platform Vertex AI).
Understanding of MLOps practices for scalable AI deployment.
Strong experience with LLM fine-tuning using LoRA, QLoRA, and PEFT.
Strong experience architecting advanced RAG systems using Pinecone, FAISS, Weaviate, Chroma, hybrid retrieval, and custom embeddings.
Strong experience designing end-to-end LLMOps/MLOps pipelines using MLflow, DVC, SageMaker Pipelines, Vertex AI Pipelines, and GitHub Actions.
Experience using cloud-native AI systems on AWS (SageMaker, Lambda, EKS, EC2, Step Functions, S3, Glue) and Google Cloud Platform Vertex AI, supporting high-volume inference and secure enterprise operations.
Experience developing multi-agent orchestration workflows using LangGraph and CrewAI for tool calling, validation agents, automated reasoning, and workflow supervision.
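For the FastAPI/Pydantic serving requirement above, a minimal sketch of a generation endpoint (illustrative only; `run_model` is a hypothetical hook standing in for whatever LLM backend is in use):

```python
# Minimal sketch of a low-latency generation microservice with FastAPI.
# `run_model` is a hypothetical hook; a real service would call an LLM
# backend (SageMaker endpoint, Vertex AI, vLLM, etc.) here.
from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI(title="gen-ai-service")

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = Field(default=256, ge=1, le=4096)

class GenerateResponse(BaseModel):
    completion: str

def run_model(prompt: str, max_tokens: int) -> str:
    # Placeholder so the sketch runs without a model dependency.
    return f"[echo up to {max_tokens} tokens] {prompt}"

@app.post("/generate", response_model=GenerateResponse)
def generate(req: GenerateRequest) -> GenerateResponse:
    # Pydantic validates the request schema before the handler runs.
    return GenerateResponse(completion=run_model(req.prompt, req.max_tokens))

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)
```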
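And a framework-agnostic sketch of the routing, memory, guardrail, and multi-agent orchestration pattern named above (plain Python rather than LangGraph or CrewAI, so the structure is visible without depending on a specific library API; the agent and validation functions are hypothetical):

```python
# Framework-agnostic sketch of a routing + multi-agent pattern:
# a router picks a specialist agent, a guardrail validates the result,
# and shared memory carries context between turns.
from typing import Callable

Agent = Callable[[str, list[str]], str]

def research_agent(task: str, memory: list[str]) -> str:
    return f"research notes for: {task}"

def writer_agent(task: str, memory: list[str]) -> str:
    context = " | ".join(memory)
    return f"draft for '{task}' using context: {context}"

AGENTS: dict[str, Agent] = {"research": research_agent, "write": writer_agent}

def route(task: str) -> str:
    # Trivial keyword router; a real system might use an LLM call or rules.
    return "write" if "draft" in task.lower() else "research"

def validate(output: str) -> bool:
    # Guardrail placeholder: reject empty or overly long outputs.
    return 0 < len(output) < 10_000

def run(task: str, memory: list[str]) -> str:
    agent = AGENTS[route(task)]
    output = agent(task, memory)
    if not validate(output):
        raise ValueError("guardrail rejected agent output")  # failure-recovery hook
    memory.append(output)  # persist result for downstream agents
    return output

memory: list[str] = []
run("Research agentic AI patterns", memory)
print(run("Draft a summary of the findings", memory))
```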