Position: AI/ML Engineer
Location: Charlotte, NC or Dallas, TX (Onsite – 5 days/week)
Additional Requirement:
Two professional references (one current, one past) are mandatory, each with a LinkedIn URL, official email address, and contact number
Role Overview
We are seeking an experienced AI Engineer specializing in Generative AI and Agentic Systems to design and build enterprise-grade AI applications using modern large language model (LLM) architectures.
The ideal candidate will focus on developing agentic AI systems, implementing advanced retrieval-augmented generation (RAG) and GraphRAG pipelines, and building scalable APIs that integrate LLMs with enterprise data platforms. This role requires strong expertise in AI engineering, cloud deployment, and LLMOps practices to deliver robust, production-ready AI solutions.
Key Responsibilities
· Design and develop GenAI applications leveraging foundation models and advanced architectures such as GraphRAG
· Build autonomous AI agents using modern agentic frameworks
· Develop and deploy RAG-based services using Python frameworks such as FastAPI, along with Docker and cloud platforms
· Build scalable REST APIs to support LLM-driven applications integrated with enterprise data sources
· Implement LLM evaluation frameworks using tools like Ragas, LangSmith, or custom benchmarking approaches
· Apply LLMOps/MLOps best practices including CI/CD pipelines, prompt management, automated testing, and monitoring
· Develop systems utilizing embeddings, knowledge graphs, and ontology extraction
· Collaborate with cross-functional engineering teams to design and enhance GraphRAG and agentic AI architectures
Required Qualifications
· Bachelor’s or Master’s degree in Computer Science, AI/ML, or a related field
· 5+ years of experience in AI/ML-focused software engineering
· Proven experience building and deploying LLM-based or agentic AI systems in production environments
· Strong programming expertise in Python and modern AI frameworks
· Hands-on experience with RAG/GraphRAG implementations and large-scale embedding systems
· Experience deploying AI workloads on AWS, Azure, or Google Cloud Platform
· Familiarity with LLMOps/MLOps tools and model evaluation frameworks
· Strong problem-solving, communication, and collaboration skills