Job Title: Principal GenAI Engineer - Knowledge Graph & Semantic Systems
Location: Onsite - New York City, NY
Duration: 6 Months (Extendable)
Interview Process: Coding Round, Technical Interview, and End-Client Interview (3-4 rounds total)
Job Description
About the Role
We are seeking a Principal Generative AI Engineer with deep expertise in Large Language Models (LLMs) and Knowledge Graph systems to lead the design and implementation of enterprise-scale AI solutions.
This role focuses on building Graph-powered Retrieval-Augmented Generation (Graph-RAG) systems that combine structured semantic reasoning with advanced LLM architectures to deliver scalable, explainable, and production-ready AI platforms.
The ideal candidate will have strong experience developing knowledge graph architectures, semantic data models, and hybrid AI retrieval systems that integrate structured graph reasoning with modern generative AI frameworks.
Key Responsibilities
Generative AI System Development
Design and develop enterprise-grade generative AI applications powered by LLMs.
Implement Retrieval-Augmented Generation (RAG) pipelines and AI agent workflows.
Develop scalable AI services and APIs for enterprise deployment.
Integrate LLM-based applications with enterprise data systems.
Knowledge Graph Architecture
Design and scale enterprise knowledge graph architectures supporting complex data relationships.
Develop ontologies, taxonomies, and semantic data models to structure enterprise knowledge.
Implement entity resolution, relationship extraction, and graph enrichment processes.
Build semantic knowledge layers that enhance reasoning capabilities of AI systems.
Graph-RAG & Hybrid Retrieval Systems
Build Graph-RAG architectures that combine knowledge graphs with vector-based retrieval.
Integrate structured graph reasoning with LLMs to improve response accuracy and reduce hallucinations.
Develop hybrid search architectures combining knowledge graphs and vector databases.
Enable explainable AI capabilities through graph-based reasoning and semantic context.
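To make the Graph-RAG pattern above concrete, here is a minimal, self-contained sketch of hybrid retrieval: structured one-hop facts from a toy in-memory knowledge graph are merged with top-k passages from a toy vector index into a single grounded context block. All entity names, data, and functions here are hypothetical; a production system would use a real graph database and vector database rather than Python dictionaries.

```python
# Minimal Graph-RAG hybrid-retrieval sketch (illustrative, stdlib only).
from math import sqrt

# Toy knowledge graph: entity -> list of (relation, target) triples.
GRAPH = {
    "Acme Corp": [("acquired", "WidgetCo"), ("headquartered_in", "New York")],
    "WidgetCo": [("produces", "widgets")],
}

# Toy vector index: passage -> embedding (tiny hand-made vectors).
VECTORS = {
    "Acme Corp acquired WidgetCo in 2021.": [1.0, 0.0],
    "Widgets are small mechanical parts.": [0.0, 1.0],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def graph_neighbors(entity):
    """Structured evidence: one-hop triples for a recognized entity."""
    return [f"{entity} {rel} {tgt}" for rel, tgt in GRAPH.get(entity, [])]

def vector_search(query_vec, k=1):
    """Unstructured evidence: top-k passages by cosine similarity."""
    ranked = sorted(VECTORS, key=lambda p: cosine(VECTORS[p], query_vec),
                    reverse=True)
    return ranked[:k]

def hybrid_retrieve(entity, query_vec):
    """Graph-RAG core idea: merge graph facts with vector hits into one
    grounded context string that would be passed into the LLM prompt."""
    return "\n".join(graph_neighbors(entity) + vector_search(query_vec))

print(hybrid_retrieve("Acme Corp", [1.0, 0.1]))
```

Grounding the prompt in explicit graph triples, rather than retrieved text alone, is what enables the accuracy, explainability, and hallucination-reduction goals listed above.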
AI Infrastructure & Deployment
Deploy GenAI systems in cloud environments such as AWS, Azure, or Google Cloud Platform.
Design scalable infrastructure for LLM orchestration, graph databases, and vector search systems.
Optimize performance and reliability of production AI systems.
Technical Leadership
Provide technical leadership and architecture guidance for complex AI implementations.
Collaborate with cross-functional teams including data scientists, ML engineers, and platform engineers.
Guide best practices for AI architecture, knowledge graph modeling, and GenAI deployment.
Required Qualifications
10+ years of experience in machine learning, artificial intelligence, or related engineering fields.
2+ years of hands-on experience with Large Language Models (LLMs).
5+ years of production experience working with Knowledge Graph technologies.
Strong programming skills in Python.
Experience with LangChain, LangGraph, or similar AI orchestration frameworks.
Strong proficiency in SQL and data modeling.
Experience deploying AI solutions in AWS, Azure, or Google Cloud Platform environments.
Mandatory Knowledge Graph Expertise
Candidates must demonstrate strong experience in the following areas:
Designing and scaling enterprise knowledge graph architectures.
Developing ontologies, taxonomies, and semantic data models.
Implementing entity resolution and relationship extraction pipelines.
Working with enterprise graph databases.
Strong hands-on experience with Cypher or similar graph query languages.
Building hybrid retrieval systems combining knowledge graphs with vector databases.
Integrating graph-based reasoning with LLM systems to improve accuracy, explainability, and reliability.
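As a minimal illustration of the entity-resolution expertise listed above, the sketch below normalizes entity surface forms and fuzzy-matches them against a canonical list using the standard library's SequenceMatcher. The entity names and threshold are hypothetical; production pipelines typically use trained matchers, blocking strategies, and graph context rather than string similarity alone.

```python
# Illustrative entity-resolution sketch (hypothetical data, stdlib only).
from difflib import SequenceMatcher

# Canonical entities as they would appear in the knowledge graph.
CANONICAL = ["Acme Corporation", "WidgetCo Inc."]

def normalize(name: str) -> str:
    """Lowercase and strip trailing punctuation and common corporate suffixes."""
    name = name.lower().strip().rstrip(".")
    for suffix in (" inc", " corp", " corporation", " llc"):
        if name.endswith(suffix):
            name = name[: -len(suffix)]
    return name.strip()

def resolve(mention: str, threshold: float = 0.8):
    """Return the best-matching canonical entity above the threshold, else None."""
    best, best_score = None, 0.0
    for entity in CANONICAL:
        score = SequenceMatcher(None, normalize(mention),
                                normalize(entity)).ratio()
        if score > best_score:
            best, best_score = entity, score
    return best if best_score >= threshold else None

print(resolve("ACME Corp."))  # matches "Acme Corporation"
```

The same resolved canonical IDs are what relationship-extraction and graph-enrichment steps attach new triples to, keeping the graph free of duplicate nodes.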
Technical Skills Summary
Programming: Python, SQL, data modeling
AI / GenAI: LLMs, RAG pipelines, Graph-RAG, AI agent workflows
AI Frameworks: LangChain, LangGraph, or similar orchestration frameworks
Knowledge Graph Technologies: ontologies, taxonomies, semantic data models, entity resolution
Graph Databases: Cypher or similar graph query languages
Cloud Platforms: AWS, Azure, Google Cloud Platform
Role Overview
This is a principal-level engineering role focused on designing enterprise-grade GenAI architectures that combine Knowledge Graph reasoning with modern LLM systems.
The ideal candidate will have experience delivering production-ready AI platforms that support explainable AI, semantic reasoning, and scalable enterprise knowledge systems.