Job Title: Senior Cloud GenAI Governance Engineer
Location: Charlotte, NC (Hybrid – 3 Days Onsite Required)
Job Description:
We are seeking a Senior Cloud GenAI Governance Engineer to support enterprise-scale Generative AI platform engineering and AI runtime governance initiatives. The ideal candidate will have deep expertise in GenAI/LLM infrastructure, AI security guardrails, observability, cloud-native infrastructure, and distributed inference operations.
Key Responsibilities:
• Design and implement enterprise AI governance and runtime security frameworks for Generative AI and LLM platforms
• Develop and manage AI guardrails, compliance controls, prompt filtering, and runtime security solutions
• Implement and support Model Armor / AI Armor solutions for secure inference and request/response inspection
• Utilize Arize AI for AI observability, telemetry, inference tracing, and model behavior analysis
• Support API lifecycle tracing, API gateway operations, and distributed inference debugging
• Troubleshoot runtime bottlenecks, rate limiting, token throughput issues, and gateway failures
• Manage Kubernetes-based AI infrastructure and Terraform/IaC deployments across cloud environments
• Maintain and optimize Azure and Google Cloud Platform landing zones supporting enterprise AI workloads
• Collaborate with platform engineering, security, and infrastructure teams to support responsible AI initiatives
Required Skills:
• Strong experience with Generative AI (GenAI) platforms, LLM infrastructure, and AI runtime governance
• Expertise in AI security, AI guardrails, compliance, and observability frameworks
• Hands-on experience with Model Armor / ArmorCode or related AI security tooling
• Experience with Arize AI or enterprise AI observability platforms
• Strong Kubernetes (K8s) administration and troubleshooting skills
• Experience with Terraform and Infrastructure as Code (IaC)
• Strong cloud infrastructure experience with Azure and/or Google Cloud Platform
• Experience with APIs, API Gateways, inference runtime management, and distributed AI serving architectures
• Strong understanding of enterprise AI platform operations and cloud networking
Preferred Qualifications:
• Experience with distributed inference systems and LLM serving platforms
• Exposure to responsible AI and enterprise AI governance frameworks
• Experience supporting large-scale AI infrastructure environments
• Strong troubleshooting and incident management skills in cloud-native ecosystems