About Us
We are a small, Agile/Scrum team delivering AI and digital transformation solutions to enterprise clients, including Fortune 500 businesses. Our team thrives on collaboration, innovation, and the freedom to experiment with the latest technologies. We move fast, stay curious, and take pride in delivering expert-level work to organizations navigating the rapidly evolving AI landscape.
About the Role
We are seeking an experienced AI Governance Specialist to serve as a trusted advisor to enterprise clients navigating the complexities of AI infrastructure strategy. In this role, you will assess client environments—whether cloud-based, on-premises, or hybrid—and deliver well-grounded, actionable recommendations that align AI capabilities with business objectives, regulatory requirements, and organizational risk tolerance.
What You'll Do Day-to-Day
· Meet with enterprise clients to understand their current AI infrastructure, business goals, and risk posture, then develop tailored governance strategies and implementation roadmaps.
· Conduct comprehensive assessments of AI environments across cloud platforms (Azure, AWS, Google Cloud Platform), on-premises data centers, and hybrid architectures to evaluate readiness, compliance gaps, and optimization opportunities.
· Develop and present AI governance frameworks addressing data privacy, model transparency, bias mitigation, regulatory compliance (GDPR, NIST AI RMF, EU AI Act, SOC 2, HIPAA), and ethical AI principles.
· Conduct AI-specific threat modeling using frameworks such as MAESTRO (Multi-Agent Environment, Security, Threat, Risk, and Outcome) for agentic AI systems and LINDDUN for privacy threat analysis, complementing traditional approaches like STRIDE and PASTA to address risks unique to autonomous, learning systems.
· Advise executive stakeholders and technical leadership on AI deployment strategies, including infrastructure selection, cost-benefit analysis, and total cost of ownership modeling for cloud vs. on-prem vs. hybrid architectures.
· Design and implement AI governance policies covering model lifecycle management, data lineage, access controls, audit trails, and responsible AI practices.
· Evaluate and recommend tooling and platforms for AI observability, model monitoring, drift detection, and performance benchmarking within client environments.
· Partner with internal engineering and sales teams to translate governance requirements into technical solution architectures and proposal deliverables.
· Stay current on evolving AI regulations, industry standards, and emerging governance frameworks to proactively advise clients on compliance and risk mitigation.
· Facilitate workshops, executive briefings, and technical deep-dives with enterprise clients to build consensus around AI governance roadmaps.
· Participate in Agile ceremonies including sprint planning, standups, and retrospectives with the delivery team.
Required Qualifications
· 8+ years of experience in AI/ML engineering, cloud architecture, IT governance, or a closely related field, with at least 3 years focused specifically on AI governance, compliance, or risk management.
· Deep understanding of enterprise cloud platforms (Azure, AWS, Google Cloud Platform) and on-premises AI infrastructure, including GPU compute, networking, storage, and security considerations.
· Demonstrated expertise in AI regulatory frameworks and standards such as NIST AI RMF, EU AI Act, ISO/IEC 42001, SOC 2, and GDPR as they pertain to AI systems.
· Proficiency in AI-specific threat modeling frameworks, particularly MAESTRO for agentic AI threat analysis across its seven-layer reference architecture (foundation models, data operations, agent frameworks, deployment infrastructure, evaluation and observability, security and compliance, agent ecosystem) and LINDDUN for systematic privacy threat identification. Ability to extend and complement traditional threat modeling approaches (STRIDE, PASTA) with these AI-focused methodologies.
· Hands-on experience with AI/ML lifecycle management tools, model registries, and monitoring platforms.
· Strong understanding of data governance principles, including data classification, lineage, sovereignty, and privacy-preserving techniques (federated learning, differential privacy).
· Proven ability to engage C-level executives, translate complex technical concepts into business-aligned recommendations, and drive consensus in enterprise settings.
· Excellent written and verbal communication skills, with experience producing governance documentation, executive presentations, and compliance reports.
Preferred Qualifications
· Relevant certifications: CISA, CRISC, CGEIT, AWS/Azure/Google Cloud Platform AI or Solutions Architect certifications, or IAPP privacy certifications (CIPP, CIPM).
· Experience advising regulated industries such as financial services, healthcare, government, or energy.
· Familiarity with responsible AI toolkits (Microsoft Responsible AI Toolkit, IBM AI Fairness 360, Google What-If Tool).
· Background in developing or auditing AI systems for fairness, explainability, and accountability.
· Familiarity with OWASP GenAI Security Project resources (including the Multi-Agentic System Threat Modeling Guide) and Cloud Security Alliance (CSA) AI security publications.
Education
· Bachelor's degree in Computer Science, Information Security, Data Science, Engineering, or a related field required.
· Master's degree in a relevant discipline preferred.
· Equivalent professional experience and certifications will be considered in lieu of formal education.
What We Offer
· Competitive salary and comprehensive benefits package
· Fully remote work environment
· Predictable Monday–Friday, 9:00 AM – 5:00 PM ET schedule with weekends off
· A small, collaborative Agile/Scrum team where your voice matters
· Work with Fortune 500 clients and cutting-edge enterprise AI challenges
· Freedom to experiment and learn with the latest technologies
· Professional development support and continuous learning opportunities