Job Details
We are looking for a Sr. Consultant, Data Management and Governance for a permanent opportunity in Atlanta, GA. Our client is a global organization that leverages technology and innovation to improve the way industries operate and communities thrive. The company is committed to using data, AI, and digital transformation to drive sustainable impact, connecting people, processes, and platforms to deliver smarter, more responsible solutions worldwide.
Role and Responsibilities:
The Senior Consultant, AI Data Management & Governance develops risk-assessment methods and implements controls that keep the company's AI ecosystem (spanning MLOps, LLMOps, third-party SaaS assistants, and agentic process automation (APA) platforms) safe, compliant, and trustworthy. Working with Security, Legal, Data Privacy, Platform Engineering, and Product teams, you will quantify AI-specific risks (e.g., model bias, prompt injection, tool-calling abuse), set enterprise guardrails, and ensure practical controls are embedded in every AI product and platform service.
Key Accountabilities:
Strategic Planning:
- Establish AI-specific risk categories (model risk, data-privacy leakage, third-party SaaS exposure, agent autonomy limits).
- Conduct complex risk assessments that quantify potential business impact and map exposure to the enterprise risk-appetite statement.
Policy Development and Governance:
- Maintain the enterprise AI risk register; score and prioritize risks arising from internal models, external APIs (OpenAI, Gemini, Anthropic), and APA tools.
- Develop due-diligence playbooks for vendor LLMs, SaaS copilots, optimization solvers (e.g., Gurobi Cloud), and hosted agent runtimes.
- Help create and maintain the AI Technology Governance Policy, including requirements for data sourcing, model evaluation, prompt safety, and human-in-the-loop review.
- Align internal standards to external frameworks such as NIST AI RMF, ISO/IEC 42001, and upcoming regional AI Acts (e.g., EU AI Act).
Data Quality:
- Translate policy into technical controls (e.g., model-card metadata, bias tests, prompt-filter APIs, secrets management, lineage tracking) and verify deployment in MLOps/LLMOps pipelines.
- Lead periodic control testing, red-team exercises, and Responsible-AI reviews.
Compliance and Risk Management:
- Monitor global AI-related regulations and map new obligations to policy updates and platform backlog items.
- Coordinate evidence collection for audits and certifications (SOC 2, ISO 27001/42001).
Data Operations and Stewardship:
- Define key risk indicators (KRIs) and performance metrics (e.g., model-drift incidents, unapproved prompt exceptions, vendor AI SLA breaches).
Requirements:
- 5+ years of relevant experience.
- 2+ years leading or coaching multidisciplinary teams on emerging-technology risk (AI/ML, cloud SaaS, or automation platforms) preferred.