Senior AI Consultant (Enterprise AI Assessment & Governance)
Hybrid in Alpharetta, GA, US • Posted 14 days ago • Updated 14 days ago

Improving Corporate Services
Job Details
Skills
- Artificial Intelligence
- Machine Learning (ML)
- Data Science
- LLM
Summary
Location: Atlanta, Houston, or Minneapolis (Hybrid)
Level: Senior to Principal
Engagement Type: Full-Time or C2C
Allocation: 100% Dedicated | ~60% Technical Assessment / ~40% Stakeholder Facilitation
Role Overview
We are seeking a Senior AI Consultant to join an active enterprise transformation engagement focused on evaluating and strengthening a large-scale AI portfolio.
This role operates in two primary modes:
Deep Technical Assessment – evaluating enterprise AI/ML systems, MLOps maturity, and production model quality.
Consultative Facilitation – serving as a technical bridge between AI program teams and executive stakeholders, informing AI scorecards, governance frameworks, and portfolio strategy.
This is not a narrowly scoped build role. It is a transformation-focused engagement requiring strong technical depth, structured thinking, and executive presence.
You will report to the AI Swimlane Lead and collaborate across multiple business units within a complex enterprise environment.
What You’ll Do
1. MLOps & Model Lifecycle Assessment (Enterprise-Level / Macro)
Evaluate end-to-end ML workflows across multiple AI programs and produce a maturity grading rubric tailored to a large enterprise.
You will assess:
- ML lifecycle processes: data ingestion, feature engineering, training, validation, deployment, monitoring
- MLOps tooling and patterns: experiment tracking, model registries, CI/CD for ML, feature stores, A/B testing infrastructure
- Governance and auditability: model cards, lineage tracking, reproducibility standards
- Organizational maturity frameworks (e.g., Google MLOps maturity levels, ML Test Score), adapting or building a custom rubric appropriate for a Fortune 20 environment
Deliverables include clear maturity scoring, risk identification, and practical recommendations for improvement.
2. Model-Level Technical Review (Deep-Dive / Micro)
Perform technical assessments on a shortlist of high-value production models.
You must be able to critically evaluate:
- Algorithm and architecture selection (classical ML, deep learning, transformer-based, ensemble methods)
- Fine-tuning and transfer learning approaches (including LLM/GenAI use cases where applicable)
- Training methodology: data splits, regularization, hyperparameter tuning, compute efficiency
- Feature engineering rigor and data pipeline integrity
- Model performance metrics in business context (e.g., precision/recall tradeoffs aligned to operational impact, not just accuracy scores)
This requires hands-on applied ML experience and the ability to move beyond theoretical evaluation into practical enterprise constraints.
3. Consultative Facilitation & Governance Support
Act as a technical credibility layer within AI scorecard and governance discussions.
You will:
- Translate technical model performance into business-relevant language (e.g., model precision → call center ticket reduction → OPEX impact)
- Support scorecard taxonomy development by helping technical teams articulate measurable KPIs and data lineage
- Participate in stakeholder workshops with AI program leaders
- Present findings clearly to senior technical leaders and executive-adjacent audiences
- Build concise, executive-ready presentation materials summarizing assessment outcomes
This role does not own scorecard deliverables, but materially informs them.
Ideal Background
- Several years of applied ML / data science experience
- Experience evaluating or auditing ML programs (internal platform team, ML consulting, enterprise architecture, or AI governance role)
- Comfortable operating in ambiguous, transformation-focused environments
- Strong communicator who can engage senior technical leaders without oversimplifying or hiding behind jargon
- Experience in large, multi-business-unit enterprises (telecom or similarly complex industries preferred but not required)
- Comfortable building and presenting executive-level decks summarizing technical findings
What This Role Is Not
- Not a full-stack engineering or production build role
- Not exclusively a GenAI/LLM specialist position — classical ML depth is equally valuable
- Not the primary owner of scorecard deliverables
Why This Role Is Unique
This is a rare opportunity to evaluate and influence AI maturity at scale within a large enterprise. You will operate at the intersection of technical depth, governance design, and executive advisory — shaping how AI is measured, governed, and improved across the organization.
- Dice Id: 10263014
- Position Id: 8883870
Company Info
Improving is the leading IT consulting and software engineering company in North America. We help enterprises and organizations solve their most complex technology challenges through modern software development, technology consulting, agile training, and team augmentation services. Whether your business needs to understand the impact of a new initiative, deploy a new application, or partner with a trusted firm that can assimilate into your team, Improving is here to help! We are dedicated to educating and supporting your business each step of the way.