Data Engineer (Manager)

  • Chicago, IL
  • Posted 6 hours ago | Updated 6 hours ago

Overview

Remote
On Site
Full Time

Skills

Innovation
IoT
Trading
Distribution
Shipping
Professional Development
Quality Assurance
Project Planning
Resource Allocation
Extraction
Data Architecture
Continuous Integration
Continuous Delivery
Automated Testing
Partnership
Business Development
Technical Direction
Ad Hoc Reporting
Value Engineering
Extract
Transform
Load
ELT
Coaching
Mentorship
Code Review
Performance Management
PySpark
SQL
Testing
Documentation
Workflow
Apache Airflow
Microsoft
Scheduling
Management
Orchestration
Data Modeling
Dimensional Modeling
Normalization
Use Cases
Communication
Customer Relationship Management (CRM)
Computer Science
Mathematics
Training
Vector Databases
Real-time
Apache Kafka
Apache Spark
Streaming
Apache Flink
Change Data Capture
Artificial Intelligence
Data Integration
Data Quality
Soda
Monte Carlo
Data Governance
Unity Catalog
Python
Data Processing
Cloud Computing
Snowflake
Databricks
Microsoft Azure
Amazon Web Services
Data Analysis
Open Source
Data Engineering
Collaboration
Financial Services
Manufacturing
Energy
Investments
Dashboard
Pricing
Analytics
Data Science
Machine Learning (ML)
IT Management
Leadership

Job Details

Huron is a global consultancy that collaborates with clients to drive strategic growth, ignite innovation and navigate constant change. Through a combination of strategy, expertise and creativity, we help clients accelerate operational, digital and cultural transformation, enabling the change they need to own their future.

Join our team as the expert you are now and create your future.

We're seeking a Data Engineering Manager to join the Data Science & Machine Learning team in our Commercial Digital practice, where you'll lead the design, development, and delivery of data infrastructure that powers intelligent systems across Financial Services, Manufacturing, Energy & Utilities, and other commercial industries.

Managers play a vibrant, integral role at Huron. Their knowledge is reflected in the projects they manage and the teams they lead. Known for building long-standing partnerships with clients, they collaborate with colleagues to solve clients' most important challenges. Our Managers also spend significant time mentoring junior staff on the engagement team, sharing expertise, feedback, and encouragement. This promotes a culture of respect, unity, collaboration, and personal achievement.

This isn't a maintenance role or a ticket queue: you'll own the full data lifecycle from source integration through analytics-ready delivery, while leading and developing a team of data engineers. You'll build systems that matter: real-time data architectures that feed mission-critical ML models, transformation layers that turn messy enterprise data into trusted datasets, and orchestration systems that ensure reliability at scale. Our clients are Fortune 500 companies looking for partners who can engineer and lead, not just advise.

The variety is real. In your first year, you might lead a lakehouse implementation for a global manufacturer's IoT data, oversee a real-time streaming architecture for a financial services firm's trading analytics, and architect a data mesh strategy for a utility company's distribution systems, all while developing the next generation of data engineering talent at Huron. If you thrive on solving complex data challenges, shipping production systems, and building high-performing teams, this role is for you.

What You'll Do
  • Lead and mentor junior data engineers: provide technical guidance, conduct code reviews, and support professional development. Foster a culture of continuous learning and high-quality engineering practices within the team.
  • Manage complex multi-workstream data engineering projects: oversee project planning, resource allocation, and delivery timelines. Ensure projects meet quality standards and client expectations while maintaining technical excellence.
  • Design and architect end-to-end data solutions, from source extraction and ingestion through transformation, quality validation, and delivery. Make key technical decisions and own the overall data architecture.
  • Lead development of modern data transformation layers using dbt: implement modular SQL models, testing frameworks, documentation, and CI/CD practices that ensure data quality and maintainability at scale.
  • Architect lakehouse solutions using open table formats (Delta Lake, Apache Iceberg) on Microsoft Fabric, Snowflake, and Databricks: design schemas, optimize performance, and implement governance frameworks.
  • Establish DataOps best practices: define and implement CI/CD pipelines for data assets, data quality monitoring, observability, lineage tracking, and automated testing standards to ensure data infrastructure remains reliable in production.
  • Serve as a trusted advisor to clients: build long-standing partnerships, understand business problems, translate data requirements into technical solutions, and communicate architecture decisions to both technical and executive audiences.
  • Contribute to business development: participate in pursuit activities, develop reusable assets and methodologies, and help shape the technical direction of Huron's data engineering capabilities.

Required Qualifications
  • 5+ years of hands-on experience building and deploying data pipelines in production, not just ad-hoc queries and exports. You've built ETL/ELT systems that run reliably, scale, and are maintained over time.
  • Experience leading and developing technical teams, including coaching, mentorship, code review, and performance management. Demonstrated ability to build high-performing teams and develop junior talent.
  • Strong SQL and Python programming skills with deep experience in PySpark for distributed data processing. SQL for analytics and data modeling; Python/PySpark for pipeline development and large-scale transformations.
  • Experience building data pipelines that serve AI/ML systems, including feature engineering workflows, vector embeddings for retrieval-augmented generation (RAG), and data quality frameworks that ensure model reproducibility. Familiarity with emerging agent integration standards such as MCP (Model Context Protocol) and A2A (Agent-to-Agent), and the ability to design data services and APIs that can be discovered and consumed by autonomous AI agents.
  • Experience with modern data transformation tools, particularly dbt. You understand modular SQL development, testing, documentation practices, and how to implement these at scale across teams.
  • Experience with cloud data platforms and lakehouse architectures: Snowflake, Databricks, Microsoft Fabric, and familiarity with open table formats (Delta Lake, Apache Iceberg). We're platform-flexible but Microsoft-preferred.
  • Proficiency with workflow orchestration tools such as Apache Airflow, Dagster, Prefect, or Microsoft Data Factory. You understand DAGs, scheduling, dependency management, and how to design reliable orchestration at scale.
  • Solid foundation in data modeling concepts: dimensional modeling, data vault, normalization/denormalization, and understanding of when different approaches are appropriate for different use cases.
  • Excellent communication and client management skills: the ability to communicate technical concepts to non-technical stakeholders, lead client meetings, and build trusted relationships with executive audiences.
  • Bachelor's degree in Computer Science, Engineering, Mathematics, or related technical field (or equivalent practical experience).
  • Willingness to travel approximately 30% to client sites as needed.

Preferred Qualifications
  • Experience in Financial Services, Manufacturing, or Energy & Utilities industries.
  • Background in building data infrastructure for ML/AI systems: feature stores (Feast, Databricks Feature Store), training data pipelines, vector databases for RAG/LLM workloads, or model serving architectures.
  • Experience with real-time and streaming data architectures using Kafka, Spark Streaming, Flink, or Azure Event Hubs, including CDC patterns for data synchronization.
  • Familiarity with MCP (Model Context Protocol), A2A (Agent-to-Agent), or similar standards for AI system data integration.
  • Experience with data quality and observability frameworks such as Great Expectations, Soda, Monte Carlo, or dbt tests at enterprise scale.
  • Knowledge of data governance, cataloging, and lineage tools (Unity Catalog, Purview, Alation, or similar).
  • Experience with high-performance Python data tools such as Polars or DuckDB for efficient data processing.
  • Cloud certifications (Snowflake SnowPro, Databricks Data Engineer, Azure Data Engineer, or AWS Data Analytics).
  • Consulting experience or demonstrated ability to work across multiple domains and adapt quickly to new problem spaces.
  • Contributions to open-source data engineering projects or active participation in the dbt/data community.
  • Master's degree or PhD in a technical field.

Why Huron

Variety that accelerates your growth. In consulting, you'll work across industries and data architectures that would take a decade to encounter at a single company. Our Commercial segment spans Financial Services, Manufacturing, Energy & Utilities, and more; each engagement is a new data ecosystem to master and a new platform to ship.

Impact you can measure. Our clients are Fortune 500 companies making significant investments in data infrastructure. The pipelines you build will power real decisions: the ML models that drive production schedules, the dashboards that inform pricing strategies, the data products that enable self-service analytics. You'll see your work become the foundation others build on.

A team that builds. Huron's Data Science & Machine Learning team is a close-knit group of practitioners, not just advisors. We write code, build pipelines, and deploy platforms. You'll work alongside engineers and data scientists who understand the craft and push each other to improve.

Investment in your development. We provide resources for continuous learning, conference attendance, and certification. As our DSML practice grows, there's significant opportunity to take on technical leadership, shape our capabilities, and advance to senior leadership roles.

Position Level
Manager

Country
United States of America

Employers have access to artificial intelligence language tools ("AI") that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.