Overview
Remote
Depends on Experience
Contract - W2
No Travel Required
Skills
Amazon Web Services
Apache Kafka
Apache Spark
Cloud Computing
Collaboration
Communication
Continuous Delivery
Continuous Integration
Customer Facing
Data Architecture
Data Engineering
Data Governance
Databricks
DevOps
Electronic Health Record (EHR)
Good Clinical Practice
Google Cloud Platform
Innovation
Leadership
Lifecycle Management
Machine Learning (ML)
Machine Learning Operations (ML Ops)
Mentorship
Microsoft Azure
Migration
Python
SQL
Scalability
Stakeholder Engagement
Streaming
Unity Catalog
Workflow
Job Details
Lead Azure Data Engineer
Remote
What's In It For You
This is a high-impact leadership role with visibility across cloud, ML, and data architecture initiatives for a national enterprise client. You'll contribute to mission-critical modernization efforts and be empowered to influence long-term strategy and technical execution.
What You Get To Do
- Architect, build, and scale modern data pipelines and machine learning workflows using Databricks, Delta Lake, and MLflow.
- Lead complex Spark job migrations from platforms like EMR to Databricks, optimizing for scalability, cost-efficiency, and maintainability.
- Collaborate cross-functionally with cloud engineers, DevOps teams, and business stakeholders to align technical solutions with evolving enterprise needs.
- Serve as a technical consultant and thought partner to client leadership on best practices in MLOps, data transformation, and model deployment.
- Lead and mentor a team of data engineers, fostering a culture of innovation and excellence in delivery.
- Implement data governance frameworks, including Unity Catalog, to ensure secure and compliant data usage.
- Champion modern DevOps and CI/CD practices to support reliable, automated data workflows and ML model lifecycle management.
What You Need To Succeed
- 7+ years of data engineering experience, including at least 2 years in a leadership or consulting capacity.
- Deep hands-on expertise in Databricks, including Delta Lake, MLflow, Job Workflows, and Unity Catalog.
- Proven success migrating and optimizing Spark workloads from EMR or other platforms to Databricks.
- Strong grasp of MLOps, with experience building and deploying ML models in production.
- Solid cloud engineering background in AWS, Azure, or Google Cloud Platform.
- Proficiency in Python, SQL, and Spark.
- Familiarity with streaming technologies (e.g., Kafka, Delta Live Tables) and medallion data architecture.
- Excellent stakeholder engagement and client-facing communication skills.
- Consulting mindset with the ability to lead complex initiatives in dynamic, fast-paced environments.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.