Databricks Data Engineer Lead

Overview

Remote
Depends on Experience
Contract - W2

Skills

databricks
azure
adf
azure data factory
data engineering
lead
modernization
adf orchestration
etl
etl pipelines

Job Details

Job Summary:

We are seeking a Senior Azure Data Engineer with strong hands-on expertise in Azure Data Factory (ADF), Databricks, and ETL pipeline development. This role will focus on the modernization of a legacy legal data system into a robust, cloud-native platform. The ideal candidate will not only implement complex data solutions end-to-end but also lead the coordination between onshore and offshore teams, working closely with architects to ensure scalable and high-performing outcomes.


Key Responsibilities:

  • Lead and perform hands-on implementation of advanced data pipelines using Azure Data Factory (ADF), Azure Databricks, and PySpark.

  • Convert legacy system components to modern Azure-based architectures.

  • Collaborate with data architects and modelers to design and implement scalable data models and transformation logic in the cloud.

  • Oversee data integration strategies, ensuring data quality, consistency, and reliability.

  • Coordinate and manage onshore-offshore teams, assigning tasks, reviewing code, and ensuring adherence to project timelines and standards.

  • Drive performance tuning and optimization across ETL workflows and Databricks notebooks.

  • Ensure alignment between technical execution and architectural vision, contributing to design decisions and roadmaps.

  • Implement reusable components and orchestration frameworks for repeatable and scalable data workflows.

  • Participate in solution design reviews and provide input on data migration, governance, and security considerations.


Required Qualifications:

  • 7+ years of experience in data engineering with a focus on cloud-based solutions, particularly on Azure.

  • Strong, hands-on proficiency in:

    • Azure Data Factory (ADF)

    • Azure Databricks

    • ETL/ELT frameworks

    • PySpark and Python

  • Solid understanding of data modeling, data integration, and performance optimization in large-scale environments.

  • Proven track record of leading and coordinating global delivery teams (onshore/offshore) on enterprise data projects.

  • Deep experience in building, testing, and deploying complex data pipelines in production.

  • Strong communication skills with the ability to interface effectively with business stakeholders, architects, and developers.


Preferred Qualifications:

  • Exposure to or experience with Supply Chain Management (SCM) data is a plus.

  • Familiarity with CI/CD pipelines and DevOps practices in a data engineering context.

  • Understanding of data governance, security frameworks, and compliance in Azure.


Personal Attributes:

  • Proactive, hands-on problem solver with the ability to execute independently and within teams.

  • Strong leadership skills and the ability to mentor and guide junior team members.

  • Adaptable and comfortable working in fast-paced environments with evolving priorities.
