- Proven hands-on experience with Azure Databricks, including designing, building, and optimizing complex data pipelines and Delta Lake solutions.
- Strong background in Azure Data Lake Storage (ADLS)—architecture, best practices, and efficient handling of high-volume and diverse data sets.
- Previous experience designing and developing data models and integration layers that pull and harmonize data from multiple operational data stores (ODS) and other source systems.
- Demonstrated ability to deliver scalable, performant solutions for both batch and streaming workloads on Azure.
- Track record of driving process efficiency, with measurable improvements in workflow automation within the cloud ecosystem.
- Familiarity with the broader Azure suite: ADLS, Azure Data Factory, Azure Synapse Analytics, etc.
- Commitment to data governance, quality management, and cost optimization in cloud environments.
- Advanced coding skills in Python, SQL, and PySpark (in a Databricks context).
- Nice to have: experience developing CI/CD pipelines and leveraging Databricks Asset Bundles for deployment and versioning.
- Ability to prepare well-documented, reusable solutions and foster knowledge transfer within the team.
- Experience with additional cloud platforms (AWS, Google Cloud Platform) is a plus, though Azure expertise is paramount.
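For candidates less familiar with the Databricks Asset Bundles item above, the shape of a bundle is roughly the following: a minimal `databricks.yml` sketch with hypothetical names (`my_pipeline`, `nightly_etl`, the workspace host, and the notebook path are all placeholders), showing how a job and its deployment targets are declared for versioned, CI/CD-driven deployment.

```yaml
# Minimal Databricks Asset Bundle sketch -- all names are illustrative.
bundle:
  name: my_pipeline

targets:
  dev:
    mode: development        # dev deployments are prefixed per-user
    workspace:
      host: https://<your-workspace>.azuredatabricks.net

resources:
  jobs:
    nightly_etl:
      name: nightly_etl
      tasks:
        - task_key: ingest
          notebook_task:
            notebook_path: ../notebooks/ingest.py
```

Deploying is then a matter of `databricks bundle deploy -t dev` from CI, which keeps job definitions under version control alongside the pipeline code.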