Role - Azure Data Engineer (Databricks)
Experience Required - 6+ Years
Must-Have Technical/Functional Skills
Design, build, and orchestrate ETL/ELT pipelines using Azure Databricks.
Implement batch data ingestion and transformations using PySpark and Spark SQL.
Architect and maintain Lakehouse and analytical warehouse models (fact and dimension schemas) leveraging Delta Lake.
Ensure data quality, reliability, lineage, and governance across the data platform.
Collaborate with security and platform teams to enforce data access controls and best practices.
Experience building CI/CD pipelines for Databricks using Git.
Roles & Responsibilities
Act as the onsite lead for Azure Databricks data engineering initiatives.
Supervise and guide an offshore development team, assigning work, reviewing deliverables, and ensuring adherence to best practices.
Perform code reviews, design reviews, and solution walkthroughs for offshore-developed components.
Ensure alignment between offshore execution and onsite architecture, security, and business requirements.
Proactively identify risks, gaps, and improvement opportunities across the data platform.
Prior experience working in a hybrid onsite-offshore delivery model.
Generic Managerial Skills (if any)
6-10 years of hands-on experience in data engineering.
Strong, proven expertise in Azure Databricks and Delta Lake.
Advanced proficiency in SQL with a solid understanding of distributed data processing concepts.
Demonstrated experience leading, mentoring, or supervising data engineering teams.
Strong experience performing code reviews and solution design validations.
Knowledge of data security and role-based access control (RBAC).
Preferred / Nice-to-Have Skills: Experience with Azure Synapse, Microsoft Fabric, and Azure Data Factory.