Job Details
Responsibilities:
Design, develop, and maintain data pipelines and ETL processes using Azure Databricks and Azure Data Factory (ADF).
Implement scalable data solutions to support analytics and reporting requirements.
Work with structured and unstructured data to build clean, reliable, and optimized datasets.
Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
Monitor, debug, and optimize existing data workflows for performance and cost efficiency.
Ensure best practices in data governance, security, and compliance (especially relevant in the Healthcare domain).
Investigate and resolve complex data engineering challenges, proposing innovative approaches where standard patterns fall short.
Requirements:
Mandatory: Expertise in Azure Databricks and Azure Data Factory (ADF).
Strong experience with PySpark, SQL, and big data processing.
Hands-on experience with ETL workflows, data integration, and data lake architectures.
Strong problem-solving skills and ability to troubleshoot performance issues.
Healthcare industry experience is highly preferred.
Excellent communication skills and the ability to work in a collaborative environment.