Hi,
Please find the JD below.
Must-have skills: Azure, Azure Data Factory (ADF), PySpark, ETL
Job Description
Azure Databricks Data Engineer (Remote)
Position Overview
We are seeking an experienced Azure Databricks Data Engineer with strong expertise in PySpark, SQL, and ETL pipelines. This is a remote position. Prior healthcare industry experience is highly preferred. The ideal candidate will design, build, and optimize scalable data solutions on Azure, ensuring high performance, reliability, and data quality.
Key Responsibilities
Design, develop, and maintain data pipelines using Azure Databricks and PySpark.
Build scalable and efficient ETL workflows to support analytics, reporting, and operational needs.
Orchestrate data workflows using Apache Airflow.
Develop and optimize SQL queries for data transformation, validation, and integration.
Collaborate with cross-functional teams to gather requirements and translate them into technical solutions.
Ensure data quality, integrity, and compliance with healthcare data standards.
Troubleshoot and resolve issues in data pipelines and ETL processes.
Implement best practices for performance tuning, optimization, and cost management in Azure.
Required Skills & Qualifications
Strong hands-on experience with Azure Databricks.
Proficiency in PySpark for large-scale data processing.
Experience with Airflow for workflow orchestration.
Strong SQL skills for data analysis and transformation.
Proven experience building and maintaining ETL pipelines.
Healthcare industry experience (highly preferred).
Familiarity with Azure services such as ADLS, Azure Data Factory, and Azure SQL.
Strong analytical, troubleshooting, and communication skills.
Preferred Qualifications
Experience working with HIPAA-compliant data environments.
Knowledge of data modeling and performance optimization techniques.
Experience implementing CI/CD pipelines for data workloads.