Role: Azure Databricks Engineer
Location: Dallas, TX
Key Responsibilities
Develop and maintain scalable ETL/ELT pipelines using Azure Databricks (PySpark/Scala/Spark SQL); a minimal pipeline sketch follows this list.
Work with Azure Data Lake, Azure Data Factory, Delta Lake, and other Azure data services for data ingestion and processing.
Optimize Spark jobs for performance, cost efficiency, and reliability.
Implement Delta Lake architectures, data quality checks, and data versioning.
Collaborate with Data Architects, Analysts, and BI teams to design end-to-end data solutions.
Monitor, troubleshoot, and improve existing data pipelines and workflows.
Ensure data security, compliance, and adherence to Azure best practices.
Participate in code reviews, documentation, and continuous improvement activities.
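To give candidates a feel for the day-to-day work, here is a minimal PySpark sketch of the kind of pipeline described above: ingest raw files from ADLS, apply a simple data quality check, and land the result in a partitioned Delta table. This is illustrative only; the storage paths, table, and column names (examplelake, event_id, event_date) are hypothetical and not part of this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_ingest").getOrCreate()

# Read raw JSON landed in the data lake (abfss path is illustrative).
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")

# Basic data quality check: drop rows missing the business key, then
# stamp each row with its ingestion time and derive a partition column.
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("ingested_at", F.current_timestamp())
       .withColumn("event_date", F.to_date("ingested_at"))
)

# Write to Delta, partitioned by date for partition pruning; Delta's
# transaction log provides data versioning and time travel out of the box.
(clean.write.format("delta")
      .mode("append")
      .partitionBy("event_date")
      .save("abfss://curated@examplelake.dfs.core.windows.net/events_delta/"))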
Required Skills
Strong hands-on experience with Azure Databricks and Apache Spark.
Strong proficiency in PySpark, Scala, and SQL.
Experience with Azure Data Lake (ADLS), Azure Data Factory (ADF), Azure Synapse, and related tools.
Knowledge of Delta Lake, data modeling, and data partitioning strategies (see the maintenance sketch after this list).
Experience building large-scale data pipelines in cloud environments.
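As a brief illustration of the Delta Lake and optimization skills listed above, the sketch below shows two routine Databricks operations: compacting a Delta table with OPTIMIZE/ZORDER to speed up selective queries, and reading an earlier table version via time travel for audits or backfills. The table and column names (curated.events_delta, customer_id) and the version number are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta_maintenance").getOrCreate()

# Compact small files and co-locate rows on a frequently filtered column
# (OPTIMIZE ... ZORDER BY is Databricks SQL for Delta tables).
spark.sql("OPTIMIZE curated.events_delta ZORDER BY (customer_id)")

# Delta versioning in practice: read an earlier snapshot of the table.
snapshot_v5 = (spark.read.format("delta")
                    .option("versionAsOf", 5)
                    .table("curated.events_delta"))
snapshot_v5.show(5)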