Azure Databricks Engineer (PySpark)

  • Louisville, KY

Overview

On Site
Depends on Experience
Accepts corp to corp applications
Contract - W2

Skills

PySpark
Python
Databricks
Workflow
Microsoft Azure
Continuous Delivery
Continuous Integration
ELT
Data Warehouse
SQL
Data Lake

Job Details

Key Responsibilities:

  • Design, develop, and optimize robust ETL/ELT pipelines using Databricks Workflows, Python, and SQL.
  • Write clean, efficient, and reusable code in SQL, Python, and PySpark for data transformation and automation.
  • Collaborate with business stakeholders to understand data needs and deliver high-quality solutions.
  • Ensure data quality, integrity, and governance across all data platforms.
  • Monitor and troubleshoot data workflows and performance issues.

Required Qualifications:

  • 5+ years of experience in data engineering or a similar role.
  • Proficiency in SQL, Python, PySpark, and automation.
  • Hands-on experience with Databricks, Snowflake, and Azure cloud services (e.g., Azure Data Lake, Azure Data Factory, Azure Functions).
  • Strong understanding of data modeling, data warehousing, and ETL/ELT best practices.
  • Experience with version control and CI/CD practices.
  • Strong problem-solving skills and ability to work independently in a fast-paced environment.