Senior Azure Databricks Data Engineer
Location: Seattle, WA
Skillset: Strong PySpark, Delta Lake, and cloud data platform expertise
Responsibilities:
* Design, build, and deploy data extraction, transformation, and loading processes and pipelines from various sources including databases, APIs, and data files
* Develop and support data pipelines within a Cloud Data Platform, such as Databricks
* Build data models that reflect domain expertise, meet current business needs, and will remain flexible as strategy evolves
* Monitor and optimize Databricks cluster performance, ensuring cost-effective scaling and resource utilization
* Communicate technical concepts to non-technical audiences in both written and verbal form
* Apply coding and programming concepts to build data pipelines (e.g., data transformation, data quality, data integration)
* Apply a strong understanding of database storage concepts (data lakes, relational databases, NoSQL, graph, data warehousing)
* Implement and maintain Delta Lake for optimized data storage, ensuring data reliability, performance, and versioning
* Automate CI/CD pipelines for data workflows using Azure DevOps
* Collaborate with cross-functional teams to support data governance using Databricks Unity Catalog
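As a flavor of the CI/CD responsibility above, a deployment pipeline for Databricks workflows might look like the following azure-pipelines.yml sketch. This is illustrative only: the repository layout, workspace path, and secret variable names (DATABRICKS_HOST, DATABRICKS_TOKEN) are hypothetical, not taken from this posting.

```yaml
# Hypothetical azure-pipelines.yml: run unit tests, then deploy
# notebooks to a Databricks workspace. Paths and variable names
# are placeholders for illustration.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.11'

  - script: |
      pip install -r requirements.txt
      pytest tests/            # unit tests for pipeline transformations
    displayName: Run unit tests

  - script: |
      pip install databricks-cli
      # Credentials are supplied from a secret variable group;
      # the target workspace path is illustrative.
      databricks workspace import_dir ./notebooks /Shared/etl --overwrite
    displayName: Deploy notebooks to Databricks
    env:
      DATABRICKS_HOST: $(DATABRICKS_HOST)
      DATABRICKS_TOKEN: $(DATABRICKS_TOKEN)
```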
Qualifications:
* 8+ years of experience in data engineering or a related field
* Expertise with programming languages such as Python/PySpark, SQL, or Scala
* Experience working in a cloud environment (Azure preferred) with strong understanding of cloud data architecture
* Hands-on experience with the Databricks cloud data platform (required)
* Experience with workflow orchestration tools (e.g., Databricks Jobs or Azure Data Factory pipelines) (required)