Databricks Architect – Job Description
Role Summary
We are seeking a seasoned Databricks Architect (15+ years total experience) to lead the architecture, design, and development of scalable ETL/ELT pipelines across distributed data environments. This role owns Databricks workspace administration, cluster policies, performance optimization, data governance with Unity Catalog, and DevOps automation—partnering closely with platform, infrastructure, and application teams to deliver secure, cost‑effective, and high‑performing data solutions.
Work you’ll do
· Lead the architecture, design, and development of scalable ETL/ELT pipelines using Databricks, PySpark, and SQL across distributed data environments.
· Collaborate with platform and infrastructure teams to define Databricks architecture strategy and ensure secure, scalable, and cost‑effective implementation.
· Define and enforce cluster policies for proper resource utilization, autoscaling, cost control, and role‑based access aligned to workload patterns and team requirements (an illustrative policy sketch follows this list).
· Lead performance tuning of Spark jobs, Databricks SQL queries, and notebooks—optimizing execution plans, partitions, caching, and I/O to minimize latency and cost.
· Develop optimized Databricks SQL queries, views, and materializations to power (a) Tableau dashboards, (b) React and .NET‑based applications via REST APIs, and (c) ad‑hoc and real‑time analytics use cases.
· Work closely with frontend and backend teams to deliver use‑case‑specific, query‑optimized datasets (Delta Lake, Lakehouse patterns).
· Leverage Unity Catalog for fine‑grained access control, data lineage, data discovery, and metadata governance across workspaces and catalogs.
· Drive DevOps best practices using Azure DevOps/Git and CI/CD pipelines for jobs, notebooks, libraries, and infrastructure.
· Establish observability for data pipelines (logging, metrics, lineage, alerts) and define SLOs/SLAs for critical workloads.
· Mentor junior engineers and conduct architectural/design reviews to ensure consistency with best practices and reference architectures.
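For a concrete (and purely illustrative) sense of the cluster-policy work above, a policy might be defined with the Databricks SDK for Python as in the minimal sketch below. The policy name, node types, tag values, and numeric limits are assumptions for the sketch, not values prescribed by this role.

    import json
    from databricks.sdk import WorkspaceClient  # pip install databricks-sdk

    # Illustrative policy: bound autoscaling, force auto-termination, and
    # restrict node types and tagging for cost control and chargeback.
    policy_definition = {
        "autoscale.min_workers": {"type": "range", "minValue": 1, "maxValue": 2},
        "autoscale.max_workers": {"type": "range", "minValue": 2, "maxValue": 10},
        "autotermination_minutes": {"type": "fixed", "value": 30, "hidden": True},
        "node_type_id": {"type": "allowlist",
                         "values": ["Standard_DS3_v2", "Standard_DS4_v2"]},
        "custom_tags.team": {"type": "fixed", "value": "data-platform"},
    }

    w = WorkspaceClient()  # picks up auth from the environment or config profile
    w.cluster_policies.create(
        name="etl-standard-policy",  # hypothetical policy name
        definition=json.dumps(policy_definition),
    )

Policies like this are then granted to teams via workspace permissions, so users can only create clusters within the guardrails the policy encodes.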
Qualifications
Required:
· 15+ years of overall industry experience.
· 7+ years in data engineering with a strong background in cloud‑native data architecture.
· Deep hands‑on experience with Databricks architecture, workspace administration, and cluster/pool management.
· Experience defining and managing cluster policies, pools, autoscaling strategies, and job compute routing.
· Strong knowledge of Spark performance tuning and job optimization (shuffle, skew, AQE, partitioning, caching, broadcast); an illustrative tuning sketch follows this list.
· Proven expertise in Databricks SQL, PySpark, Delta Lake, and large‑scale batch/streaming data pipelines.
· Skilled in building reusable Python libraries with Pandas, openpyxl, XlsxWriter, and PySpark.
· Practical experience with Unity Catalog for security, governance, lineage, and data discovery (a governance sketch also follows this list).
· Strong experience collaborating with front‑end and back‑end development teams, including backend integration via REST APIs.
· Strong SQL expertise; hands‑on experience with PostgreSQL, SQL Server, or similar RDBMS.
· DevOps expertise with Azure DevOps, Git, and pipeline automation (CI/CD for jobs, notebooks, libraries, infra).
· Excellent communication skills; ability to lead technical discussions with cross‑functional teams and stakeholders.
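In practice, the tuning skills above often reduce to a few idiomatic PySpark moves. The sketch below (table and column names are hypothetical) shows Adaptive Query Execution, a broadcast join, and a partitioned Delta write.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

    # Adaptive Query Execution: coalesce shuffle partitions and split
    # skewed partitions at runtime instead of hand-tuning them.
    spark.conf.set("spark.sql.adaptive.enabled", "true")
    spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

    orders = spark.read.table("sales.orders")        # hypothetical tables
    customers = spark.read.table("sales.customers")

    # Broadcast the small dimension side so the large fact table is not shuffled.
    enriched = orders.join(F.broadcast(customers), "customer_id")

    # Partition the Delta output on a low-cardinality column so downstream
    # reads can prune files instead of scanning everything.
    (enriched.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("sales.orders_enriched"))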
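Likewise, Unity Catalog governance is largely declarative. A minimal sketch, issued from a notebook or Databricks SQL, is shown below; the catalog, schema, table, and group names are assumptions.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Fine-grained, role-based access: the group may use the catalog and
    # schema and read one table, nothing more.
    spark.sql("GRANT USE CATALOG ON CATALOG main TO `analysts`")
    spark.sql("GRANT USE SCHEMA ON SCHEMA main.sales TO `analysts`")
    spark.sql("GRANT SELECT ON TABLE main.sales.orders_enriched TO `analysts`")

    # Table comments feed data discovery and search in Unity Catalog.
    spark.sql("COMMENT ON TABLE main.sales.orders_enriched IS "
              "'Orders joined to the customer dimension'")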
Preferred:
· Experience with Azure (preferred) and/or AWS for cloud data services (storage, compute, networking).
· Data modeling (dimensional/semantic), query acceleration techniques, cache/materialization strategies.
· Cost optimization/FinOps for Databricks (cluster sizing, job routing, spot/Photon, serverless where applicable).
· Observability for data pipelines (logging, metrics, dashboards) and incident/runbook practices.
· Any experience with Microsoft Fabric.
Technical Stack
· Cloud Platforms: Azure
· Big Data & Analytics: Databricks, PySpark, Delta Lake, Databricks SQL
· Programming & Frameworks: Python, Pandas, PySpark, Flask
· Visualization & BI: Power BI, Tableau
· App Integration: React, .NET, REST APIs
· DevOps & CI/CD: Azure DevOps, Git
· Databases: Azure SQL DB, SQL Server, PostgreSQL, or similar RDBMS