Databricks Data Engineer with DevOps

Hybrid in Los Angeles, CA, US • Posted 2 hours ago • Updated 2 hours ago
Contract (W2, Corp-to-Corp, or Independent) • Compensation: Depends on Experience
Job Details

Skills

  • Databricks
  • Delta Lake
  • Unity Catalog
  • PySpark
  • SQL
  • DevOps
  • AWS
  • Git / GitLab
  • Terraform
  • CI/CD

Summary

Job Title: Databricks Data Engineer with DevOps

Location: Los Angeles, CA (Hybrid)

Hire type: FTE / CTH

Job Summary

We are looking for an experienced Databricks Data Engineer with strong DevOps expertise to join our data engineering team. The ideal candidate will design, build, and optimize large-scale pipelines on the Databricks Lakehouse Platform on AWS, while driving automated CI/CD and deployment practices. This role requires strong skills in PySpark, SQL, AWS cloud services, and modern DevOps tooling. You will collaborate closely with cross-functional teams to deliver scalable, secure, and high-performance data solutions.

Must Demonstrate (Critical Skills & Architectural Competencies)

  • Designing and implementing Databricks-based Lakehouse architectures on AWS
  • Clear separation of compute vs. serving layers
  • Ability to design low-latency data/API access strategies (beyond Spark-only patterns)
  • Strong understanding of caching strategies for performance and cost optimization
  • Data partitioning, storage optimization, and file layout strategy
  • Ability to handle multi-terabyte structured or time-series datasets
  • Skill in probing requirements and identifying what matters architecturally
  • A player-coach mindset: hands-on engineering plus technical leadership

Key Responsibilities

Data Pipeline Development

  • Design, build, and maintain scalable ETL/ELT pipelines using Databricks on AWS.
  • Develop high-performance data processing workflows using PySpark/Spark and SQL.
  • Integrate data from Amazon S3, relational databases, and semi-structured/unstructured sources.
  • Implement Delta Lake best practices, including schema evolution, ACID transactions, OPTIMIZE, ZORDER, partitioning, and file-size tuning.
  • Ensure architectures support high-volume, multi-terabyte workloads.
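The file-size tuning mentioned above often starts as back-of-the-envelope arithmetic: divide a partition's size by a target file size (Delta compaction commonly aims somewhere in the 128 MiB–1 GiB range) to estimate how many files a compaction pass should leave behind. A minimal sketch with hypothetical numbers, not a Databricks API:

```python
# Rough sizing sketch for Delta file compaction (illustrative only):
# estimate how many output files an OPTIMIZE pass should produce for a
# partition, given a target file size. Numbers below are hypothetical.

def target_file_count(partition_bytes: int, target_file_bytes: int = 1 << 30) -> int:
    """Number of output files to aim for: ceiling(partition / target),
    with at least one file for any non-empty partition."""
    if partition_bytes <= 0:
        return 0
    # Ceiling division without floats.
    return -(-partition_bytes // target_file_bytes)

# Example: a 2 TiB partition compacted toward 1 GiB files.
print(target_file_count(2 * (1 << 40)))  # 2048
```

The same arithmetic, run the other way, flags over-partitioned tables: if a partition holds far less than one target file, the partitioning scheme is probably too fine.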

DevOps & CI/CD

  • Implement CI/CD pipelines for Databricks using Git, GitLab, GitHub Actions, or AWS-native tools.
  • Build and manage automated deployments using Databricks Asset Bundles.
  • Manage version control for notebooks, workflows, libraries, and environment configuration.
  • Automate cluster policies, job creation, environment provisioning, and configuration management.
  • Support infrastructure-as-code via Terraform (preferred) or CloudFormation.
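Databricks Asset Bundles, mentioned above, describe jobs and deployment targets declaratively in a `databricks.yml` file checked into Git. A minimal sketch (the bundle name, workspace host, job, and notebook path here are placeholders, not details from this posting):

```yaml
# Sketch of a Databricks Asset Bundle definition (placeholder names).
bundle:
  name: sales-pipeline

targets:
  dev:
    mode: development
    workspace:
      host: https://<your-workspace>.cloud.databricks.com
  prod:
    mode: production
    workspace:
      host: https://<your-workspace>.cloud.databricks.com

resources:
  jobs:
    nightly_etl:
      name: nightly-etl
      tasks:
        - task_key: ingest
          notebook_task:
            notebook_path: ./notebooks/ingest.py
```

CI then runs `databricks bundle deploy -t dev` (or `-t prod`), which is what makes the Git-driven deployment flow reproducible across environments.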

Collaboration & Business Support

  • Work with data analysts and BI teams to prepare curated datasets for reporting and analytics.
  • Collaborate closely with product owners, engineering teams, and business partners to translate requirements into scalable implementations.
  • Document data flows, technical architecture, and DevOps/deployment workflows.

Performance & Optimization

  • Tune Spark clusters, workflows, and queries for cost efficiency and compute performance.
  • Monitor pipelines, troubleshoot failures, and maintain high reliability.
  • Implement logging, monitoring, and observability across workflows and jobs.
  • Apply caching strategies and workload optimization techniques to support low-latency consumption patterns.
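The caching point above is general: a small serving-side cache in front of an expensive lookup (a SQL warehouse query, an external API call) is often what turns a Spark-backed dataset into a low-latency consumption pattern. A toy illustration in plain Python, where the slow backend function is a hypothetical stand-in:

```python
from functools import lru_cache
import time

# Toy serving-layer cache (illustrative only): memoize expensive lookups
# so repeated requests for the same key skip the slow backend entirely.

def slow_backend_lookup(key: str) -> str:
    """Hypothetical stand-in for a warehouse query or API round trip."""
    time.sleep(0.01)
    return key.upper()

@lru_cache(maxsize=1024)
def cached_lookup(key: str) -> str:
    return slow_backend_lookup(key)

first = cached_lookup("customer_42")   # pays the backend cost
second = cached_lookup("customer_42")  # served from the in-process cache
print(first, second)
```

Real deployments would swap the in-process cache for something with an eviction/TTL policy and shared state (e.g. an external cache tier), but the cost model is the same: hit rate times saved backend latency.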

Governance & Security

  • Implement and maintain data governance using Unity Catalog.
  • Enforce access controls, security policies, and data compliance requirements.
  • Ensure lineage, quality checks, and auditability across data flows.

Technical Skills

  • Strong hands-on experience with Databricks, including:
      • Delta Lake
      • Unity Catalog
      • Lakehouse architecture
      • Delta Live Tables (DLT) pipelines
      • Databricks Runtime
      • Table triggers
      • Databricks Workflows
  • Proficiency in PySpark, Spark, and advanced SQL.
  • Expertise with AWS cloud services, including:
      • S3
      • IAM
      • Glue / Glue Data Catalog
      • Lambda
      • Kinesis (optional but beneficial)
      • Secrets Manager
  • Strong understanding of DevOps tools:
      • Git / GitLab
      • CI/CD pipelines
      • Databricks Asset Bundles
  • Familiarity with Terraform is a plus.
  • Experience with relational databases and data warehouse concepts.

Preferred Experience

  • Knowledge of streaming technologies such as Spark Structured Streaming.
  • Experience building real-time or near-real-time pipelines.
  • Exposure to advanced Databricks Runtime configurations and performance tuning.

Certifications (Optional)

  • Databricks Certified Data Engineer Associate / Professional
  • AWS Data Engineer or AWS Solutions Architect certification

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: 90989805
  • Position Id: 8907962