DevOps/Python Engineer (Databricks and DBT)

Overview

On Site
$140,000 - $160,000
Full Time
No Travel Required
Unable to Provide Sponsorship

Skills

Databricks
Amazon Redshift
DevOps
Python
SQL
Druid

Job Details

This role is open in the Sunnyvale and Austin locations.

Key Responsibilities:

  • Design, implement, and maintain highly available and scalable data pipelines leveraging tools such as Druid, Databricks, dbt, and Amazon Redshift
  • Manage and optimize distributed data systems for real-time, batch, and analytical workloads
  • Develop custom scripts and applications in Python, Scala, or Java to enhance data workflows and automation
  • Implement automation for deployment, monitoring, and alerting of data workflows
  • Work with cloud platforms (AWS/Azure/Google Cloud Platform) to provision and maintain data infrastructure
  • Apply CI/CD and Infrastructure-as-Code (IaC) principles to data workflows

Required Skills & Experience:

  • 5+ years of experience in DataOps, Data Engineering, DevOps Engineering, or related roles
  • Strong hands-on experience with Druid, Databricks, dbt, and Redshift (experience with Snowflake, BigQuery, or similar is a plus)
  • Solid understanding of distributed systems architecture and data infrastructure at scale
  • Proficiency in SQL and strong programming skills in at least one language (Python, Scala, or Java)
  • Experience with orchestration tools (Airflow, Dagster, Prefect, etc.)
  • Familiarity with cloud-native services on AWS, Azure, or Google Cloud Platform
  • Experience with CI/CD tools (GitHub Actions, GitLab CI, Jenkins, etc.)
  • Strong problem-solving, debugging, and performance-tuning skills