Databricks Administrator

Overview

On Site
Depends on Experience
Accepts corp to corp applications
Contract - Independent
Contract - 6 Month(s)
No Travel Required

Skills

Databricks
Data Modeling
Python
Terraform

Job Details

  • Develop and Implement Reference Architectures: Set up Databricks and create disaster recovery (DR) for Databricks. Design, build, and maintain a library of production-quality reference architectures and reusable patterns that showcase best practices and accelerate development for engineering teams.
  • Administer and Prototype Solutions: Architect and build proofs of concept for end-to-end solutions on the Databricks Lakehouse Platform, actively demonstrating feasibility and validating complex designs through hands-on implementation.
  • Advise Through Doing: Serve as the primary consultant for engineering teams on all aspects of Databricks, providing expert guidance that extends beyond diagrams to include code, best practices, and hands-on support.
  • Lead Platform Training: Develop and deliver training sessions for engineers, leading the adoption and implementation of new features such as Unity Catalog, Delta Live Tables, and advanced MLOps capabilities.
  • Establish and Govern Best Practices: Define, document, and evangelize standards for Databricks development, including data modeling, performance tuning, security, and cost management.
  • Mentor and Coach: Mentor engineers and other technical staff through code reviews, paired programming, and design sessions, elevating the overall technical proficiency of the organization within the Databricks ecosystem.

Technical Expertise:

  • Databricks Mastery: Deep, expert-level knowledge of the Databricks Platform, including:
    • Unity Catalog: Designing and implementing data governance and security.
    • Delta Lake & Delta Live Tables: Architecting and building reliable, scalable data pipelines.
    • Performance & Cost Optimization: Expertise in tuning Spark jobs, optimizing cluster usage, and managing platform costs.
    • MLOps: Strong, practical understanding of the machine learning lifecycle on Databricks using tools like MLflow.
    • Databricks SQL: Knowledge of designing and optimizing analytical workloads.
    • Mosaic AI: Knowledge of designing and optimizing AI Agents.
  • Cloud & Infrastructure: Deep knowledge of cloud architecture and services on AWS. Strong command of Infrastructure as Code (Terraform, YAML-based configuration).
  • Data Engineering & Programming: Strong background in data modeling, ETL/ELT development, and advanced, hands-on programming skills in Python and SQL.
  • CI/CD & Automation: Experience with designing and implementing CI/CD pipelines (preferably with GitHub Actions) for data and ML workloads.
  • Observability: Familiarity with implementing monitoring, logging, and alerting for data platforms.
  • Automation: The platform is ephemeral, and all changes are implemented using Terraform and Python. Expertise in Terraform and Python is a must.
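As an illustration of the ephemeral, code-driven workflow described above, a minimal Terraform sketch might manage a Databricks compute resource through the official provider. The cluster name and settings here are hypothetical examples for orientation, not this employer's actual configuration:

```hcl
terraform {
  required_providers {
    databricks = {
      source = "databricks/databricks"
    }
  }
}

# Workspace-level provider; credentials are assumed to come from the
# environment (e.g. DATABRICKS_HOST / DATABRICKS_TOKEN), not hardcoded.
provider "databricks" {}

# Hypothetical auto-terminating cluster kept entirely in code, so it can
# be torn down and recreated without manual drift.
resource "databricks_cluster" "etl_reference" {
  cluster_name            = "etl-reference"
  spark_version           = "15.4.x-scala2.12" # example runtime version
  node_type_id            = "m5.xlarge"        # example AWS instance type
  num_workers             = 2
  autotermination_minutes = 30
}
```

In an ephemeral setup like the one described, all changes flow through `terraform plan`/`apply` cycles rather than manual workspace edits.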
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.