Azure Databricks Administrator

Overview

Remote
Depends on Experience
Contract - W2
Contract - Independent

Skills

ARM
Analytics
Apache Kafka
Apache Spark
Bash
Cloud Computing
Collaboration
Computer Networking
Continuous Delivery
Continuous Integration
Data Engineering
Data Governance
Data Processing
Databricks
DevOps
GRID
GitHub
Grafana
Management
Microservices
Microsoft Azure
Optimization
Orchestration
Performance Tuning
Python
Scripting
Storage
Terraform

Job Details

Position: Azure Databricks Administrator

Client Location: Boston, MA

Work Location: On-site Highly Preferred (Remote is OK for 100% match)

Duration: Long Term

Scope of Services:

  • Architect, configure, and optimize Databricks Pipelines for large-scale data processing within an Azure Data Lakehouse environment.
  • Set up and manage Azure infrastructure components including Databricks Workspaces, Azure Containers (AKS/ACI), Storage Accounts, and Networking.
  • Design and implement a monitoring and observability framework using tools like Azure Monitor, Log Analytics, and Prometheus/Grafana.
  • Collaborate with platform and data engineering teams to enable microservices-based architecture for scalable and modular data solutions.
  • Drive automation and CI/CD practices using Terraform, ARM templates, and GitHub Actions/Azure DevOps.

Required Skills:

  • Overall 9+ years of Data and Analytics experience
  • 5+ years of experience in Databricks administration and cloud engineering.
  • Should know how to automate and performance-tune Databricks.
  • Able to install, configure, and maintain Databricks clusters and workspaces.
  • 5+ years hands-on experience with DevOps tools.
  • Strong hands-on experience with Delta Lake and Apache Spark.
  • Deep understanding of Azure services: Resource Manager, AKS, ACR, Key Vault, and Networking.
  • Proven experience in Microservices Architecture and Container orchestration.
  • Expertise in Infrastructure-as-Code, scripting (Python, Bash), and DevOps tooling.
  • Familiarity with data governance, security, and cost optimization in cloud environments.
  • Experience with event-driven architectures (Kafka/Event Grid).
  • Knowledge of data mesh principles and distributed data ownership.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.

About Cyma Systems Inc