Databricks Engineer || 100% Remote ($50/hr on 1099)

Overview

Remote
Depends on Experience
Contract - Independent
Contract - W2
Contract - 6 month(s)
No Travel Required

Skills

Databricks
ETL
Spark
Azure

Job Details

Position :: Databricks Engineer

Location :: 100% Remote

Duration :: 6+ Months

Interview :: Phone and Video

Job Description:

We are seeking a skilled Mid-Level Databricks Engineer to join our data engineering team. The ideal candidate will play a key role in migrating Hadoop workloads to the Databricks Lakehouse platform, contributing to the development, optimization, and monitoring of scalable, repeatable data solutions. You will work closely with cross-functional teams to ensure a successful migration, enable advanced Databricks features, and drive ongoing performance improvements.

Key Responsibilities

• Identify and categorize existing Hadoop jobs (ETL, batch, streaming) and data sources to support migration planning.

• Participate in selecting scalable and repeatable migration use cases for Minimum Viable Product (MVP) initiatives.

• Provision and configure Databricks workspaces, ensuring Lakehouse architecture and federation capabilities are enabled.

• Execute pilot migrations of representative Hadoop workloads using tools such as Databricks Migration Accelerator or partner solutions.

• Validate and monitor post-migration performance, cost efficiency, and data integrity.

• Enable and test advanced Databricks features (e.g., Liquid Clustering, Lakehouse AI monitoring, Serverless warehouse, Unity Catalog) to optimize data layout and pipeline health.

• Engage with data engineering, analytics, and governance teams to gather feedback and document learnings, blockers, and feature gaps.

• Track and report on key success metrics such as migration time, query latency, cost savings, and feature adoption.

• Contribute to the development of a phased roadmap for full-scale migration and advanced feature rollout.

Qualifications

• Bachelor’s degree in Computer Science, Information Technology, or a related field, or equivalent professional experience.

• 5-8 years of data engineering experience, including hands-on work with Databricks and Hadoop ecosystems.

• Proficiency in Spark (Scala or PySpark), ETL pipeline development, and data migration practices.

• Experience with Lakehouse architecture, Delta Lake, and advanced Databricks features (e.g., Lakehouse Federation, Liquid Clustering, Unity Catalog) is a strong plus.

• Solid understanding of cloud platforms (Azure) and data governance concepts.

• Strong analytical and problem-solving skills with attention to detail.

• Ability to work collaboratively in cross-functional teams and communicate effectively with both technical and non-technical stakeholders.

Preferred Skills

• Experience with Databricks Migration Accelerator or similar migration tools.

• Familiarity with synthetic data generation and pipeline health monitoring.

• Understanding of performance tuning, query optimization, and cost management on cloud data platforms.

• Ability to document migration patterns and contribute to best practices for broader adoption.

What We Offer

• Opportunity to work with cutting-edge data technologies and drive impactful migration projects.
