ADB Technical Lead - Azure Databricks / Python / Spark Streaming

Overview

On Site
$60 - $70
Accepts corp to corp applications
Contract - Independent
Contract - W2

Skills

MongoDB
Apache Kafka
Data Engineering
Databricks
GitHub
IBM DB2
Python
NoSQL
Terraform
PostgreSQL

Job Details

Hi,
Hope you are doing well.
Job Title: ADB Technical Lead - Azure Databricks / Python / Spark Streaming
Location: Pleasanton, CA

Job Type: Contract

Department: Data Engineering / Cloud Analytics


About the Role:
We are seeking a hands-on, technically strong Azure Databricks (ADB) Technical Lead to architect and drive the development of large-scale batch and streaming data pipelines on Azure. This role requires expertise in Python, Databricks Notebooks, and Apache Spark (including Structured Streaming), as well as real-time integration with Kafka.
You will work with both relational databases such as DB2 and NoSQL systems such as MongoDB, with a strong focus on performance optimization and scalable architecture. The ideal candidate excels at leading technical teams, mentoring engineers, and communicating effectively with both business and technology stakeholders.
Key Responsibilities:
Lead the design and development of real-time and batch data pipelines using Azure Databricks, Apache Spark, and Structured Streaming.
Build and optimize data ingestion and processing workflows using Kafka and Databricks streaming for high-throughput, low-latency applications.
Write efficient Python code in Databricks Notebooks, integrating with various data sources and destinations.
Work with DB2, MongoDB, and other enterprise-grade data systems to unify data sources and ensure high-quality analytics.
Focus on performance tuning of Spark jobs, resource optimization, and cost-effective compute usage on Azure.
Collaborate with platform and architecture teams to ensure secure, scalable, and maintainable cloud data infrastructure.
Support CI/CD for Databricks pipelines and notebooks, using tools like GitHub, Azure DevOps, and Infrastructure as Code.
Mentor junior engineers and contribute to a high-performing, quality-driven engineering culture.
Interface with product owners, data scientists, and business analysts to turn data requirements into production-ready pipelines.
Required:
15+ years of experience in data engineering, with 7+ years in a technical leadership role.
Deep hands-on experience with Azure Databricks and Apache Spark, including Structured Streaming.
Proficiency in Python for building scalable and reusable data workflows.
Experience with Kafka for real-time data ingestion and event streaming.
Strong experience integrating with relational (DB2, PostgreSQL) and NoSQL (MongoDB) databases.
Demonstrated expertise in performance tuning and scaling Spark jobs in production environments.
Strong communication and analytical skills, with the ability to work cross-functionally and lead through influence.
Preferred:
Experience with Java or Scala in Spark streaming or Kafka connector development.
Familiarity with Azure services like Data Lake, Data Factory, Synapse, and Event Hubs.
Knowledge of CI/CD pipelines, GitHub workflows, and Infrastructure as Code (e.g., Terraform).
Background in building data platforms in regulated or large-scale enterprise environments.
Thanks & Regards,
Nazeer Shaik
IT Recruiter, Unicorn Technologies LLC
Email: