Overview
Work Arrangement: On Site
Compensation: Depends on Experience
Accepts corp-to-corp applications
Employment Type: Contract - W2
Skills
Azure
Databricks
ADLS
Spark (Python)
SQL
ETL
Delta Lake
PostgreSQL
Data Architecture
Batch & Real-time Processing
Data Modelling
Job Details
Job Title: Azure Data Tech Lead
Location: Alpharetta, Georgia
"Core Skills: Azure, Databricks, ADLS, Spark (Python), SQL, ETL, Delta Lake, PostgreSQL, Data Architecture, Batch & Real-time Processing, Data Modelling
Key Responsibilities
- Architect, design, and implement scalable data platforms and pipelines on Azure and Databricks.
- Build and optimize data ingestion, transformation, and processing workflows across batch and real-time data streams (see the streaming sketch after this list).
- Work extensively with ADLS, Delta Lake, and Spark (Python) for large-scale data engineering.
- Lead the development of complex ETL/ELT pipelines, ensuring high quality, reliability, and performance.
- Design and implement data models, including conceptual, logical, and physical models for analytics and operational workloads.
- Work with relational and lakehouse systems including PostgreSQL and Delta Lake.
- Define and enforce best practices in data governance, data quality, security, and architecture.
- Collaborate with architects, data scientists, analysts, and business teams to translate requirements into technical solutions.
- Troubleshoot production issues, optimize performance, and support continuous improvement of the data platform.
- Mentor junior engineers and contribute to building engineering standards and reusable components.
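As an illustration of the batch and real-time work described above, here is a minimal PySpark Structured Streaming sketch. It assumes a Databricks runtime or a Spark build with the Kafka connector (spark-sql-kafka-0-10) on the classpath; the broker address, topic name, and storage paths are hypothetical placeholders, not part of this role's environment.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

# Hypothetical Kafka-compatible endpoint; Azure Event Hubs also exposes one.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers keys and values as binary; cast to strings before transforming.
parsed = raw.selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

# Land the stream in a Delta table; the checkpoint makes the job restartable.
query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .outputMode("append")
    .start("/tmp/tables/events")
)
query.awaitTermination()
```

Because Structured Streaming reuses the DataFrame API, the same transformation logic can usually be shared between batch and streaming jobs.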
Required Skills & Experience
- 8+ years of hands-on data engineering experience in enterprise environments.
- Strong expertise in Azure services, especially Azure Databricks; experience with Azure Functions and Azure Data Factory preferred.
- Advanced proficiency in Apache Spark with Python (PySpark).
- Strong command of SQL, query optimization, and performance tuning.
- Deep understanding of ETL/ELT methodologies, data pipelines, and scheduling/orchestration.
- Hands-on experience with Delta Lake (ACID transactions, optimization, schema evolution); see the Delta Lake sketch after this list.
- Strong experience in data modelling (normalized, dimensional, lakehouse modelling).
- Experience with both batch processing and real-time/streaming data (Kafka, Azure Event Hubs, or similar).
- Solid understanding of data architecture principles, distributed systems, and cloud-native design patterns.
- Ability to design end-to-end solutions, evaluate trade-offs, and recommend best-fit architectures.
- Strong analytical, problem-solving, and communication skills.
- Ability to collaborate with cross-functional teams and lead technical discussions.
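A minimal sketch of the Delta Lake behaviors named in the requirements (ACID writes, schema evolution, time travel), assuming the delta-spark Python package is installed; the table path is a local placeholder where a production job would point at an ADLS Gen2 location.

```python
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

# Assumes `pip install delta-spark pyspark`; on Databricks, Delta is
# preconfigured and the builder config below is unnecessary.
builder = (
    SparkSession.builder.appName("delta-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Local placeholder path; a production table would live on ADLS Gen2, e.g.
# abfss://<container>@<account>.dfs.core.windows.net/tables/events (hypothetical).
path = "/tmp/delta/events"

df = spark.createDataFrame([(1, "click"), (2, "view")], ["id", "event"])

# ACID append: each write commits atomically as a new table version.
df.write.format("delta").mode("append").save(path)

# Schema evolution: mergeSchema admits the new `channel` column on write.
wider = spark.createDataFrame([(3, "click", "web")], ["id", "event", "channel"])
wider.write.format("delta").mode("append").option("mergeSchema", "true").save(path)

# Time travel: read the table as of the first commit.
spark.read.format("delta").option("versionAsOf", 0).load(path).show()
```

Once a cluster is configured with ADLS credentials, the same pattern applies unchanged to abfss:// paths.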
Preferred Skills
- Experience with CI/CD tools such as Azure DevOps and Git.
- Familiarity with IaC tools (Terraform, ARM templates).
- Exposure to data governance and cataloging tools (Azure Purview).
- Experience supporting machine learning or BI workloads on Databricks.