Senior Data Engineer - Azure & Databricks
Industry: Technology Consulting / Digital Transformation
Role Level: Lead II - Software Engineering
Location: Alpharetta, Georgia (USA)
Employment Type: Full-Time
About the Role:
We are seeking an experienced Senior Data Engineer - Azure & Databricks with a proven track record (8+ years) of designing, building, and scaling modern data platforms within the Azure ecosystem.
This role is ideal for someone who thrives in complex enterprise environments, is deeply hands-on with Databricks and Spark, and can lead end-to-end data engineering initiatives from architecture to implementation.
You will be responsible for building robust, high-performance data pipelines, establishing engineering best practices, and collaborating with cross-functional teams to deliver reliable, scalable data solutions across both batch and real-time streams.
Key Responsibilities:
- Architect, design, and implement scalable data platforms and pipelines using Azure and Databricks.
- Build and optimize ingestion, transformation, and processing workflows for batch and real-time (streaming) use cases.
- Work extensively with Azure Data Lake Storage (ADLS), Delta Lake, and Spark (Python/PySpark) to enable large-scale data engineering capabilities.
- Lead the development of complex ETL/ELT pipelines, ensuring performance, reliability, and code quality.
- Design conceptual, logical, and physical data models to support analytics and operational workloads.
- Work with relational and lakehouse systems, including PostgreSQL and Delta Lake.
- Define, implement, and enforce best practices for data governance, security, quality, and architecture standards.
- Partner closely with architects, data scientists, analysts, and business teams to translate requirements into scalable technical solutions.
- Troubleshoot production issues, drive performance optimization, and support continuous platform improvements.
- Mentor junior engineers and contribute to the creation of reusable components and engineering standards.
Required Qualifications:
- 8+ years of hands-on data engineering experience in enterprise environments.
- Strong expertise in Azure services, particularly Azure Databricks and Azure Functions; Azure Data Factory experience preferred.
- Advanced proficiency in Apache Spark with Python (PySpark).
- Strong SQL skills, including query optimization and performance tuning.
- Deep experience with ETL/ELT methodologies, scheduling, and data orchestration.
- Hands-on expertise with Delta Lake (ACID, schema evolution, performance tuning).
- Strong understanding of data modeling (normalized, dimensional, lakehouse).
- Proven experience with batch and streaming technologies such as Kafka or Azure Event Hubs.
- Solid grasp of data architecture, distributed systems, and cloud-native design patterns.
- Ability to design and evaluate end-to-end technical solutions and recommend best-fit architectures.
- Excellent analytical, problem-solving, and communication skills.
- Ability to collaborate across teams and lead technical discussions.
Preferred Skills:
- Experience with CI/CD tools such as Azure DevOps and Git.
- Familiarity with Infrastructure-as-Code (Terraform, ARM templates).
- Exposure to data governance and metadata cataloging tools (e.g., Azure Purview).
- Experience supporting machine learning or BI workloads on Databricks.
Benefits:
For Full-Time, Regular Employees
- Minimum 10 days paid vacation annually
- 6 days paid sick leave (prorated for new hires)
- 10 paid holidays
- Paid bereavement and jury duty leave
- Eligibility for 401(k) Retirement Plan with employer match
- Medical, dental, and vision insurance eligibility (employee + dependents)
- Company-paid:
  - Basic life insurance
  - Accidental death & dismemberment (AD&D) coverage
  - Short- and long-term disability
- Access to HSA and FSA programs (healthcare, dependent care, commuting)