Senior Data Engineer

• Posted 2 hours ago • Updated 2 hours ago
Full-Time
On-site

Job Details

Skills

  • IT Consulting
  • Software Engineering
  • Workflow
  • Real-time
  • Use Cases
  • Analytics
  • PostgreSQL
  • Mentorship
  • Data Engineering
  • Apache Spark
  • Python
  • PySpark
  • SQL
  • Extract, Transform, Load (ETL)
  • ELT
  • Scheduling
  • Orchestration
  • Performance Tuning
  • Data Modeling
  • Streaming
  • Apache Kafka
  • Data Architecture
  • Cloud Computing
  • Design Patterns
  • Analytical Skill
  • Conflict Resolution
  • Problem Solving
  • Communication
  • Collaboration
  • Continuous Integration
  • Continuous Delivery
  • DevOps
  • Git
  • Terraform
  • ARM
  • Data Governance
  • Metadata Management
  • Microsoft Azure
  • Machine Learning (ML)
  • Business Intelligence
  • Databricks
  • Insurance
  • Life Insurance
  • FSA
  • Health Care

Summary

Senior Data Engineer - Azure & Databricks

Industry: Technology Consulting / Digital Transformation

Role Level: Lead II - Software Engineering

Location: Alpharetta, Georgia (USA)

Employment Type: Full-Time

About the Role:

We are seeking an experienced Senior Data Engineer - Azure & Databricks with a proven track record (8+ years) of designing, building, and scaling modern data platforms within the Azure ecosystem.

This role is ideal for someone who thrives in complex enterprise environments, is deeply hands-on with Databricks and Spark, and can lead end-to-end data engineering initiatives from architecture to implementation.

You will be responsible for building robust, high-performance data pipelines, establishing engineering best practices, and collaborating with cross-functional teams to deliver reliable, scalable data solutions across both batch and real-time streams.

Key Responsibilities:
  • Architect, design, and implement scalable data platforms and pipelines using Azure and Databricks.
  • Build and optimise ingestion, transformation, and processing workflows for batch and real-time (streaming) use cases.
  • Work extensively with ADLS, Delta Lake, and Spark (Python/PySpark) to enable large-scale data engineering capabilities.
  • Lead the development of complex ETL/ELT pipelines, ensuring performance, reliability, and code quality.
  • Design conceptual, logical, and physical data models to support analytics and operational workloads.
  • Work with relational and lakehouse systems, including PostgreSQL and Delta Lake.
  • Define, implement, and enforce best practices for data governance, security, quality, and architecture standards.
  • Partner closely with architects, data scientists, analysts, and business teams to translate requirements into scalable technical solutions.
  • Troubleshoot production issues, drive performance optimisation, and support continuous platform improvements.
  • Mentor junior engineers and contribute to the creation of reusable components and engineering standards.

Required Qualifications:
  • 8+ years of hands-on data engineering experience in enterprise environments.
  • Strong expertise in Azure services, particularly Azure Databricks and Azure Functions; Azure Data Factory experience preferred.
  • Advanced proficiency in Apache Spark with Python (PySpark).
  • Strong SQL skills, including query optimisation and performance tuning.
  • Deep experience with ETL/ELT methodologies, scheduling, and data orchestration.
  • Hands-on expertise with Delta Lake (ACID, schema evolution, performance tuning).
  • Strong understanding of data modelling (normalised, dimensional, lakehouse).
  • Proven experience with batch and streaming technologies such as Kafka or Azure Event Hubs.
  • Solid grasp of data architecture, distributed systems, and cloud-native design patterns.
  • Ability to design and evaluate end-to-end technical solutions and recommend best-fit architectures.
  • Excellent analytical, problem-solving, and communication skills.
  • Ability to collaborate across teams and lead technical discussions.

Preferred Skills:
  • Experience with CI/CD tools such as Azure DevOps and Git.
  • Familiarity with Infrastructure-as-Code (Terraform, ARM).
  • Exposure to data governance and metadata cataloguing tools (e.g., Azure Purview).
  • Experience supporting machine learning or BI workloads on Databricks.

Benefits:

For Full-Time, Regular Employees
  • Minimum 10 days paid vacation annually
  • 6 days paid sick leave (prorated for new hires)
  • 10 paid holidays
  • Paid bereavement and jury duty leave
  • Eligibility for 401(k) Retirement Plan with employer match
  • Medical, dental, and vision insurance eligibility (employee + dependents)
  • Company-paid:
      • Basic life insurance
      • Accidental death & disability coverage
      • Short- and long-term disability
  • Access to HSA and FSA programs (healthcare, dependent care, commuting)
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.
  • Dice Id: RESLBFEED
  • Position Id: 27099_27278_fbcf43af7fb496796717127ec5a7d44c