Lead Azure Data Engineer - Raleigh, NC, Onsite. Must-have skills: Python, Azure, Azure Databricks, Azure Data Factory, Azure SQL, Azure Synapse Analytics, ETL/ELT, Apache Kafka, CI/CD pipelines.

  • Raleigh, NC
  • Posted 1 day ago | Updated 1 day ago

Overview

On Site
Depends on Experience
Full Time
Accepts corp to corp applications

Skills

API
Access Control
Amazon Web Services
Analytics
Apache Kafka
Apache Spark
Auditing
Continuous Delivery
Continuous Integration
Azure Cosmos DB
Customer Relationship Management (CRM)
Data Governance
Data Lake
Data Modeling
Data Security
Databricks
ELT
Encryption
Extract, Transform, Load (ETL)
Finance
IaaS
Metadata Management
Microsoft Azure
NoSQL
Python
Regulatory Compliance
SQL
Azure SQL
Storage
Streaming
Workflow

Job Details

Lead Azure Data Engineer

Main Skill: Data Software Engineering
Skill Spec: DSE Python Azure Databricks

Location: Raleigh, NC
Work Mode: Full-time, onsite, 5 days a week
# About the Role

We're looking for a senior-level Azure Data Engineer to join a high-impact data and CRM platform team supporting a leading financial institution. This role is hands-on and onsite in Raleigh, NC, and is ideal for someone who enjoys building scalable data solutions, working with modern Azure services, and collaborating closely with business and technical teams.

If you enjoy solving complex data problems, designing reliable pipelines, and working in an environment that values clarity, ownership, and quality, this role will suit you well.

# What You'll Do

* Design, build, and maintain scalable data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse Analytics
* Develop ETL/ELT solutions for batch and streaming data ingestion
* Work with large datasets, optimizing storage and performance across Azure Data Lake, Azure SQL, and Azure Cosmos DB
* Implement API-based and streaming ingestion pipelines (low-latency processing)
* Monitor, troubleshoot, and optimize data workflows to ensure availability and performance
* Apply data security best practices, including access control, encryption, and auditing
* Automate pipelines and workflows using CI/CD and infrastructure-as-code principles
* Partner with engineering, analytics, and business teams to support data-driven decisions
* Document data architectures, pipelines, and processes to meet compliance and governance standards
* Support data governance efforts such as metadata management, lineage, and cataloging
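To give a concrete flavor of the batch-ingestion work described above, here is a minimal, hypothetical sketch (standard-library Python only; function and field names are illustrative, not from the actual platform) of a watermark-based incremental load, the pattern commonly used to keep ETL/ELT re-runs idempotent:

```python
from datetime import datetime, timezone

def incremental_load(rows, watermark):
    """Return only rows newer than the last run's high-water mark,
    plus the new watermark to persist for the next run.

    `rows` is a list of dicts with an `updated_at` timestamp; in a real
    pipeline this would come from a source table or API, and the
    watermark would be stored in pipeline state (e.g., ADF variables).
    """
    # Keep only rows modified after the previous successful load.
    fresh = [r for r in rows if r["updated_at"] > watermark]
    # Advance the watermark; if nothing is new, keep the old one.
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

rows = [
    {"id": 1, "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 1, 3, tzinfo=timezone.utc)},
]
fresh, wm = incremental_load(rows, datetime(2024, 1, 2, tzinfo=timezone.utc))
print([r["id"] for r in fresh])  # [2]
```

Running the same load twice with the persisted watermark picks up no duplicates, which is the property that makes re-runs and backfills safe.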

### Must-Have Skills

* Strong experience with Apache Spark
* Hands-on expertise with:
  * Azure Data Factory
  * Azure SQL
  * Azure Synapse Analytics
* Solid background in ETL / ELT design and implementation
* Advanced SQL skills, including writing and optimizing complex queries
* Experience with data modeling and large-scale data environments

### Nice-to-Have Skills

* Apache Kafka
* CI/CD pipelines
* Python
* Experience with Azure Databricks
* Familiarity with NoSQL solutions (Azure Cosmos DB)
* Exposure to both Azure and AWS cloud infrastructure


About Keylent