Overview
- Remote
- Compensation: Depends on Experience
- Accepts corp-to-corp applications
- Contract - W2
Skills
- YAML
- EV2
- Azure
- Data Lake
- Storage
- Databricks
- Synapse
- ADF
Job Details
Job Summary
- We are seeking a highly skilled Senior Data Engineer with 12+ years of experience to design, build, and optimize large-scale data processing solutions.
- The ideal candidate must have strong hands-on expertise in Apache Spark with Scala, Azure data engineering services, and EV2/YAML pipelines, along with recent experience working on Microsoft projects or environments.
Key Responsibilities
- Design, develop, and optimize distributed data processing pipelines using Apache Spark and Scala.
- Build and automate YAML-based EV2 deployment pipelines.
- Develop and manage Azure Data Factory (ADF) pipelines, data flows, and orchestrations.
- Work with Microsoft Fabric semantic models for data modeling and analytics solutions.
- Integrate and manage various Azure services including Storage, Key Vault, Data Lake, Databricks, Synapse, etc.
- Write and optimize complex SQL queries for data transformation, validation, and reporting.
- Implement best practices for data quality, performance tuning, and scalable architecture.
- Collaborate with cross-functional teams, including Azure engineers, analysts, and business stakeholders, using agile methodologies.
- Ensure CI/CD automation, versioning, and secure deployments following Microsoft standards.
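The Ev2 rollout schema itself is Microsoft-internal, so as a rough illustration of the YAML-based deployment automation described above, here is a minimal Azure DevOps-style pipeline sketch; all stage, environment, and service-connection names are hypothetical, not taken from the posting.

```yaml
# Illustrative only: a minimal YAML build-and-deploy pipeline sketch.
# Stage, environment, and connection names are hypothetical.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildArtifacts
        steps:
          - script: sbt package            # compile the Spark/Scala job into a JAR
            displayName: Build Spark job

  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: DeployToAzure
        environment: production            # hypothetical environment name
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureCLI@2
                  inputs:
                    azureSubscription: my-service-connection   # hypothetical
                    scriptType: bash
                    scriptLocation: inlineScript
                    inlineScript: |
                      # e.g. upload the JAR to storage and trigger an ADF/Databricks run
                      echo "deploy step placeholder"
```

A real Ev2 rollout would express the same build-then-deploy staging through Ev2's own rollout and service-model specs rather than Azure DevOps stages.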
Required Skills (Mandatory)
- Strong proficiency in Apache Spark with hands-on experience in Scala
- YAML & EV2 Pipelines
- Microsoft Fabric semantic models
- Azure Data Engineering Services (Data Lake, Storage, Databricks, Synapse, Key Vault, etc.)
- Azure Data Factory (ADF)