Overview
Hybrid
Depends on Experience
Contract - W2
Contract - 12 Months
50% Travel
Skills
Data Engineering
Data Flow
Data Lake
Data Modeling
Data Processing
ELT
Apache Spark
Apache Kafka
Azure
RBAC
Python
SQL
PySpark
Snowflake Schema
NoSQL
Cosmos DB
Job Details
Job Title: Azure Data Engineer
Location: Cincinnati, OH (Hybrid, Local Candidates Only)
Employment Type: W2 Only
Duration: 12+ Months
Job Summary
We are looking for an Azure Data Engineer to design, build, and optimize scalable data pipelines and analytics solutions on Microsoft Azure. The ideal candidate will have hands-on experience with Azure Data Factory, Databricks, Synapse, and SQL/NoSQL databases, along with expertise in ETL/ELT processes, data modeling, and performance tuning. You will collaborate with data scientists, analysts, and business teams to enable data-driven decision-making.
Key Responsibilities
- Design and implement scalable ETL/ELT pipelines using Azure Data Factory, Databricks, and Synapse Analytics.
- Develop Delta Lake/Parquet-based data lakes with optimized partitioning and indexing.
- Automate data workflows using Azure Functions, Logic Apps, or Event Grid.
- Optimize Spark jobs (PySpark/SQL) for performance and cost efficiency.
- Implement Azure SQL DB, Cosmos DB, or Synapse SQL Pools for structured/unstructured data.
- Configure Azure Blob Storage, ADLS Gen2, and Data Lake for raw/curated zones.
- Apply slowly changing dimensions (SCD), CDC, and data masking techniques.
- Set up real-time streaming (Kafka, Event Hubs) and batch processing.
- Enforce data security, encryption (TDE, Always Encrypted), and RBAC policies.
- Implement data lineage, cataloging (Purview), and monitoring (Log Analytics).
- Work with Power BI teams to optimize semantic models and datasets.
- Document data flows, schemas, and architecture diagrams.
- Stay current with new Azure capabilities (Fabric, OneLake, AI integrations).
Required Skills & Qualifications
- 10+ years of data engineering experience, with a focus on Azure.
- Expertise in Azure Data Factory, Databricks, and Synapse.
- Strong SQL/Python/PySpark skills for data transformation.
- Experience with Delta Lake, Parquet, and medallion architecture.
- Knowledge of data modeling (star schema, data vault).
- Understanding of DevOps (CI/CD, IaC with Terraform/Bicep).
- Experience with Azure Purview, Stream Analytics, or Cosmos DB.
- Knowledge of dbt, Snowflake, or Power BI Premium.
- Exposure to AI/ML pipelines (Azure ML, Cognitive Services).
- Azure Data Engineer (DP-203) certification preferred.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.