Data Engineer

Overview

On Site
Contract - W2
Contract - 12 month(s)

Skills

Python
SQL
Spark/PySpark
Azure
CI/CD
Databricks
Delta Live Tables

Job Details

Title: Data Engineer

Location: Cincinnati, OH (3 days on-site required)

Duration: 1 Year Contract (potential for conversion/extension)

The team is seeking a Data Engineer experienced in implementing modern data solutions in Azure, with strong hands-on skills in Databricks, Spark, Python, and cloud-based DataOps practices. The Data Engineer will analyze, design, and develop data products, pipelines, and information architecture deliverables, focusing on data as an enterprise asset. This role also supports cloud infrastructure automation and CI/CD using Terraform, GitHub, and GitHub Actions to deliver scalable, reliable, and secure data solutions.

Key Responsibilities

Analyze, design, and develop enterprise data solutions with a focus on Azure, Databricks, Spark, Python, and SQL

Develop, optimize, and maintain Spark/PySpark data pipelines, including managing performance issues such as data skew, partitioning, caching, and shuffle optimization (an illustrative sketch follows this list)

Build and support Delta Lake tables and data models for analytical and operational use cases

Apply reusable design patterns, data standards, and architecture guidelines across the enterprise, including collaborating with the end client when needed

Use Terraform to provision and manage cloud and Databricks resources, supporting Infrastructure as Code (IaC) practices

Implement and maintain CI/CD workflows using GitHub and GitHub Actions for source control, testing, and pipeline deployment

Manage Git-based workflows for Databricks notebooks, jobs, and data engineering artifacts

Troubleshoot failures and improve reliability across Databricks jobs, clusters, and data pipelines

Apply cloud computing skills to deploy fixes, upgrades, and enhancements in Azure environments

Work closely with engineering teams to enhance tools, systems, development processes, and data security

Participate in the development and communication of data strategy, standards, and roadmaps

Draft architectural diagrams, interface specifications, and other design documents

Promote the reuse of data assets and contribute to enterprise data catalog practices

Deliver timely and effective support and communication to stakeholders and end users

Mentor team members on data engineering principles, best practices, and emerging technologies
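
For illustration only, a minimal PySpark sketch of the kind of pipeline tuning and Delta Lake work described above. Table, schema, and column names are hypothetical, and the sketch assumes a Spark session with Delta Lake available (for example, on a Databricks cluster):

    # Illustrative sketch only: hypothetical table and column names.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Read hypothetical source tables.
    orders = spark.read.table("raw.orders")
    customers = spark.read.table("raw.customers")

    # Broadcast the smaller dimension table so the join avoids shuffling a skewed key.
    enriched = orders.join(F.broadcast(customers), on="customer_id", how="left")

    # Repartition on the write key to control shuffle and file sizes; cache if reused downstream.
    enriched = enriched.repartition(200, "order_date").cache()

    daily_totals = (
        enriched.groupBy("order_date")
        .agg(F.sum("order_amount").alias("total_amount"))
    )

    # Persist results as a Delta table partitioned by date.
    (
        daily_totals.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("analytics.daily_order_totals")
    )

The broadcast join is one common way to sidestep skew on a hot join key; key salting or adaptive query execution are alternatives when both sides of the join are large.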

Requirements

7+ years of experience as a Data Engineer

Hands-on experience with Azure Databricks, Spark, and Python

Experience with Delta Live Tables (DLT) and Databricks SQL (an illustrative DLT sketch follows this list)

Strong SQL and database background

Experience with Azure Functions, messaging services, or orchestration tools

Familiarity with data governance, lineage, or cataloging tools (e.g., Purview, Unity Catalog)

Experience monitoring and optimizing Databricks clusters or workflows

Experience working with Azure cloud data services and understanding how they integrate with Databricks and enterprise data platforms

Experience with Terraform for cloud infrastructure provisioning

Experience with GitHub and GitHub Actions for version control and CI/CD automation

Strong understanding of distributed computing concepts (partitions, joins, shuffles, cluster behavior)

Familiarity with SDLC and modern engineering practices

Ability to balance multiple priorities, work independently, and stay organized
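
For illustration only, a minimal Delta Live Tables (DLT) pipeline definition in Python. Table names, the storage path, and the data-quality expectation are hypothetical; the sketch assumes it runs inside a Databricks DLT pipeline, where the dlt module and spark session are provided by the runtime:

    # Illustrative sketch only: hypothetical table names, path, and expectation.
    # Assumes execution inside a Databricks Delta Live Tables pipeline,
    # where `dlt` and `spark` are provided by the runtime.
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw orders ingested from cloud storage.")
    def raw_orders():
        return spark.read.format("json").load("/mnt/landing/orders/")

    @dlt.table(comment="Cleaned orders with a basic data-quality check.")
    @dlt.expect_or_drop("valid_amount", "order_amount > 0")
    def clean_orders():
        return dlt.read("raw_orders").withColumn("ingest_date", F.current_date())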
