Location: Cincinnati, OH
Salary: $65.00 USD Hourly - $70.00 USD Hourly
Description:
About the Role
We are looking for a Data Engineer with strong experience building and operating modern data platforms on Microsoft Azure. In this role, you will design, develop, and optimize scalable data solutions using Databricks, Spark, Python, and SQL, while applying cloud-native DataOps and CI/CD practices.
You will work on enterprise data products and pipelines that treat data as a strategic asset, supporting analytics, reporting, and operational use cases across the organization.
Responsibilities
- Design, build, and maintain enterprise-scale data solutions using Azure, Databricks, Spark, Python, and SQL
- Develop and optimize Spark/PySpark pipelines, addressing performance challenges such as data skew, partitioning, caching, and shuffle optimization
- Build and manage Delta Lake tables and data models for analytical and operational workloads
- Apply reusable data architecture patterns, standards, and best practices across teams
- Provision and manage Azure and Databricks resources using Terraform and Infrastructure as Code (IaC) principles
- Implement and maintain CI/CD pipelines using GitHub and GitHub Actions for testing, deployment, and automation
- Manage Git-based workflows for Databricks notebooks, jobs, and data engineering artifacts
- Monitor, troubleshoot, and improve reliability of Databricks clusters, workflows, and jobs
- Deploy fixes, enhancements, and upgrades in Azure cloud environments
- Collaborate with engineering and platform teams to improve tools, development processes, and data security
- Contribute to data strategy, standards, and technical roadmaps
- Create and maintain architecture diagrams, interface specifications, and design documentation
- Promote reuse of data assets and support enterprise data catalog and governance practices
- Provide clear communication and timely support to stakeholders and end users
- Mentor team members on data engineering best practices and emerging technologies
Minimum Qualifications
- 5+ years of experience as a Data Engineer
- Hands-on experience with Azure Databricks, Apache Spark, and Python
- Strong SQL skills and solid database fundamentals
- Experience with Delta Live Tables (DLT) or Databricks SQL
- Experience working with Azure cloud data services and integrating them with Databricks
- Understanding of distributed computing concepts (partitions, joins, shuffles, cluster behavior)
- Familiarity with SDLC and modern engineering practices
- Ability to work independently, manage multiple priorities, and stay organized
Preferred Qualifications
- Experience with Azure Functions, messaging services, or orchestration tools
- Familiarity with data governance, lineage, and cataloging tools (e.g., Unity Catalog, Purview, or similar)
- Experience monitoring and optimizing Databricks clusters and workflows
- Hands-on experience with Terraform for cloud infrastructure provisioning
- Experience using GitHub and GitHub Actions for version control and CI/CD automation
- Exposure to enterprise data platforms and large-scale data ecosystems
Contact: This job and many more are available through The Judge Group. Please apply with us today!