Data Engineer

Overview

On Site
USD 64.00 - 68.00 per hour
Full Time

Skills

Innovation
Data Architecture
Focus
Scalability
Clarity
Legacy Systems
Big Data
Cost-benefit Analysis
Migration
Technical Writing
Apache Spark
Computer Science
Software Engineering
Data Engineering
Data Modeling
SQL
NoSQL
Amazon Web Services
Data Integration
Communication
Collaboration
Microsoft Azure
Data Lake
Databricks
Microsoft SSAS
Microsoft Power BI
DAX
Data Flow
Streaming
Apache Kafka
IBM WebSphere MQ
Python
PySpark
Cloud Computing
Enterprise Resource Planning
Analytics
Privacy
Marketing

Job Details

Location: Cincinnati, OH
Salary: $64.00 - $68.00 USD per hour
Description:
Job Overview

We're seeking a data-driven technologist to design and deliver scalable data architecture solutions that support strategic business outcomes. This role involves developing enterprise-wide data strategies, modernizing legacy systems, and enabling seamless integration and analytics across platforms. You'll collaborate with cross-functional teams and external partners, treating data as a strategic asset and driving innovation.

Key Responsibilities
  • Design and implement enterprise data architecture with a focus on scalability, reusability, and performance.
  • Collaborate across teams to align on data initiatives, resolve blockers, and ensure project clarity.
  • Modernize legacy systems using cloud-native, SQL/NoSQL, and big data technologies.
  • Define and execute migration strategies to transition from current to future-state architectures.
  • Conduct cost-benefit analyses to support data-driven architectural decisions.
  • Identify and resolve architectural inefficiencies in existing systems.
  • Promote reuse of data assets and maintain a centralized data catalog.
  • Evaluate and implement tools for data ingestion, transformation, and migration, especially for ERP systems.
  • Create architectural diagrams, interface specifications, and technical documentation.
  • Build robust data pipelines and models using tools such as Databricks, Azure, AWS, and Spark (see the sketch after this list).
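
For illustration only, here is a minimal sketch of the kind of batch pipeline work described in the last bullet, assuming a Databricks/PySpark environment; the paths, column names, and app name are hypothetical and not taken from this posting:

  # Illustrative sketch only: a small PySpark batch pipeline.
  # All paths and column names below are hypothetical.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

  # Ingest raw landing-zone data (hypothetical location).
  raw = spark.read.json("s3://example-bucket/landing/orders/")

  # Transform: cast types, derive a partition date, and deduplicate.
  orders = (
      raw.select(
          F.col("order_id").cast("string"),
          F.col("amount").cast("double"),
          F.to_timestamp("created_at").alias("created_at"),
      )
      .withColumn("order_date", F.to_date("created_at"))
      .dropDuplicates(["order_id"])
  )

  # Publish a partitioned, analytics-ready table (hypothetical location).
  (orders.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/orders/"))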


Minimum Qualifications
  • Bachelor's degree in Computer Science, Software Engineering, or a related field.
  • 4+ years of experience delivering complex, scalable data solutions.
  • Strong understanding of data engineering principles, including data modeling and pipeline development.
  • Hands-on experience with SQL, NoSQL, cloud platforms (Azure/AWS), and data integration tools.
  • Excellent communication and collaboration skills.


Preferred Qualifications
  • Experience with Azure Data Platform (Data Lake, Data Factory, Databricks, Synapse).
  • Familiarity with SSAS Tabular, Power BI, DAX, and Dataflows.
  • Knowledge of streaming platforms such as Kafka, EventHub, or IBM MQ (see the streaming sketch after this list).
  • Proficiency in Python, PySpark, and cloud-native development.
  • Experience with ERP systems and analytics platforms.
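
As a further illustration, here is a minimal Spark Structured Streaming sketch that reads from a Kafka topic, assuming the spark-sql-kafka connector is available; the broker address, topic, and storage paths are hypothetical:

  # Illustrative sketch only: consume a Kafka topic with Structured Streaming.
  # Broker, topic, and storage paths below are hypothetical.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("erp_events_stream").getOrCreate()

  # Read the raw Kafka stream and decode key/value as strings.
  events = (
      spark.readStream.format("kafka")
      .option("kafka.bootstrap.servers", "broker.example.com:9092")
      .option("subscribe", "erp-events")
      .load()
      .select(
          F.col("key").cast("string"),
          F.col("value").cast("string"),
          F.col("timestamp"),
      )
  )

  # Land the stream as Parquet, with a checkpoint for fault-tolerant progress tracking.
  query = (
      events.writeStream
      .format("parquet")
      .option("path", "s3://example-bucket/streams/erp-events/")
      .option("checkpointLocation", "s3://example-bucket/checkpoints/erp-events/")
      .start()
  )
  query.awaitTermination()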


By providing your phone number, you consent to receive automated text messages and calls from the Judge Group, Inc. and its affiliates (collectively "Judge") at that phone number regarding job opportunities, your job application, and other related purposes. Message and data rates apply, and message frequency may vary. Consistent with Judge's Privacy Policy, information obtained from your consent will not be shared with third parties for marketing or promotional purposes. Reply STOP to opt out of receiving telephone calls and text messages from Judge, or HELP for help.

Contact:

This job and many more are available through The Judge Group. Please apply with us today!
Employers have access to artificial intelligence language tools ("AI") that help generate and enhance job descriptions, and AI may have been used to create this description. The position description has been reviewed for accuracy, and Dice believes it correctly reflects the job opportunity.

About Judge Group, Inc.