Overview
Work arrangement: Hybrid
Pay rate: $30 - $40
Employment type: Contract - W2
Skills
- Hadoop
- Hive
- Spark
- SQL
- Python
- Version control (Git) and CI/CD tools
- Modern data governance and observability practices
Job Details
Data Engineer
Location: Pittsburgh, Cleveland, or Dallas (Hybrid: 3 days onsite)
Type: Contract-to-Hire
Experience: 4-6 years
Key Responsibilities:
- Design and implement scalable data pipelines using Hadoop, Spark, and Hive
- Build and maintain ETL/ELT frameworks for batch and streaming data
- Collaborate with product teams to ingest, transform, and serve model-ready datasets
- Optimize data workflows for performance and reliability
- Ensure pipeline quality through validation, logging, and exception handling
Preferred Skills:
- Hadoop, Hive, Spark, SQL, Python
- Experience with version control (Git) and CI/CD tools
- Familiarity with modern data governance and observability practices
- Cloud experience is a plus (AWS, Azure, Google Cloud Platform)