Mid-level Data Engineer (W2 Contract)

Overview

Remote
$80,000 - $100,000
Contract - W2
Contract - 12 Month(s)
100% Travel

Skills

Extract, Transform, Load (ETL)
Agile
Amazon Redshift
Amazon S3
Amazon Web Services
Analytics
Apache Kafka
Apache Spark
Cloud Computing
Cloud Storage
Computer Science
Confluence
Continuous Delivery
Continuous Integration
Data Engineering
Data Integrity
Data Lake
Data Modeling
Data Warehouse
Database
ELT
Git
Google Cloud Platform
JIRA
Machine Learning (ML)
Microsoft Azure
Orchestration
Pandas
PySpark
Python
Real-time
Reporting
SQL
Snowflake
Sprint
Streaming
Unstructured Data
Version Control
Workflow
Writing

Job Details

Job Title: Mid-level Data Engineer (W2 Contract)

Location: Remote (United States)
Duration: 12 months (W2 only)
Work Authorization: Must be authorized to work in the U.S. without sponsorship


Job Summary:

We are seeking a Mid-level Data Engineer to support data pipeline development, transformation, and integration initiatives within our growing data ecosystem. You will work closely with data architects, analysts, and business stakeholders to build and optimize scalable, reliable data solutions on modern cloud platforms.


Responsibilities:

  • Design, build, and maintain robust data pipelines and ETL/ELT workflows using tools such as Apache Spark, PySpark, or SQL-based frameworks

  • Ingest and process structured and unstructured data from various sources (APIs, databases, files, cloud storage)

  • Develop and optimize queries, data transformations, and aggregations for analytics use cases

  • Ensure data integrity, quality, and consistency across multiple environments

  • Collaborate with data analysts, data scientists, and application developers to support reporting, ML models, and APIs

  • Monitor pipeline performance, troubleshoot failures, and implement logging and alerting

  • Work within Agile teams and contribute to sprint planning, story writing, and code reviews


Required Skills:

  • 3-5 years of experience in data engineering or ETL development

  • Proficiency in Python, SQL, and data transformation frameworks (e.g., PySpark, Pandas)

  • Experience with cloud platforms such as AWS, Google Cloud Platform, or Azure (e.g., S3, BigQuery, Redshift, Azure Data Lake)

  • Familiarity with data warehousing concepts and tools (e.g., Snowflake, BigQuery, Redshift)

  • Experience with workflow orchestration tools like Airflow, Prefect, or Luigi

  • Knowledge of data modeling, schema design, and performance tuning

  • Solid understanding of version control systems (Git) and CI/CD practices


Preferred Qualifications:

  • Bachelor's degree in Computer Science, Engineering, or a related field

  • Familiarity with Kafka, Pub/Sub, or real-time data streaming technologies

  • Experience working in Agile environments with tools such as JIRA and Confluence


Contract Details:

  • W2 only; no C2C or third-party submissions.
