Data Engineer - W2 only

Overview

Remote
Depends on Experience
Contract - W2

Skills

Databricks
Snowflake
BigQuery
Redshift
Synapse
Data Lake / Lakehouse architectures
AWS
Azure
GCP

Job Details

Job Title: Data Engineer

Location: Remote / Onsite (Flexible)
Employment Type: W2 only

About the Role

We are looking for an experienced Data Engineer who can design, build, and optimize large-scale data pipelines and analytics platforms. The ideal candidate has strong hands-on skills in ETL/ELT development, cloud data platforms, big data processing, and modern data engineering frameworks.


Key Responsibilities

  • Design and build scalable ETL/ELT pipelines for batch and streaming workloads.

  • Develop data processing workflows using Python, SQL, Spark, or PySpark.

  • Implement data solutions on cloud platforms (AWS, Azure, or Google Cloud Platform).

  • Work with Databricks, Snowflake, BigQuery, Redshift, Synapse, or similar systems.

  • Develop and maintain Data Lake / Lakehouse architectures.

  • Optimize pipeline performance, reliability, and cost efficiency.

  • Collaborate with Data Analysts, BI teams, and Data Scientists to deliver high-quality datasets.

  • Ensure data governance, quality, and security across all platforms.

  • Build automated CI/CD workflows for data infrastructure and pipelines.

  • Monitor, troubleshoot, and enhance existing data systems and processes.


Required Skills & Experience

  • Strong programming skills in Python and advanced SQL.

  • Hands-on experience with Apache Spark / PySpark.

  • Experience building pipelines on Databricks, Snowflake, Azure Data Factory, AWS Glue, or Google Cloud Dataflow.

  • Solid understanding of data warehousing, dimensional modeling, and relational databases.

  • Experience working with cloud storage systems (S3, ADLS, GCS).

  • Hands-on experience with Airflow, dbt, or similar orchestration and transformation tools.

  • Knowledge of CI/CD, Git, and DevOps practices for data engineering.

  • Understanding of data governance, lineage, quality, and security best practices.


Preferred Qualifications

  • Experience with streaming frameworks such as Kafka, Kinesis, or Pub/Sub.

  • Hands-on experience with Lakehouse architecture (e.g., Delta Lake).

  • Familiarity with ML workflows, feature engineering, and MLOps.

  • Cloud certifications in AWS, Azure, or Google Cloud Platform.

  • Experience optimizing Spark jobs and tuning big-data workloads.
