Data Engineer - Scala, Spark, Python, Google Cloud Platform

Overview

Work Arrangement: On Site
Compensation: Depends on Experience
Employment Type: Contract - Independent or Contract - W2
Contract Duration: 12 Months
Travel: No Travel Required

Skills

Data Engineer
Hadoop
Spark
Hive
Airflow
Scala
Python

Job Details

We are seeking a highly skilled Data Engineer with strong expertise in Scala, Spark, Python, and Google Cloud Platform to design and build scalable big data solutions. The ideal candidate will have hands-on experience with modern data platforms, workflow orchestration, and large-scale distributed processing.

Key Responsibilities:

  • Design and build scalable big data applications using open-source technologies such as Spark, Hive, and Kafka

  • Develop data pipelines and orchestrate workflows using Apache Airflow (see the orchestration sketch following this list)

  • Implement and optimize ETL/ELT pipelines in Google Cloud Platform (Dataproc, GCS, BigQuery); an illustrative pipeline sketch follows this list

  • Model and design schemas for data lakes and RDBMS platforms

  • Automate data workflows and manage datasets at multi-terabyte to petabyte scale

  • Provide ongoing support, maintenance, and participate in on-call rotations

  • Collaborate with cross-functional teams to deliver clean, reliable data products
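
To give a concrete picture of this kind of pipeline work, below is a minimal illustrative PySpark sketch: reading raw events from GCS, aggregating them, and loading the result into BigQuery. It assumes a Dataproc cluster with the spark-bigquery connector available; all bucket, dataset, and column names are hypothetical placeholders, not details of this role.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("events-etl").getOrCreate()

    # Extract: raw JSON events landed in a GCS bucket (hypothetical path).
    raw = spark.read.json("gs://example-bucket/raw/events/")

    # Transform: drop malformed rows and aggregate to daily counts.
    daily = (
        raw.filter(F.col("event_type").isNotNull())
           .withColumn("event_date", F.to_date("event_ts"))
           .groupBy("event_date", "event_type")
           .agg(F.count("*").alias("event_count"))
    )

    # Load: write to BigQuery via the spark-bigquery connector, which
    # stages the data through a temporary GCS bucket.
    (daily.write.format("bigquery")
          .option("table", "example_dataset.daily_event_counts")
          .option("temporaryGcsBucket", "example-temp-bucket")
          .mode("overwrite")
          .save())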
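
A matching illustrative sketch of the orchestration side: an Airflow DAG that submits the job above to Dataproc on a daily schedule. It assumes Airflow 2.4+ with the Google provider package installed; project, region, cluster, and path names are again hypothetical.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.dataproc import (
        DataprocSubmitJobOperator,
    )

    PYSPARK_JOB = {
        "reference": {"project_id": "example-project"},
        "placement": {"cluster_name": "example-cluster"},
        "pyspark_job": {
            "main_python_file_uri": "gs://example-bucket/jobs/events_etl.py"
        },
    }

    with DAG(
        dag_id="daily_events_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # Submit the PySpark ETL job above to an existing Dataproc cluster.
        run_etl = DataprocSubmitJobOperator(
            task_id="run_events_etl",
            project_id="example-project",
            region="us-central1",
            job=PYSPARK_JOB,
        )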


Required Skills:

  • 5+ years of experience with Hadoop, Spark, Hive, Airflow, or equivalent big data tools

  • Proficiency in Scala, Python, and scripting languages such as Shell or Perl

  • Strong experience with data modeling (logical and physical)

  • Hands-on Google Cloud Platform experience (Dataproc, GCS, BigQuery)

  • Knowledge of distributed systems, test-driven development, and automated testing frameworks
