Direct Client :: W2 Position :: Hadoop & Python ETL Developer (Spark + Cloud + Snowflake ELT) :: Jacksonville, FL (Hybrid)

Hybrid in Jacksonville, FL, US • Posted 7 hours ago • Updated 7 hours ago
Employment Type: Contract W2
Travel: No Travel Required
Work Arrangement: Hybrid
Compensation: Depends on Experience

Job Details

Skills

  • Apache Spark
  • Apache Hadoop
  • Apache Hive
  • Data Warehouse
  • ELT
  • Extract, Transform, Load
  • HDFS
  • Python
  • Snowflake Schema
  • Cloud Computing
  • Microsoft Azure

Summary

W2 contract-to-hire position (converts to full-time)

===

Job Title: Hadoop & Python ETL Developer (Spark + Cloud + Snowflake ELT)

Location: Jacksonville, FL (Hybrid)

 

Job Description:

We are seeking a highly skilled Hadoop & Python ETL Developer to design, build, and optimize scalable data pipelines supporting big data processing and cloud-based analytics platforms. The role focuses on leveraging Apache Spark, Python, and Snowflake ELT to enable efficient data ingestion, transformation, and delivery across enterprise systems. The ideal candidate will have strong experience in distributed data processing, cloud ecosystems, and modern data warehousing practices.
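As a rough illustration of the pipelines described above, here is a minimal PySpark sketch that ingests raw files, applies basic transformations, and writes partitioned Parquet. The paths, column names, and the orders dataset are hypothetical assumptions for illustration, not details from this posting.

```python
# Minimal PySpark ETL sketch: ingest CSV, transform, write Parquet.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl-sketch")
    .getOrCreate()
)

# Extract: read raw CSV from a landing zone (HDFS, S3, ADLS, etc.).
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("hdfs:///landing/orders/")          # hypothetical path
)

# Transform: basic cleansing and derived columns.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("load_ts", F.current_timestamp())
)

# Load: write partitioned Parquet for downstream consumers.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("hdfs:///curated/orders/")      # hypothetical path
)

spark.stop()
```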

 

Roles & Responsibilities:

  • Design and develop scalable ETL/ELT pipelines using Python and Apache Spark.
  • Build and maintain big data solutions using Hadoop ecosystem tools (HDFS, Hive, YARN).
  • Develop data ingestion frameworks for batch and near real-time processing.
  • Implement ELT processes in Snowflake, including data modeling, transformations, and performance tuning (see the Streams/Tasks sketch after this list).
  • Integrate data from multiple sources (APIs, databases, flat files, streaming platforms).
  • Optimize Spark jobs for performance, scalability, and cost efficiency.
  • Work with cloud platforms (AWS / Azure / Google Cloud Platform) for data storage, processing, and orchestration.
  • Develop reusable data processing components and frameworks.
  • Ensure data quality, validation, and governance across pipelines.
  • Collaborate with data architects, analysts, and business stakeholders to understand requirements.
  • Monitor and troubleshoot data pipelines and production issues.
  • Implement CI/CD pipelines for data engineering workflows.
  • Support migration from legacy systems to cloud-based data platforms.
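
As a rough sketch of the Snowflake ELT item above, the snippet below uses the snowflake-connector-python package to create a Stream for change capture and a scheduled Task that moves new rows into a curated table. The connection parameters, object names, and transformation SQL are hypothetical placeholders.

```python
# Hypothetical sketch of Snowflake ELT with Streams and Tasks,
# driven from Python via snowflake-connector-python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # hypothetical credentials
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()

# A stream captures change data (CDC) on the raw table.
cur.execute("""
    CREATE STREAM IF NOT EXISTS raw.orders_stream
    ON TABLE raw.orders
""")

# A task periodically loads new rows from the stream
# into the curated (transformed) table.
cur.execute("""
    CREATE TASK IF NOT EXISTS raw.orders_elt_task
      WAREHOUSE = ETL_WH
      SCHEDULE = '15 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('raw.orders_stream')
    AS
      INSERT INTO curated.orders_clean
      SELECT order_id, customer_id, order_total, order_ts
      FROM raw.orders_stream
      WHERE METADATA$ACTION = 'INSERT'
""")

# Tasks are created suspended; resume to start the schedule.
cur.execute("ALTER TASK raw.orders_elt_task RESUME")

cur.close()
conn.close()
```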

 

Required Skills:

  • Strong programming experience in Python for ETL development.
  • Hands-on experience with Apache Spark (PySpark / Spark SQL).
  • Solid understanding of Hadoop ecosystem (HDFS, Hive, MapReduce concepts).
  • Experience with Snowflake (ELT, SnowSQL, Streams, Tasks, performance tuning).
  • Expertise in SQL and data modeling (dimensional modeling, star/snowflake schema).
  • Experience with cloud platforms (AWS / Azure / Google Cloud Platform) and related data services.
  • Familiarity with orchestration tools (Airflow, Control-M, or similar); a sample DAG sketch follows this list.
  • Knowledge of data formats: JSON, Parquet, Avro, ORC.
  • Experience with version control systems (Git) and CI/CD pipelines.
  • Understanding of distributed computing and parallel processing.
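
For the orchestration point above, a minimal Airflow DAG sketch might look like the following. The DAG id, schedule, and placeholder callables are illustrative assumptions (Airflow 2.x style), not part of this posting.

```python
# Hypothetical Airflow DAG orchestrating an extract -> transform -> load flow.
# DAG id, schedule, and callables are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull from source systems")    # placeholder


def transform():
    print("run Spark / Snowflake jobs")  # placeholder


def load():
    print("publish curated tables")      # placeholder


with DAG(
    dag_id="orders_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```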
  • Dice Id: 90735353
  • Position Id: 8923355