Data Engineer with Spark

  • Posted 6 hours ago | Updated 6 hours ago

Overview

Remote
Depends on Experience
Full Time

Skills

Spark
Scala
Hive

Job Details

Responsibilities:

  • Design, develop, and maintain scalable data pipelines using Spark and Scala.
  • Implement ETL processes to transform and load data into Snowflake and other data warehouses.
  • Develop and optimize Hive queries for efficient data extraction and transformation.
  • Collaborate with data scientists and business analysts to ensure data quality and consistency.
  • Build and maintain data models and ETL processes to support business intelligence and reporting tools.
  • Monitor and troubleshoot data pipelines to ensure high availability and performance.
  • Implement Python scripts for data processing and automation tasks.
  • Stay up to date with the latest trends and technologies in data engineering and propose improvements.

Requirements:

  • 5+ years of experience as a Data Engineer or in a similar role.
  • Proficiency in Spark, Scala, Hive, Python, and Snowflake.
  • Strong understanding of ETL processes and data warehousing concepts.
  • Experience with SQL and NoSQL databases.

About CitiusTech