Job Details
Responsibilities:
Design, develop, and maintain scalable data pipelines using Spark and Scala.
Implement ETL processes to transform and load data into Snowflake and other data warehouses.
Develop and optimize Hive queries for efficient data extraction and transformation.
Collaborate with data scientists and business analysts to ensure data quality and consistency.
Build and maintain data models and ETL processes to support business intelligence and reporting tools.
Monitor and troubleshoot data pipelines to ensure high availability and performance.
Implement Python scripts for data processing and automation tasks.
Stay up-to-date with the latest trends and technologies in data engineering and propose improvements.
Requirements:
5+ years of experience as a Data Engineer or in a similar role.
Proficiency in Spark, Scala, Hive, Python, and Snowflake.
Strong understanding of ETL processes and data warehousing concepts.
Experience with SQL and NoSQL databases.
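The ETL work described above can be sketched in miniature as a transform step in plain Python. This is an illustrative example only, with hypothetical field names and data; a production pipeline for this role would run on Spark/Scala and load into Snowflake rather than process CSV text in memory.

```python
import csv
import io

def transform(rows):
    """Normalize raw records: strip whitespace, cast amounts, drop bad rows.

    Illustrative stand-in for the 'transform' stage of an ETL pipeline;
    field names ("id", "amount") are hypothetical.
    """
    cleaned = []
    for row in rows:
        try:
            cleaned.append({
                "id": row["id"].strip(),
                "amount": float(row["amount"]),
            })
        except (KeyError, ValueError):
            # Skip malformed records instead of failing the whole batch.
            continue
    return cleaned

# Hypothetical raw extract, including one malformed row.
raw = io.StringIO("id,amount\n a1 ,10.5\nbad,notanum\na2,3\n")
records = transform(list(csv.DictReader(raw)))
```

In a real pipeline the cleaned records would then be staged and loaded into the warehouse, with the same "tolerate and log bad rows" design choice applied at scale.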