Job Details
Key Responsibilities:
Design and build scalable big data applications using open-source technologies such as Spark, Hive, and Kafka
Develop data pipelines and orchestrate workflows using Apache Airflow
Implement and optimize ETL/ELT pipelines in Google Cloud Platform (Dataproc, GCS, BigQuery)
Model and design schemas for data lakes and RDBMS platforms
Automate data workflows and manage multi-TB/PB-scale datasets
Provide ongoing support, maintenance, and participate in on-call rotations
Collaborate with cross-functional teams to deliver clean, reliable data products
Required Skills:
5+ years of experience with Hadoop, Spark, Hive, Airflow, or equivalent big data tools
Proficiency in Scala, Python, and scripting languages such as Shell or Perl
Strong experience with data modeling (logical and physical)
Hands-on Google Cloud Platform experience (Dataproc, GCS, BigQuery)
Knowledge of distributed systems, test-driven development, and automated testing frameworks