Job Details
Develop and optimize data pipelines using Apache Spark (Java) and Photon for efficient extraction, transformation, and loading (ETL) of data.
Experience with Delta Lake as a storage backend and with PL/SQL.
Write clean, efficient, and well-documented code to automate data workflows and integrate with various data sources.
5+ years of experience in data engineering, with strong proficiency in SQL and Apache Spark (Java).
Hands-on experience with relational databases (e.g., Oracle, AWS Aurora) and NoSQL databases (e.g., Cassandra).
Experience with scheduling and monitoring tools such as Control-M and Grafana.
Familiarity with ETL tools (e.g., AWS Glue), data pipeline frameworks (e.g., Apache Airflow), and cloud platforms (AWS).
Knowledge of data modeling, schema design, and performance optimization techniques.
Experience with monitoring tools such as Splunk and Dynatrace.
Experience with CI/CD pipelines such as Jules and trueCD.
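To make the ETL responsibility above concrete, here is a minimal sketch of the extract-transform-load pattern in plain Java. It is illustrative only: the class name, record shape, and field names are hypothetical, and a production pipeline for this role would use the Spark DataFrame/Dataset API reading from and writing to Delta Lake rather than in-memory collections.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class EtlSketch {
    // "Extract": raw CSV-like rows, each "id,region,amount" (hypothetical shape)
    static List<String> extract() {
        return List.of("1,us-east,10.5", "2,eu-west,7.25", "bad-row", "3,us-east,2.0");
    }

    // "Transform": drop malformed rows, then aggregate amounts per region
    static Map<String, Double> transform(List<String> rows) {
        return rows.stream()
                .map(r -> r.split(","))
                .filter(fields -> fields.length == 3)          // basic data-quality filter
                .collect(Collectors.groupingBy(
                        fields -> fields[1],                    // key: region
                        Collectors.summingDouble(
                                fields -> Double.parseDouble(fields[2]))));
    }

    public static void main(String[] args) {
        // "Load": printed here; a real pipeline would write to a Delta Lake table
        transform(extract()).forEach((region, total) ->
                System.out.println(region + " -> " + total));
    }
}
```

The same filter-then-aggregate shape maps directly onto Spark transformations (`filter`, `groupBy`, `agg`) once the in-memory list is replaced by a distributed Dataset.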