5-15 years' experience in software development, primarily in the Big Data domain using the Spark & Hadoop ecosystem.
5-15 years' experience in Spark & Scala/Java (skill level: 8 or more out of 10)
Very strong in multi-threading & object-oriented design
Master's degree in Computer Science or equivalent.
4+ years' experience designing, building, and launching highly efficient & reliable data pipelines to move data (in both large and small volumes) using modern data architectures/tools.
Strong background (5+ years) in ETL processing for large volumes of data
3+ years' experience working in an Agile environment (Scrum/Kanban)
Experience working with AWS services (S3, Glue, EMR, Lambda, EC2, RDS)