Job Details
Our team is looking for a Senior Data Engineer. Data is essential for all our decision-making, whether it's related to product design, measuring advertising effectiveness, helping users discover new content, or building new businesses in emerging markets. This data is deeply valuable and gives us insight into how we can continue improving our service for our users, advertisers, and content partners. Our Audience team is seeking a highly motivated Data Engineer with a strong technical background and a passion for diving deep into Big Data to develop state-of-the-art data solutions.
Responsibilities:
• Contribute to the design and growth of our Data Products and Data Warehouses around Engagement and Retention Analytics and Data Science
• Design and develop scalable data warehousing solutions, building ETL pipelines in Big Data environments (cloud, on-prem, hybrid)
• Work with our tech stack, which includes Hadoop, AWS, Snowflake, Spark, and Airflow, with Python and Scala as primary languages
• Help architect data solutions/frameworks and define data models for the underlying data warehouse and data marts
• Collaborate with Data Product Managers, Data Architects, and Data Engineers to design, implement, and deliver successful data solutions
• Maintain detailed documentation of your work and changes to support data quality and data governance
• Ensure high operational efficiency and quality of your solutions to meet SLAs and support our commitment to our customers (Data Science and Data Analytics teams)
• Be an active participant in and advocate of agile/scrum practices to ensure the health and continuous improvement of your team's processes
Required Skills:
• 6+ years of data engineering experience developing large data pipelines
• Strong SQL skills and the ability to create queries to extract data and build performant datasets
• Hands-on experience with distributed systems such as Spark and Hadoop (HDFS, Hive, Presto, PySpark) to query and process data
• Strong programming skills in Python, Scala, or Java
• Experience with at least one major MPP or cloud database technology (Snowflake, Redshift, BigQuery)
• Solid experience with data integration toolsets (e.g., Airflow) and writing and maintaining data pipelines
• Strong grasp of data modeling techniques and data warehousing standard methodologies and practices
• Familiarity with Scrum and Agile methodologies
• A problem solver with strong attention to detail and excellent analytical and communication skills
• Nice to have: experience with cloud technologies such as AWS (S3, EMR, EC2)
Education Required: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field
About Korn Ferry
Korn Ferry unleashes potential in people, teams, and organizations. We work with our clients to design optimal organization structures, roles, and responsibilities. We help them hire the right people and advise them on how to reward and motivate their workforce while developing professionals as they navigate and advance their careers. To learn more, please visit Korn Ferry.