About Photon
Photon, a global leader in AI and digital solutions, helps clients accelerate AI adoption and embrace Digital Hyper-expansion to make tomorrow happen today. We work with 40% of the Fortune 100, enabling them to stay agile and future-ready in an era of converging digital and AI boundaries. Powering billions of touchpoints a day, Photon combines AI management, digital innovation, product design thinking, and engineering excellence to drive lasting transformation for Fortune 500 clients. We employ several thousand people across dozens of countries.
About the Role
As part of the Mail Analytics Data Engineering team, you will work on large-scale batch pipelines, data serving, data lakehouse, and analytics systems, enabling mission-critical decision-making, downstream AI-powered capabilities, and more.
If you're passionate about building data infrastructure and platforms that power modern Data- and AI-driven business at scale, we want to hear from you!
Your Day
- Develop new, or improve and maintain existing, large-scale data infrastructure and systems for data processing and serving, optimizing complex code through advanced algorithmic techniques and an in-depth understanding of the underlying data system stacks
- Create and contribute to frameworks that improve the efficacy of managing and deploying data platforms and systems, while working with data infrastructure to triage and resolve issues
- Prototype new metrics or data systems
- Define and manage Service Level Agreements for all data sets in allocated areas of ownership
- Develop complex queries, very large-volume data pipelines, and analytics applications to solve analytics and data engineering problems
- Collaborate with engineers, data scientists, and product managers to understand business problems and technical requirements and deliver data solutions
- Provide engineering consulting on large and complex data lakehouse data
You Must Have
- BS in Computer Science/Engineering, relevant technical field, or equivalent practical experience, with specialization in Data Engineering
- 3-5 years of experience in Data Engineering (ETL, data lakehouse, data modeling)
- Strong fundamentals: algorithms, distributed computing, data structures, databases, data warehousing
- Fluency with: Python/Java/SQL
- Self-driven, challenge-loving, and detail-oriented team player with excellent communication skills and the ability to multitask and manage expectations
Preferred
- MS/PhD in Computer Science/Engineering or relevant technical field, with specialization in Data Engineering
- 2+ years of experience in Hadoop/Apache technologies (Pig, Hive, HBase, Storm, Spark, Kafka, Oozie)
- 2+ years of experience in Google Cloud Platform technologies (BigQuery, Dataproc, Dataflow, Composer, Looker)