Sunnyvale, California

Skills:
Hadoop Developer, Java, C, C++, NoSQL, Hive, HBase, Spark, Pig, Presto, Oracle, Algorithms, Data Structures, Object-Oriented Programming, Distributed Systems

Description:
- It takes powerful technology to connect our brands and partners with an audience of 1 billion. Nearly half of our employees are building the code and platforms that help us achieve that. Whether you're looking to write mobile app code, engineer the servers behind our massive ad tech stacks, or develop algorithms to help us process 4 trillion data points a day, what you do here will have a huge impact on our business and the world. Want in? Our brands TechCrunch and HuffPost help people stay informed and entertained, communicate and transact, while creating new ways for advertisers and partners to connect. With technologies like XR, AI, machine learning, and 5G, we're transforming media for tomorrow, too. We're creators and coders, dreamers and doers, creating what's next in content, advertising, and technology.
- We're looking for engineers to join the Big Data team in Sunnyvale, CA. In this role, you'd be building some of the largest data lakes in the world! We manage 50K nodes running 1.5 million jobs per day, across thousands of users and tons of applications, processing close to an exabyte of data.
So when considering this position, please ask yourself, do you want to do something exceptional?
- The qualified engineer would work on developing large-scale distributed SQL and NoSQL engines involving one or more of the following open source projects: Hive, Presto, Pig, or HBase. On this outstanding team, we're challenged by complex problems day in and day out, and in having each other to lean on, we find ourselves growing every day as a result.
- We're currently active in 6 Apache projects, and our team consists of numerous committers and PMC members. In other words, you'll make a massive impact not only here but globally as well.
- We work like a startup, do phenomenal work, and have fun while we do it. Come join us and become an authority on big data and cloud computing!
Did I mention free food? Yeah, we have that too.
- Understand all aspects of our distributed systems and learn select components in detail
- Be a leader in open source (Apache Software Foundation) projects
- Design massively distributed technology and develop bleeding-edge cloud computing software
- Work closely with service engineering, operations, and Hadoop users to engineer bleeding-edge solutions that allow us to answer today's most meaningful data-related questions
- Develop brand new caching layer from scratch for Presto
- Build a brand new stats collection layer utilizing sketches for data compression for Hive, Presto, and Spark
- Work on a state-of-the-art rolling upgrade system for HBase
- Craft a scalable and performant OLTP layer for HBase that can compete with Oracle!
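The stats-collection bullet above mentions sketches, i.e. compact probabilistic summaries used in place of exact per-column statistics. As a rough, hypothetical illustration of the idea (not the team's actual implementation), a Count-Min Sketch estimates item frequencies in sub-linear space and never under-counts:

```java
import java.util.Random;

// Minimal Count-Min Sketch: a fixed-size frequency estimator of the kind
// used in query-engine stats layers. Illustrative sketch only; class and
// parameter names here are hypothetical, not from Hive/Presto/Spark code.
public class CountMinSketch {
    private final int depth;      // number of hash rows (more rows -> lower error probability)
    private final int width;      // buckets per row (wider -> smaller over-count)
    private final long[][] table; // counters
    private final int[] seeds;    // one hash seed per row

    public CountMinSketch(int depth, int width) {
        this.depth = depth;
        this.width = width;
        this.table = new long[depth][width];
        this.seeds = new int[depth];
        Random r = new Random(42); // fixed seed for reproducibility
        for (int i = 0; i < depth; i++) seeds[i] = r.nextInt();
    }

    private int bucket(Object item, int row) {
        // Mix the item's hash with the row seed, then map into [0, width)
        int h = item.hashCode() ^ seeds[row];
        return Math.floorMod(h, width);
    }

    public void add(Object item) {
        for (int i = 0; i < depth; i++) table[i][bucket(item, i)]++;
    }

    // Take the minimum across rows: collisions can only inflate a counter,
    // so the estimate is always >= the true count.
    public long estimate(Object item) {
        long min = Long.MAX_VALUE;
        for (int i = 0; i < depth; i++) {
            min = Math.min(min, table[i][bucket(item, i)]);
        }
        return min;
    }

    public static void main(String[] args) {
        CountMinSketch cms = new CountMinSketch(4, 256);
        for (int i = 0; i < 100; i++) cms.add("user_42");
        cms.add("user_7");
        System.out.println(cms.estimate("user_42")); // at least 100
    }
}
```

The appeal for a stats layer is that the structure's size is fixed regardless of data volume, and two sketches can be merged by summing their tables, which fits distributed aggregation across many workers.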
- MS degree, or BS plus 5 years of experience
- Strong work history and/or educational background
- Skilled in either Java or C++
- Apache NiFi, Unix/Linux, Java, and Big Data experience is mandatory
- In-depth knowledge of Algorithms, Data Structures, and Performance Optimization Techniques
- A passion for working with large distributed data and systems
- An interest in working on database internals and query optimization