Senior Hadoop Engineer
We have a definitive vision of how electronic payment systems will look in the future, and we have the knowledge, scale, and resources to deliver it. We've developed a suite of world-class Analytics and Fraud Prevention products and services built on modern technologies such as containerization and Hadoop. As a Hadoop/Big Data Engineer, you can help us drive payments at the speed of change.
This role will have two phases:
- The immediate need is for a strong Hadoop presence to help us operationalize the clusters in both the development and production environments.
- Longer term, the job will evolve into an engineering role within the data science organization, liaising between the architects, the data science team, and the operations team as we define and deploy models and services for fraud and related analytics.
Essential Duties and Responsibilities:
- Manages and participates in day-to-day operational work across the Hadoop clusters.
- Works closely with hosted operations colleagues to define operational best practices for the UAT and production Hadoop clusters.
- Participates in project planning and reviews as they pertain to Hadoop and the Hadoop clusters.
- Develops software applications using Hadoop and other big data technologies.
Required Qualifications:
- Bachelor's degree in Computer Science or a related field, or equivalent experience.
- Previous experience administering Hadoop clusters.
- 2 years of Java development experience.
- 5 years of related work experience, including experience with systems testing and system requirements processes (planning, elicitation, analysis, and management).
- 2 or more years of experience working with Hadoop in a big data environment.
Highly Desired Skills:
- Experience with the Hortonworks Data Platform (HDP).
- Experience with Spark, HBase, Hive, NiFi, and Phoenix.
- Experience with Cassandra, Solr, and Kafka.
- Experience with agile software development methodologies (e.g., Scrum).
- Familiarity with containerization technologies (e.g., Docker, Kubernetes).
- Experience with machine learning using Hadoop or other frameworks.
- Experience working with Kerberized Hadoop clusters.
TCM is an EEO/Vets/Disabled Employer.