Lead Platform Engineer (Hadoop)

Database, Disaster Recovery, Foundation, Hadoop, Management, Security, Solr
Full Time
Telecommuting not available
Travel not required

Job Description

Discover. A more rewarding way to work.

At Employer, you'll find yourself in the company of some of the industry's smartest and most reliable professionals, at a company that rewards dedication, values innovation, and supports growth.

Thrive in an environment that promotes teamwork and shared success. Build on a foundation of mutual respect. Join the company that understands rewarding careers like no other, with this exceptional opportunity:

We are seeking bright, talented, and driven engineers to join a team of passionate and innovative technologists. In this role, you will gain hands-on engineering and administration experience on Discover's next-generation platforms supporting the most critical payments applications for all Discover network brands.

Responsibilities:

- Contribute as a member of a high-performing engineering and administration team responsible for critical Hadoop application clusters
- Provide technical expertise to design efficient engineering solutions for next-generation platforms built on the following technologies: Kafka, Storm, Spark, Solr, ZooKeeper, NiFi, HBase, HDFS, Hive, YARN, Ranger, Knox, Ambari, and Kerberos
- Big Data cluster platform provisioning and administration
- Big Data cluster resiliency and performance engineering and administration
- Big Data cluster security implementations
- Big Data engineering and administration for high availability, replication, and disaster recovery solutions
- Big Data database engineering and administration
- Leverage DevOps techniques and practices, including continuous integration, continuous deployment, and test/build automation, working with key application architects and application developers
- Promote a risk-aware culture; ensure efficient and effective risk and compliance management practices by adhering to required standards and processes
- Serve on the Level 2/3 "go to" team for operational support

Qualifications:

- Bachelor's degree (preferably in Information Technology) or equivalent work experience
- 5+ years working within infrastructure technology
- 4+ years' experience with Hadoop cluster engineering and administration
- Hands-on experience with Kafka, Storm, Spark, Solr, ZooKeeper, NiFi, HBase, HDFS, Hive, YARN, Ranger, Knox, Ambari, and Kerberos
- Experience leveraging DevOps techniques

Department: Technology
Dice Id : 10120548
Position Id : 3625_22783825