- Minimum of 6 years of experience administering Linux/UNIX systems, databases, storage, and other back-end services in bare-metal or virtualized environments
- 2+ years of experience with managing and monitoring large Hadoop clusters
- 2+ years of experience writing software and shell scripts in scripting languages such as Perl, Python, or Ruby
- 2+ years of experience with Hadoop Distributed File System (HDFS)
- Hands-on experience with Apache Hadoop ecosystem components such as Pig, Hive, Sqoop, Flume, and MapReduce/YARN
- Good understanding of the core database concepts
- Experience with distributed, scalable databases such as Cassandra, MongoDB, or HBase
- Hands-on experience with one or more ETL/ELT tools
- Established experience with automated, elastic scaling of cloud services, and with automated deployment and remediation
- High level of ownership and accountability
- Any Pivotal Hadoop/Greenplum/GemFire experience is a plus
***Please send resumes to Shanthi at email@example.com