Sr. Hadoop Admin
Client: Direct Client
Location: San Ramon, CA
Duration: 12-month contract
Rate: Open/Depends upon experience
Role / Summary / Purpose
The person in this position will be responsible for the build-out, day-to-day management, and support of Big Data clusters based on Hadoop and other NoSQL technologies. The candidate should have deep expertise in system and data administration for complex, multi-system, multi-platform networks and UNIX/Linux-based Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) offerings. The candidate must be able to leverage experience diagnosing network performance shortfalls.
In this role, you will:
Work collaboratively with different teams to manage the build-out and support, including design, capacity planning, cluster setup, performance tuning, and monitoring
Administer Hadoop ecosystem components such as HDFS, MapReduce, HBase, Pig, Hadoop Streaming, Sqoop, Spark/Shark, and Hive
Install, administer, and support Linux operating systems and hardware in an enterprise environment
Apply typical system administration and programming skills such as storage capacity management and performance tuning
Set up, configure, and manage security for Hadoop clusters
Automate cluster node provisioning and repetitive tasks, and maintain the Hadoop stack support runbook
Own cluster availability and provide 24x7 on-call support
Support development and production deployments
Administer and diagnose databases, storage, and other back-end services in fully virtualized environments
Ensure scalability, high availability, fault tolerance, and elasticity
Administer, troubleshoot, and maintain ETL/ELT processes
Implement system-wide monitoring, alerting, and automated recovery
Participate in an Agile SDLC with various development teams
Champion best practices for Linux administration and security in the delivery of cloud services
Qualifications:
Bachelor's or Master's degree in Computer Science, Information Systems, or a STEM discipline, with a minimum of 6 years of programming and system administration experience
Minimum of 6 years of experience administering Linux/UNIX systems, databases, storage, and other back-end services in bare-metal or virtualized environments
Legal authorization to work in the U.S. is required. We will not sponsor individuals for employment visas, now or in the future, for this job opening
Ability and willingness to work out of an office located in San Ramon, CA
Ability and willingness to travel, as required
3+ years of experience with managing and monitoring large Hadoop clusters
3+ years of experience with writing software/shell scripts using scripting languages including Perl, Python, or Ruby.
3+ years of experience with Hadoop Distributed File System (HDFS)
Hands-on experience with Apache Hadoop ecosystem components such as Pig, Hive, Sqoop, Flume, and MapReduce/YARN
Good understanding of core database concepts
Experience with distributed, scalable databases such as Cassandra, MongoDB, and HBase
Hands-on experience with one or more ETL/ELT tools
Established experience with automated, elastic scaling of cloud services, automated deployment, and remediation
High level of ownership and accountability
Any Pivotal Hadoop/Greenplum/GemFire experience is a plus
Charudatta Pawar | Xoriant Connect
+1-408-550-1268 | E-Mail: email@example.com
Address: 1248 Reamwood Ave, Sunnyvale, CA 94089
The best way to reach me is through email.