Need Hadoop Admin - San Jose, CA

  • San Jose, CA
  • Posted 5 hours ago | Updated 5 hours ago

Overview

On Site
Depends on Experience
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 26 Month(s)
Able to Provide Sponsorship

Skills

Hadoop
EMR
EKS
Prometheus
Grafana
Splunk
MapReduce
Hive
Pig
Spark
Kafka
HBase
Kerberos
Knox
SQL
NoSQL

Job Details

Role: Hadoop Administrator
Location: San Jose, CA

Job Description

Responsibilities
Responsible for implementation and ongoing administration of Hadoop infrastructure.
Responsible for cluster maintenance, troubleshooting, monitoring, and following proper backup & recovery strategies.
Provisioning and managing the life cycle of multiple clusters, such as Amazon EMR and EKS.
Infrastructure monitoring, logging & alerting with Prometheus, Grafana, and Splunk.
Performance tuning of Hadoop clusters and Hadoop workloads, including capacity planning at the application/queue level.
Responsible for memory management, queue allocation, and workload distribution in Hadoop/Cloudera environments.
Should be able to scale clusters in production and have experience with 18/5 or 24/5 production environments.
Monitor Hadoop cluster connectivity and security, including file system (HDFS) management and monitoring.
Investigate and analyze new technical possibilities, tools, and techniques that reduce complexity, create more efficient and productive delivery processes, or deliver better technical solutions that increase business value.
Fix issues, perform root cause analysis (RCA), and suggest solutions for infrastructure/service components.
Responsible for meeting Service Level Agreement (SLA) targets and collaboratively ensuring team targets are met.
Ensure all changes to production systems are planned and approved in accordance with the Change Management process.
Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
Maintain central dashboards for all system, data, utilization, and availability metrics.
Ideal Candidate Profile

Experience: 6-12 years of total experience, with at least 3 years of hands-on work in developing, maintaining, optimizing, and resolving issues in Hadoop clusters supporting business users.
Operating Systems: Experience in Linux/Unix OS services, administration, shell, and awk scripting.
Programming: Strong knowledge of at least one programming language (Python, Scala, Java, or R), with debugging skills.
Hadoop Ecosystem: Experience with Hadoop components such as MapReduce, Hive, Pig, Spark, Kafka, HBase, HDFS, HCatalog, ZooKeeper, and Oozie/Airflow.
Security: Experience in Hadoop security, including Kerberos, Knox, and TLS.
Databases: Hands-on experience with SQL and NoSQL databases (HBase), including performance optimization.
Tools & Automation: Experience in tool integration, automation, and configuration management using Git and Jira.
Soft Skills: Excellent oral and written communication, presentation skills, and strong analytical and problem-solving abilities.

Regards,

Radiantze Inc
