Overview
Hybrid
$40 - $60
Contract - Independent
Contract - W2
Skills
Hadoop
HDFS
Job Details
Hadoop Admin profiles needed; they will work out of the Whippany, NJ location.
- Number of positions: 2
- The roles work in shifts, 12:30 PM to 9:30 PM EST, and at times will require weekend work.
- Overall, the schedule is 5 days per week; anyone working a weekend day takes a weekday off in its place.
- The role requires coming into the Whippany, NJ office at least 2 days per week.
IBM Rate: $55/hr (W2)
A skilled and experienced Hadoop administrator manages and maintains open-source Hadoop clusters; Hadoop is a distributed computing framework for processing large datasets. The work involves tasks such as installing, configuring, monitoring, and troubleshooting Hadoop clusters and related resources, ensuring the Hadoop ecosystem operates smoothly, efficiently, and securely.
Skills Required:
- Proven hands-on experience with Hadoop ecosystem (HDFS, YARN, MapReduce, HBase)
- Strong understanding of open-source Hadoop distributions
- Knowledge of open-source configuration management tools (e.g., Puppet, Chef).
- Ability to independently conduct pre- and post-change validations, log analysis, volume/block checks, and skew troubleshooting (see the sketch after this list)
- Experience working in a change-controlled production environment
- Excellent communication skills to interface across infrastructure and data teams
- Strong Linux skills.
- Experience with Hadoop and related technologies
- Quick learner with the ability to navigate complex environments
- Understanding of networking concepts.
- Troubleshooting and problem-solving skills.
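To make the validation and skew-troubleshooting expectations concrete, here is a minimal, hypothetical sketch in Python. It shells out to `hdfs dfsadmin -report`, a standard HDFS command, and flags DataNodes whose usage deviates from the cluster mean; the threshold value and parsing details are illustrative assumptions, not requirements from this posting.

```python
# Hypothetical pre/post change validation sketch. Assumes the `hdfs` CLI
# is on PATH; the skew threshold below is an illustrative assumption.
import re
import subprocess

SKEW_THRESHOLD_PCT = 15.0  # assumed: max allowed deviation from mean DFS usage


def datanode_usage() -> dict[str, float]:
    """Parse `hdfs dfsadmin -report` into {hostname: DFS Used%}."""
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    usage, host = {}, None
    for line in report.splitlines():
        if line.startswith("Hostname:"):
            host = line.split(":", 1)[1].strip()
        m = re.match(r"DFS Used%:\s*([\d.]+)%", line.strip())
        if m and host:  # skip the cluster-wide summary before the first host
            usage[host] = float(m.group(1))
    return usage


def skew_report(usage: dict[str, float]) -> list[str]:
    """Flag DataNodes whose usage deviates noticeably from the mean."""
    if not usage:
        return ["no live DataNodes reported"]
    mean = sum(usage.values()) / len(usage)
    return [
        f"{host}: {pct:.1f}% (cluster mean {mean:.1f}%)"
        for host, pct in usage.items()
        if abs(pct - mean) > SKEW_THRESHOLD_PCT
    ]


if __name__ == "__main__":
    skewed = skew_report(datanode_usage())
    print("\n".join(skewed) if skewed else "No significant skew detected.")
```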
Core Responsibilities:
- Removal of 21-30 servers from the cluster in a controlled manner to avoid performance impact and data loss (see the decommissioning sketch after this list).
- Comprehensive Hadoop-level pre- and post-checks, including cluster health validation, HDFS block/volume usage and replication checks, skew analysis, log monitoring, and alerting.
- Coordination with Data Center Operations teams for physical disk installation in server slots, and with Unix System Admins for OS-level configuration, disk mounting, and server rebuilds.
- Post rebuilds: the Hadoop team will configure Hadoop services on the rebuilt nodes and reintegrate them back into the cluster.
- This cycle will continue on a weekly cadence.
- Cluster Management: Installing, configuring, and managing Hadoop clusters, including adding and removing nodes, monitoring performance, and troubleshooting issues.
- HDFS Management: Managing the Hadoop Distributed File System (HDFS), including allocating storage quotas, balancing data, and managing snapshots (see the quota/snapshot sketch after this list).
- Security: Implementing and maintaining security measures, such as Kerberos authentication and access control lists.
- Performance Tuning: Optimizing Hadoop clusters for performance and scalability.
- Resource Management: Managing other components in the Hadoop ecosystem, such as Hive, Pig, and HBase.
- Data Ingestion: Utilizing tools like Sqoop and Flume to move data into and out of Hadoop.
- Monitoring and Alerting: Setting up monitoring systems to track cluster health and performance, and configuring alerts for potential issues (see the JMX polling sketch after this list).
- Collaboration: Working with other teams, such as data engineers and developers, to ensure seamless integration and operation of Hadoop.
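As a concrete illustration of the controlled server-removal item at the top of this list, the following hedged Python sketch walks the standard HDFS decommissioning flow: append the hosts to the NameNode's exclude file, refresh the node lists, and poll until re-replication completes. The exclude-file path and hostnames are assumptions for illustration; `hdfs dfsadmin -refreshNodes` and `hdfs dfsadmin -report -decommissioning` are standard HDFS commands.

```python
# Hedged sketch of a controlled DataNode decommission. EXCLUDE_FILE must
# match the cluster's dfs.hosts.exclude setting; path and hosts are assumed.
import subprocess
import time

EXCLUDE_FILE = "/etc/hadoop/conf/dfs.exclude"  # assumed exclude-file path


def decommission(hosts: list[str], poll_secs: int = 300) -> None:
    # 1. Register the nodes in the exclude file.
    with open(EXCLUDE_FILE, "a") as f:
        f.writelines(h + "\n" for h in hosts)
    # 2. Ask the NameNode to re-read its include/exclude lists; HDFS then
    #    re-replicates each node's blocks before marking it decommissioned,
    #    which is what prevents data loss during removal.
    subprocess.run(["hdfs", "dfsadmin", "-refreshNodes"], check=True)
    # 3. Poll until none of the hosts still shows up as decommissioning.
    while True:
        report = subprocess.run(
            ["hdfs", "dfsadmin", "-report", "-decommissioning"],
            capture_output=True, text=True, check=True,
        ).stdout
        if not any(h in report for h in hosts):
            return
        time.sleep(poll_secs)


decommission(["dn21.example.com", "dn22.example.com"])  # hypothetical hosts
```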
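For the HDFS management item above (quotas and snapshots), here is a short sketch of the routine commands, wrapped in Python for consistency with the other examples. The paths and quota value are hypothetical; `-setSpaceQuota`, `-allowSnapshot`, and `-createSnapshot` are standard HDFS tooling.

```python
# Hypothetical HDFS housekeeping sketch: set a space quota, then enable
# and take a snapshot. Paths and the quota value are illustrative.
import subprocess


def hdfs(*args: str) -> None:
    subprocess.run(["hdfs", *args], check=True)


# Cap a project directory at 10 TB of raw storage. Space quotas count
# replicated bytes, so at replication 3 this is roughly 3.3 TB of data.
hdfs("dfsadmin", "-setSpaceQuota", "10t", "/data/projects/example")

# Snapshots must be explicitly allowed on a directory before first use.
hdfs("dfsadmin", "-allowSnapshot", "/data/projects/example")
hdfs("dfs", "-createSnapshot", "/data/projects/example", "pre-change")
```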
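For the monitoring and alerting item, here is a minimal sketch that polls the NameNode's JMX servlet for FSNamesystem health metrics. The hostname, port (9870 is the Hadoop 3.x default), and thresholds are assumptions, and the print statement stands in for whatever alerting system the team actually uses.

```python
# Hedged monitoring sketch: poll the NameNode JMX servlet and flag issues.
# NAMENODE_URL and the capacity floor are assumptions for illustration.
import json
import urllib.request

NAMENODE_URL = (
    "http://namenode.example.com:9870/jmx"
    "?qry=Hadoop:service=NameNode,name=FSNamesystem"
)
MIN_FREE_PCT = 20.0  # assumed free-space floor before alerting


def check_fsnamesystem() -> list[str]:
    with urllib.request.urlopen(NAMENODE_URL, timeout=10) as resp:
        bean = json.load(resp)["beans"][0]
    alerts = []
    if bean["MissingBlocks"] > 0:
        alerts.append(f"missing blocks: {bean['MissingBlocks']}")
    if bean["UnderReplicatedBlocks"] > 0:
        alerts.append(f"under-replicated blocks: {bean['UnderReplicatedBlocks']}")
    free_pct = 100.0 * bean["CapacityRemaining"] / bean["CapacityTotal"]
    if free_pct < MIN_FREE_PCT:
        alerts.append(f"low HDFS capacity: {free_pct:.1f}% free")
    return alerts


for alert in check_fsnamesystem():
    print("ALERT:", alert)  # stand-in for a real paging/alerting hook
```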