DTS is looking for a Hadoop Tech Lead for a long-term contract position with our direct client in Charlotte, NC.
No third-party C2C allowed. Candidates must work directly with us on our W-2.
- The Big Data Systems Engineer/Hadoop Systems Administrator is responsible for day-to-day cluster setup, management, support, and testing.
- In-depth knowledge of managing the Big Data ecosystem, including the following technologies (HDFS, Spark, Impala, Hive, Flume, CDSW, etc.) and Cloudera Manager.
- 4-10 years of experience administering and scaling Hadoop clusters.
- Builds next-generation solutions incorporating emerging technologies, including machine learning, AI, data science, and hybrid cloud computing.
Principal Accountabilities:
- Implements infrastructure technology requested by business units.
- Engineers technology solutions supporting the infrastructure environments.
- Identifies, documents, manages, and resolves issues in a big data environment.
- Provides root cause analysis.
- Prioritizes, defines, assigns, validates, and closes routine system requests.
- Strong knowledge of Splunk tool integration.
- Provides technical expertise and advice to data scientists, data engineers, Agile squads and other departments.
- Provides setup and support for Disaster Recovery environment.
- Develops and maintains system support procedures and documentation.
- Creates and manages scripts for Linux management.
- Monitors and applies patches as required, following best practices, across all platform ecosystem components and applications.
- After-hours support is required for this position.
- Other duties as assigned.
Minimum Requirements:
- Knowledge of standards in IT engineering and infrastructure
- Ability to interpret new directions in technology in a way that creates greater clarity for team members
- Understanding of enterprise platforms and operating systems
- Ability to manage the operations, availability, and performance of a data platform, including completing risk analyses and assessments
- Ability to mentor, supervise, and direct the work of lower-level staff
- Command of UNIX/Linux operating systems, including RHEL 6/7
- Experience working with Linux in a virtual environment (KVM, Kubernetes/Docker or VMware preferred)
- Volume management experience and experience utilizing SAN storage
- Scripting experience (Python, Perl, KSH, etc.)
- Experience supporting web farms with Apache and/or Tomcat
- Experience supporting Cloudera Hadoop, Flume, HDFS, Hive, and Impala
- Experience with SSL termination and SSL certificates
- Server and application monitoring
- Understanding of a multi-tier infrastructure design
- Ability to work well within an Agile team and corporate environment
- Ability to analyze and troubleshoot problems effectively and provide solutions
- Ability to respond to 24x7 production support escalations when required
- Experience with Change Management, Incident Management, and audit compliance for SOX applications
- Experience supporting Disaster Recovery
- Strong knowledge of Agile/DevOps (CI/CD) SDLC processes
Please forward your resume to email@example.com
Contact Ajay @ 248-243-1381