Hadoop Engineer

  • Charlotte, NC
  • Posted 15 hours ago | Updated 8 hours ago

Overview

On Site
$65.00 - $70.00 per hour
Full Time

Skills

HDFS
Hadoop
Cloudera
Unix
SQL

Job Details


Client: Bank

Job Title: Hadoop Engineer

Location: Chicago, IL and Charlotte, NC (Onsite)

Duration: 12 Months (Extension/Conversion will be based on performance)

Pay Range: $65 - $70/hr

Benefits: The Company offers the following benefits for this position, subject to applicable eligibility requirements: medical insurance, dental insurance, vision insurance, 401(k) retirement plan, life insurance, long-term disability insurance, short-term disability insurance, paid parking/public transportation, paid time off, paid sick and safe time, paid vacation time, paid parental leave, and paid holidays (as applicable)

Mission:


Seeking a Hadoop Engineer (SME) to support our NextGen Platforms built around big data technologies, including Hadoop, Spark, Kafka, Impala, HBase, Docker containers, and Ansible, on a project focused on platform modernization and consolidation.


Day-to-Day:



  • Work on complex, major, or highly visible tasks in support of multiple projects requiring multiple areas of expertise.

  • Provide subject matter expertise in managing Hadoop and Data Science Platform operations, focusing on Cloudera Hadoop, Jupyter Notebook, OpenShift, and Docker container cluster management and administration.

  • Integrate solutions with other applications and platforms outside the framework.

  • Manage day-to-day operations for platforms built on Hadoop, Spark, Kafka, Kubernetes/OpenShift, Docker/Podman, and Jupyter Notebook.

  • Support and maintain AI/ML platforms such as Cloudera, DataRobot, C3 AI, Panopticon, Talend, Trifacta, Selerity, ELK, KPMG Ignite, and others.

  • Automate platform tasks using tools like Ansible, shell scripting, and Python.
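
As an illustration of the automation work described above, here is a minimal sketch of a health check that could be scheduled via Ansible or cron. It assumes the standard text layout of `hdfs dfsadmin -report` output (lines such as "Live datanodes (3):"); the function and sample data are hypothetical, for illustration only.

```python
# Hypothetical sketch: parse `hdfs dfsadmin -report` output to flag dead
# DataNodes -- the kind of routine platform check typically automated with
# Ansible, cron, or shell wrappers on a Cloudera cluster.
import re

def count_datanodes(report: str) -> dict:
    """Extract live/dead DataNode counts from dfsadmin report text."""
    counts = {}
    for state in ("Live", "Dead"):
        # Matches lines like "Live datanodes (3):" in the report output.
        m = re.search(rf"{state} datanodes \((\d+)\):", report)
        counts[state.lower()] = int(m.group(1)) if m else 0
    return counts

# Sample (abbreviated, assumed) report text for demonstration.
sample = """Configured Capacity: 1099511627776 (1 TB)
Live datanodes (3):
Dead datanodes (1):
"""
print(count_datanodes(sample))  # {'live': 3, 'dead': 1}
```

In practice the report text would come from running the command (e.g. via `subprocess.run(["hdfs", "dfsadmin", "-report"], ...)`) and a nonzero dead count would trigger an alert.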


Must Haves:



  • Strong knowledge of Hadoop Architecture, HDFS, Hadoop Cluster, and Hadoop Administrator's role

  • In-depth knowledge of fully integrated Active Directory (AD)/Kerberos authentication.

  • Experience setting up optimal cluster configurations

  • Expert-level knowledge of Cloudera Hadoop components such as HDFS, Sentry, HBase, Kafka, Impala, SOLR, Hue, Spark, Hive, YARN, Zookeeper, and Postgres.

  • Hands-on experience analyzing various Hadoop log files, compression, encoding, and file formats.

  • Strong proficiency with Unix shell and SQL scripting

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.