Big Data, Hadoop/Data Warehouse Specialist

Big data, Apache Hadoop, Apache Kafka, ETL, Data warehouse, Oracle, HDFS, Apache Hive, Apache Sqoop, Apache ZooKeeper
Contract W2, Contract Corp-To-Corp, 6 Months
Depends on Experience
Travel not required

Job Description

Interview: Video Interview

 

Skills Required: 

  • 6+ years of experience with Big Data and Hadoop on data warehousing or data integration projects.
  • Analysis, design, development, support, and enhancement of ETL/ELT in a data warehouse environment using Cloudera big data technologies (Hadoop, MapReduce, Sqoop, PySpark, Spark, HDFS, Hive, Impala, StreamSets, Kudu, Oozie, Hue, Kafka, YARN, Python, Flume, ZooKeeper, Sentry, Cloudera Navigator), along with Oracle SQL/PL-SQL, Unix commands, and shell scripting.
  • Strong development experience creating Sqoop scripts, PySpark programs, HDFS commands, HDFS file formats (Parquet, Avro, ORC, etc.), StreamSets pipelines, job schedules, Hive/Impala queries, and Unix shell scripts.
  • Experience writing Hadoop/Hive/Impala scripts to gather table statistics after data loads.
  • Strong SQL experience with both Oracle and Hadoop (Hive/Impala, etc.).
  • Experience writing complex SQL queries and tuning them based on Hadoop/Hive/Impala explain plan results.
  • Proven ability to write high quality code.
  • 6+ years of experience building data sets, and familiarity with protected health information (PHI) and personally identifiable information (PII) data.
  • Expertise implementing complex ETL/ELT logic.
  • Ability to develop and enforce a strong data reconciliation process.
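
To illustrate the post-load statistics requirement above: after loading a table, Hive uses `ANALYZE TABLE ... COMPUTE STATISTICS` while Impala uses `COMPUTE STATS`. A minimal sketch of generating those statements is below; the table names and the helper itself are hypothetical, not part of this role's codebase.

```python
def stats_statements(table, engine="hive"):
    """Return the post-load statistics statements for one table.

    Hive and Impala use different syntax: Hive's ANALYZE TABLE
    (table-level plus column-level) vs Impala's single COMPUTE STATS.
    """
    if engine == "hive":
        return [
            f"ANALYZE TABLE {table} COMPUTE STATISTICS",
            f"ANALYZE TABLE {table} COMPUTE STATISTICS FOR COLUMNS",
        ]
    if engine == "impala":
        return [f"COMPUTE STATS {table}"]
    raise ValueError(f"unknown engine: {engine}")

# Hypothetical warehouse tables loaded by a nightly job:
for table in ("dw.orders", "dw.customers"):
    for stmt in stats_statements(table, engine="impala"):
        print(stmt)  # prints COMPUTE STATS dw.orders, then dw.customers
```

In practice these statements would be scheduled (e.g. via Oozie or a shell wrapper) to run immediately after each load, so the Hive/Impala optimizer has current row and column statistics when building explain plans.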

Dice Id : 10121591
Position Id : RY78373
Originally Posted : 2 months ago