Overview
Hybrid
Depends on Experience
Full Time
No Travel Required
Unable to Provide Sponsorship
Skills
Apache HBase
Apache Hadoop
Apache Hive
Apache Kafka
Apache Spark
Apache Sqoop
Batch Processing
Big Data
CA Workload Automation AE
Cloudera
Cloudera Impala
Computer Science
Continuous Delivery
Continuous Integration
Data Engineering
Data Management
Data Processing
Database
Elasticsearch
Extract, Transform, Load (ETL)
Git
HDFS
Internet
J2EE
Java
Jenkins
Job Scheduling
NoSQL
Oracle Linux
Performance Analysis
Programming Languages
SQL
Scala
Software Development
Software Engineering
Job Details
Hello,
My name is Sanket Parate, and I work as a Technical Recruiter for K-Tek Resourcing.
We are searching for professionals matching the business requirements below for one of our clients. Please read through the requirements and connect with us if they suit your profile.
Job Title: Big Data Developer
Location: Jersey City NJ (Onsite/Hybrid)
Type of Job: Full Time
Job Description:
Bachelor's or Master's Degree in Computer Science, Engineering, Software Engineering, or a relevant field.
Around 8-10 years of software development experience building large-scale distributed data processing systems/applications, data engineering solutions, or large-scale internet systems.
At least 4 years of experience developing/leading Big Data solutions at enterprise scale, with at least one end-to-end implementation.
Strong experience in programming languages: Java/J2EE/Scala.
Good experience with Spark/Hadoop/HDFS architecture, YARN, Confluent Kafka, HBase, Hive, Impala, and NoSQL databases.
Experience with batch processing and AutoSys job scheduling and monitoring.
Performance analysis, troubleshooting, and resolution (including familiarity with and investigation of Cloudera/Hadoop logs).
Work with Cloudera on open issues that would result in cluster configuration changes, and implement changes as needed.
Strong experience with databases such as SQL, Hive, Elasticsearch, HBase, etc.
Knowledge of Hadoop security, data management, and governance.
Primary Skills: Java/Scala, ETL, Spark, Hadoop, Hive, Impala, Sqoop, HBase, Confluent Kafka, Oracle, Linux, Git, Jenkins CI/CD, etc.
Thanks and Regards,
Sanket Parate
Talent Acquisition Specialist, KTEK Resourcing
Email -
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.