Overview
Hybrid
Depends on Experience
Full Time
Skills
Hadoop
HDFS
Ozone
Apache Hadoop
Hive
Apache Spark
Big Data
Apache Hive
Data Warehouse
ELT
Extract, Transform, Load (ETL)
Java
JIRA
Python
Scala
Shell Scripting
Unix
Microsoft Azure
PL/SQL
Job Details
This is a full-time position, and we are seeking candidates with more than 11 years of IT experience. Only full-time applicants will be considered for this role.
Job Description
We are looking for a Hadoop Engineer with expertise in HDFS/Ozone, Hive, Spark (Python/Scala/Java), Spark UI, and Unix shell scripting. The candidate should have a strong understanding of Data Warehouse, Data Lake, and Lakehouse ETL/ELT concepts.
Role & Responsibilities:
- Support the Big Data ETL platform built on top of Hadoop
Required Skills:
- Hadoop (HDFS/Ozone, Hive), Spark (Python/Scala/Java), Spark UI, and Unix shell scripting
- Understanding of Data Warehouse, Data Lake, and Lakehouse ETL/ELT concepts, including data quality, governance, and performance tuning.
- Strong analytical and problem-solving skills.
Desired Skills:
- Familiarity with ITSM tools such as Remedy and JIRA; understanding of Work Order (WO), Incident (INC), Problem (PBI), and Change (CRQ) management.
- Knowledge of Python, Jupyter Notebooks, NiFi, NiFi Registry, Oracle (SQL, PL/SQL), C, DMX/Syncsort, CI/CD (Git, Jenkins/Chef), Airflow, Kafka/Axon Streaming
- Exposure to cloud platforms (Azure, AWS, or Google Cloud Platform) and tools like Databricks is a plus.