Overview
Work Arrangement: On Site
Compensation: Depends on Experience
Employment Type: Contract - W2
Skills
Scala
PySpark
Python
Apache Hadoop
Java
Apache Spark
Amazon Web Services
Job Details
Role: Data Engineer
Location: New Jersey, USA
Note: W2
Needs:
Hadoop
AWS
Spark
PySpark
Java
Python
Scala
Job Description:
Key Skills:
Big Data Technologies: Hadoop, Spark, HDFS, Hive, Cloudera, Hortonworks
Cloud Platforms: AWS (Glue, Lambda, Redshift, S3, CloudWatch)
ETL/ELT Tools: AWS Glue, Python, PySpark, Databricks
Programming Languages: Python, Java, Scala, SQL, HiveQL
Data Integration & Migration: Experience with Hadoop, Kafka, data lakes, and real-time streaming
Data Modeling & Transformation: Dimensional data models, structured/unstructured data processing
CI/CD & Automation: Jenkins, Git, Autosys, Airflow