Overview
Remote
Depends on Experience
Accepts corp to corp applications
Contract - W2
Contract - Independent
Contract - 12 Month(s)
Skills
Amazon Kinesis
AWS Lambda
Apache Hadoop
Apache Hive
Apache Kafka
Apache Spark
Big Data
Cloudera Impala
Electronic Health Record (EHR)
HDFS
PySpark
Python
SQL
Scala
Streaming
Writing
Job Details
Sr Big Data Engineer with Hadoop, Scala, Spark, Python, Kafka
Required Skills:
Experience with Hadoop ecosystem components: Hive, PySpark, HDFS, Spark, Scala, and streaming (Kinesis, Kafka)
Strong experience in PySpark and Python development
Proficient in writing Hive and Impala queries
Ability to write complex SQL queries
Experience with AWS Lambda, EMR, clusters, partitions, and data pipelines