Overview
On Site
Depends on Experience
Contract - W2
Skills
Apache Hadoop
Apache Hive
Big Data
Cloud Computing
Cloudera
GitHub
MongoDB
NoSQL
PySpark
Python
SQL
Scala
Shell Scripting
Streaming
HDFS
Microsoft Azure
Data Modeling
Google Cloud Platform
Unix
Microsoft SQL Server
Continuous Delivery
Apache Spark
Apache Kafka
Continuous Integration
Data Warehouse
Good Clinical Practice
Job Details
Job Role: Big Data Engineer
Location: Charlotte, NC; Columbus, OH; or Dallas, TX (Hybrid)
Duration: 12-Month Contract
Responsibilities:
- Apache Spark processing engine.
- Big data tools/technologies/streaming (Hive, Kafka).
- Data Modeling.
- Experience analyzing data to discover opportunities and address gaps.
- Experience working with a cloud or on-prem Big Data platform (e.g., Google BigQuery, Azure Data Warehouse, or similar).
- Programming experience in Python.
Skills:
- Hadoop, Hive, Spark, Cloudera, SQL, NoSQL, Python, Python frameworks, CI/CD, Google Cloud Platform.
- Experience with Hadoop components, including HDFS, Spark, Hive, Scala, Python, PySpark, and MongoDB.
- Experience with SQL and SQL Server, UNIX shell scripting, and GitHub.