W2-Hybrid :: Big Data Engineer / Developer

Overview

Hybrid
Depends on Experience
Full Time
50% Travel

Skills

Big Data
Spark
Scala
GCP
Azure
AWS
Azure Data Lake Storage

Job Details

Big Data Engineer / Developer

6+ Months

Charlotte, NC (Hybrid)

**Core Requirements**:

- At least 6 years of experience with Big Data, Spark, Scala, and cloud technologies
- Proven expertise with Spark and Scala, and familiarity with the major cloud providers (AWS, Azure, Google Cloud Platform) for Big Data workloads
- Advanced knowledge of and hands-on operational skills with Azure Data Lake Storage (ADLS)
- Demonstrated proficiency with Databricks for data integration and pipeline creation (a brief illustrative sketch follows this list)
- Prior hands-on experience with Hadoop ecosystem components
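
Illustrative only: a minimal Spark/Scala sketch of the kind of pipeline work described above, reading raw Parquet data from Azure Data Lake Storage (ADLS Gen2) and writing a daily rollup. The storage account, container, paths, and column names are placeholders, not details of this employer's environment.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyEventRollup {
  def main(args: Array[String]): Unit = {
    // On Databricks a SparkSession already exists; getOrCreate() reuses it.
    val spark = SparkSession.builder()
      .appName("daily-event-rollup")
      .getOrCreate()

    // Hypothetical ADLS Gen2 locations (account, containers, and paths are placeholders).
    val source = "abfss://raw@examplestorageacct.dfs.core.windows.net/events/"
    val target = "abfss://curated@examplestorageacct.dfs.core.windows.net/daily_rollup/"

    // Read raw Parquet events, aggregate per day and event type, and write the result.
    val events = spark.read.parquet(source)

    val rollup = events
      .filter(col("event_ts").isNotNull)
      .withColumn("event_date", to_date(col("event_ts")))
      .groupBy("event_date", "event_type")
      .agg(count(lit(1)).as("event_count"))

    rollup.write.mode("overwrite").partitionBy("event_date").parquet(target)

    spark.stop()
  }
}
```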

**Desired Technical Experience**:

- Strong background in backend software development; demonstrated Java experience, particularly within the Spring framework ecosystem, is preferred

**Working Approach**:

- Experience working in Agile project delivery environments, with the ability to adapt to flexible methodologies and collaborative team dynamics

**Additional Skills**:

- Key competency: ability to work with Spark and Scala
- Familiarity with major cloud platforms for data processing and storage (AWS, Azure, Google Cloud Platform)
- In-depth experience with Azure Data Lake Storage (ADLS)
- Practical experience managing and developing in Databricks, including designing efficient data pipelines
- Prior use of the Hadoop framework
- Backend development skills, primarily in Java with the Spring framework
- Experience with Agile project delivery
