Databricks Developer with Apache Spark

Overview

On Site
Accepts corp to corp applications
Contract - Independent
Contract - W2
Contract - 13 day(s)

Skills

AWS
Python
Azure
SQL
Scala
Apache Spark
PySpark
Databricks
data pipelines
Salesforce Integration
SAP Integration
ETL process implementation
Medallion architecture
data warehousing concepts
Delta Sharing

Job Details

Role: Databricks Developer with Apache Spark

Location: Charlotte, NC (Day 1 Onsite)

Job Type: Contract

Description:

  • The candidate will be responsible for designing, implementing, and optimizing data solutions using the Databricks platform.

Key Responsibilities:

  • Develop, maintain, and optimize data pipelines and workflows using Databricks.
  • Integrate with third-party applications such as Salesforce, SAP, and external file feeds.
  • Implement ETL processes and ensure data quality and integrity.
  • Design and implement scalable data architectures and workflows.
  • Perform data analysis and generate actionable insights.
  • Monitor and troubleshoot performance issues.

Required Skills and Qualifications:

  • Proven experience with Databricks and Apache Spark.
  • Strong experience in implementing medallion architecture.
  • Strong understanding of data warehousing concepts and ETL processes.
  • Strong experience in Delta Sharing with Snowflake, Fabric OneLake, and AWS.
  • Proficiency in programming languages such as Python, PySpark, Scala, and SQL.
  • Experience with cloud platforms like Azure and AWS.

Preferred Qualifications:

  • Experience with machine learning and data science.
  • Certifications in Databricks or cloud platforms.
  • Knowledge of SQL and NoSQL databases.