Spark Optimization Specialists

Overview

Remote
Depends on Experience
Contract - W2
Contract - 8 Month(s)

Skills

Amazon Web Services
Data Engineering
Databricks
Apache Spark
Google Cloud Platform

Job Details

Total experience of 15+ years required.

4-5 years of strong Databricks experience required.

Deep expertise in Databricks performance optimization.

Strong data engineering skills.

Experience with startup environments and open-source contributions is a major plus.

The candidate should bring a strong blend of consulting and engineering skills.

Overall: strong Databricks skills and robust data engineering experience; we are looking for top-1-percentile engineers (MVP candidates preferred).

Qualifications:

Proven experience with Apache Spark performance tuning and Databricks workload optimization.

Strong knowledge of Spark SQL, DataFrame APIs, and Delta Lake.

Experience in resource tuning, including cluster configuration, autoscaling, and job scheduling.

Proficiency in at least one programming language (e.g., Python, Scala, or SQL).

Experience with cloud platforms (AWS, Azure, or Google Cloud Platform), particularly in relation to Databricks deployments.

Familiarity with monitoring and observability tools for Spark jobs (e.g., Ganglia, Datadog, CloudWatch, Spark UI).

Strong problem-solving skills and the ability to work in a cross-functional team.

Excellent communication and documentation skills.
