Databricks Developer

Overview

Remote
Depends on Experience
Contract - W2
Contract - 24 Month(s)

Skills

Apache Spark
Databricks
Java

Job Details

Key Responsibilities:

  • Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks
  • Implement data processing logic in Java 8+, leveraging functional programming and OOP best practices
  • Optimize Spark jobs for performance, reliability, and cost-efficiency
  • Collaborate with cross-functional teams to gather requirements and deliver data solutions
  • Ensure compliance with data security, privacy, and governance standards
  • Troubleshoot and debug production issues in distributed data environments

Required Qualifications:

  • Bachelor's degree in Computer Science, Information Systems, or a related field
  • 8+ years of professional experience covering the technical skills and responsibilities listed
  • Strong expertise in Java 8 or higher
  • Experience with functional programming (Streams API, Lambdas)
  • Familiarity with object-oriented design patterns and best practices
  • Proficient in Spark Core, Spark SQL, and DataFrame/Dataset APIs
  • Understanding of RDDs and when to use them
  • Experience with Spark Streaming or Structured Streaming
  • Skilled in performance tuning and Spark job optimization
  • Ability to use Spark UI for troubleshooting stages and tasks
  • Familiarity with HDFS, Hive, or HBase
  • Experience integrating with Kafka, S3, or Azure Data Lake
  • Comfort with Parquet, Avro, or ORC file formats
  • Strong understanding of batch and real-time data processing paradigms
  • Experience building ETL pipelines with Spark
  • Proficient in data cleansing, transformation, and enrichment
  • Experience with YARN, Kubernetes, or EMR for Spark deployment
  • Familiarity with CI/CD tools like Jenkins or GitHub Actions
  • Monitoring experience with Grafana, Prometheus, Datadog, or Spark UI logs
  • Proficient in Git
  • Experience with Maven or Gradle
  • Unit testing with JUnit or TestNG
  • Experience with Mockito or similar mocking frameworks
  • Data validation and regression testing for Spark jobs
  • Experience working in Agile/Scrum environments
  • Strong documentation skills (Markdown, Confluence, etc.)
  • Ability to debug and troubleshoot production issues effectively
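As a brief illustration of the Java 8+ functional-programming skills called out above (Streams API, lambdas, data cleansing and de-duplication), a minimal sketch follows; the record shape and cleansing rules are illustrative assumptions, not details taken from this posting:

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch only: the input format (one email per row, possibly
// blank or mixed-case) is an assumption for demonstration purposes.
public class CleanseDemo {

    public static List<String> cleanseEmails(List<String> rawRows) {
        return rawRows.stream()
                .map(String::trim)              // normalize surrounding whitespace
                .filter(row -> !row.isEmpty())  // drop blank rows
                .map(String::toLowerCase)       // canonicalize case
                .distinct()                     // remove duplicates
                .sorted()                       // deterministic output order
                .collect(Collectors.toList());
    }
}
```

The same map/filter/distinct style carries over directly to Spark's Dataset API, where equivalent transformations run distributed across partitions.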

Preferred Qualifications:

  • Experience with Scala or Python in Spark environments
  • Familiarity with Databricks or Google Cloud Dataproc
  • Knowledge of Delta Lake or Apache Iceberg
  • Experience with data modeling and performance design for big data systems