Overview
Remote
Depends on Experience
Contract - Independent
Contract - W2
Contract - 12 Month(s)
Skills
Collaboration
Continuous Delivery
Data Security
Data Structure
Data Validation
Databricks
Continuous Integration
Data Cleansing
Data Lake
Data Modeling
Data Processing
Apache Kafka
Apache Maven
Apache Spark
Big Data
Build Tools
API
Agile
Functional Programming
Git
GitHub
Java
Jenkins
Debugging
Documentation
Electronic Health Record (EHR)
Extract, Transform, Load (ETL)
Amazon S3
Analytics
Apache HTTP Server
Performance Tuning
Apache Hive
Confluence
Kubernetes
Microsoft Azure
OOD
Optimization
Privacy
Python
Real-time
Regression Testing
Regulatory Compliance
SQL
Scala
Scrum
Streaming
TestNG
UI
Unit Testing
Job Details
W2 Only...
Job Title: Databricks Developer with Java Spark
Location: Remote
Duration: Contract
Job Description
Required skills/Level of Experience:
We are seeking a Databricks Developer with deep expertise in Java and Apache Spark, along with hands-on experience working with IRS data systems such as IRMF, BMF, or IMF. The ideal candidate will be responsible for designing, developing, and optimizing big data pipelines and analytics solutions on the Databricks platform. This role requires a strong understanding of distributed data processing, performance tuning, and scalable architecture.
Key Responsibilities:
Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks (see the sketch after this list)
Implement data processing logic in Java 8+, leveraging functional programming and OOP best practices
Integrate with IRS data systems including IRMF, BMF, or IMF
Optimize Spark jobs for performance, reliability, and cost-efficiency
Collaborate with cross-functional teams to gather requirements and deliver data solutions
Ensure compliance with data security, privacy, and governance standards
Troubleshoot and debug production issues in distributed data environments
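To make the responsibilities above concrete, here is a minimal sketch of a batch pipeline written against the public Spark Java API. The paths and column names (taxpayer_extract, tin, tax_year) are hypothetical placeholders for illustration, not actual IRS structures.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class TaxpayerPipeline {
    public static void main(String[] args) {
        // On Databricks a session already exists; getOrCreate() reuses it.
        SparkSession spark = SparkSession.builder()
                .appName("taxpayer-batch-pipeline")
                .getOrCreate();

        // Hypothetical input path -- stands in for a staged extract.
        Dataset<Row> raw = spark.read()
                .option("header", "true")
                .csv("/mnt/raw/taxpayer_extract.csv");

        // Basic cleanse-and-filter step before writing to the curated zone.
        Dataset<Row> curated = raw
                .dropDuplicates("tin")
                .filter(col("tax_year").isNotNull());

        curated.write()
                .mode("overwrite")
                .parquet("/mnt/curated/taxpayer_extract");

        spark.stop();
    }
}

On Databricks this would typically run as a job or notebook task, with cluster sizing and scheduling handled by the platform.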
Required Skills & Qualifications:
Active IRS MBI clearance required. IRS-issued laptops are strongly preferred. Please provide a copy of the candidate's active MBI letter.
Bachelor's degree in Computer Science, Information Systems, or a related field.
8+ years of professional experience demonstrating the technical skills and responsibilities listed below:
IRS Data Systems Experience
Hands-on experience working with IRS IRMF, BMF, or IMF datasets
Understanding of IRS data structures, compliance, and security protocols
Programming Language Proficiency
Strong expertise in Java 8 or higher
Experience with functional programming (Streams API, Lambdas); see the sketch after this list
Familiarity with object-oriented design patterns and best practices
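As a quick illustration of the Streams API and lambdas named above, here is a small, self-contained, Java 8-compatible example; the Filing class and its fields are made up for demonstration.

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamsExample {
    static class Filing {
        final String state;
        final double amountDue;
        Filing(String state, double amountDue) {
            this.state = state;
            this.amountDue = amountDue;
        }
    }

    public static void main(String[] args) {
        List<Filing> filings = Arrays.asList(
                new Filing("VA", 120.0),
                new Filing("MD", 0.0),
                new Filing("VA", 80.0));

        // Lambdas plus the Streams API: filter, then aggregate per state.
        Map<String, Double> totalDueByState = filings.stream()
                .filter(f -> f.amountDue > 0)
                .collect(Collectors.groupingBy(
                        f -> f.state,
                        Collectors.summingDouble(f -> f.amountDue)));

        System.out.println(totalDueByState); // prints {VA=200.0}
    }
}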
Apache Spark
Proficient in Spark Core, Spark SQL, and DataFrame/Dataset APIs
Understanding of RDDs and when to use them
Experience with Spark Streaming or Structured Streaming (illustrated after this list)
Skilled in performance tuning and Spark job optimization
Ability to use Spark UI for troubleshooting stages and tasks
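As a sketch of the Structured Streaming experience referenced above, the following Java example reads from Kafka and writes micro-batches to Parquet with checkpointing. The broker, topic, and paths are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

import java.util.concurrent.TimeoutException;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.StreamingQueryException;

public class KafkaStreamExample {
    public static void main(String[] args)
            throws TimeoutException, StreamingQueryException {
        SparkSession spark = SparkSession.builder()
                .appName("structured-streaming-sketch")
                .getOrCreate();

        // Hypothetical broker and topic names.
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker:9092")
                .option("subscribe", "filings")
                .load();

        // Kafka rows carry binary key/value columns; cast value to text.
        Dataset<Row> decoded = events.selectExpr("CAST(value AS STRING) AS payload");

        // File sink with checkpointing, which makes the query restartable.
        StreamingQuery query = decoded.writeStream()
                .format("parquet")
                .option("path", "/mnt/streaming/filings")
                .option("checkpointLocation", "/mnt/checkpoints/filings")
                .start();

        query.awaitTermination();
    }
}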
Big Data Ecosystem
Familiarity with HDFS, Hive, or HBase
Experience integrating with Kafka, S3, or Azure Data Lake
Comfort with Parquet, Avro, or ORC file formats (see the sketch after this list)
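A brief sketch of the file-format and S3 integration points mentioned above. The bucket name is a placeholder, and reading s3a:// paths assumes the hadoop-aws connector is available; Parquet and ORC support are built into Spark, while Avro requires the separate spark-avro package.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class FileFormatExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("file-format-sketch")
                .getOrCreate();

        // Read columnar Parquet input from a hypothetical S3 bucket.
        Dataset<Row> parquetIn = spark.read()
                .parquet("s3a://example-bucket/raw/filings/");

        // Write the same data back out as ORC.
        parquetIn.write()
                .mode("overwrite")
                .orc("s3a://example-bucket/curated/filings_orc/");

        spark.stop();
    }
}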
Data Processing and ETL
Strong understanding of batch and real-time data processing paradigms
Experience building ETL pipelines with Spark
Proficient in data cleansing, transformation, and enrichment (see the sketch after this list)
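A small sketch of a cleanse-transform-enrich step in Spark, as referenced above; all column names and paths are illustrative.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.*;

public class CleansingExample {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("etl-cleansing-sketch")
                .getOrCreate();

        // Hypothetical staged input.
        Dataset<Row> staged = spark.read().parquet("/mnt/staged/filings");

        Dataset<Row> cleansed = staged
                // Cleanse: drop exact duplicates and rows missing the key.
                .dropDuplicates()
                .na().drop(new String[] {"tin"})
                // Transform: normalize a free-text column.
                .withColumn("state", upper(trim(col("state"))))
                // Enrich: derive a flag used by downstream consumers.
                .withColumn("is_delinquent", col("amount_due").gt(lit(0)));

        cleansed.write().mode("overwrite").parquet("/mnt/curated/filings");
        spark.stop();
    }
}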
DevOps / Deployment
Experience with YARN, Kubernetes, or EMR for Spark deployment
Familiarity with CI/CD tools like Jenkins or GitHub Actions
Monitoring experience with Grafana, Prometheus, Datadog, or Spark UI logs
Version Control & Build Tools
Proficient in Git
Experience with Maven or Gradle
Testing
Unit testing with JUnit or TestNG (see the test sketch after this list)
Experience with Mockito or similar mocking frameworks
Data validation and regression testing for Spark jobs
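A minimal example of the unit-testing expectation above, using JUnit 5 against a local SparkSession; a TestNG version would be structurally similar, and Mockito would typically be used to mock non-Spark collaborators around this logic. The schema and assertion are illustrative.

import java.util.Arrays;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.StructType;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import static org.apache.spark.sql.functions.col;
import static org.junit.jupiter.api.Assertions.assertEquals;

class FilterJobTest {
    static SparkSession spark;

    @BeforeAll
    static void setUp() {
        // local[*] runs Spark in-process, so the test needs no cluster.
        spark = SparkSession.builder()
                .master("local[*]")
                .appName("filter-job-test")
                .getOrCreate();
    }

    @AfterAll
    static void tearDown() {
        spark.stop();
    }

    @Test
    void keepsOnlyPositiveAmounts() {
        StructType schema = new StructType().add("amount_due", "double");
        Dataset<Row> input = spark.createDataFrame(
                Arrays.asList(RowFactory.create(120.0), RowFactory.create(0.0)),
                schema);

        long kept = input.filter(col("amount_due").gt(0)).count();
        assertEquals(1L, kept);
    }
}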
Soft Skills / Engineering Practices
Experience working in Agile/Scrum environments
Strong documentation skills (Markdown, Confluence, etc.)
Ability to debug and troubleshoot production issues effectively
Preferred Qualifications:
Experience with Scala or Python in Spark environments
Familiarity with Databricks or Google Dataproc
Knowledge of Delta Lake or Apache Iceberg
Experience with data modeling and performance design for big data systems