Java Spark Developer

Overview

On Site
BASED ON EXPERIENCE
Full Time
Contract - Independent
Contract - W2

Skills

Continuous Integration
Reporting
Data Extraction
PL/SQL
Workflow
Reliability Engineering
Data Governance
Data Engineering
SQL
Java
Apache Spark
Relational Databases
Oracle
NoSQL
Database
Apache Cassandra
Scheduling
Grafana
BMC Control-M
Extract
Transform
Load
Apache Airflow
Cloud Computing
Amazon Web Services
Data Modeling
Performance Tuning
Splunk
Dynatrace
Continuous Integration and Development
Continuous Delivery
Databricks
ICE
Tableau
Application Development
Software Modernization
Process Outsourcing
IT Service Management

Job Details

Job Description:

Job Title: Java Spark Developer


Experience Level: 8-12 years
Location: Wilmington, DE (5 days onsite every week)

Job Description (Engineering / Technology):
Key responsibilities include developing efficient data workflows, monitoring database performance, ensuring data governance and security, and working with cloud platforms (AWS, including AWS Glue). Familiarity with the Control-M scheduler, the Grafana monitoring/visualization tool, pipeline frameworks such as Apache Airflow, and monitoring tools like Splunk and Dynatrace is essential. Experience with CI/CD pipelines (Jules, True CD) and knowledge of Databricks or Iceberg is a plus. Exposure to Tableau reporting is an advantage.

Required qualifications, capabilities, and skills:

" Develop and optimize data pipelines using Java Spark and Photon for efficient data extraction, transformation, and loading (ETL) processes.
" Experience with backend Delta lake, PL/SQL
" Write clean, efficient, and well-documented code to automate data workflows and integrate with various data sources.
" Monitor and troubleshoot database performance, ensuring optimal query execution and system reliability.
" Implement data governance and security practices to comply with organizational and regulatory standards.
" 5+ years of experience in data engineering with strong proficiency in SQL and Java Spark.
" Hands-on experience with relational databases (e.g., Oracle, AWS Aurora) and NoSQL databases (e.g., Cassandra).
" Experience with scheduling and monitoring tools Grafana, Control-M
" Familiarity with ETL tools, data pipeline frameworks (e.g., Apache Airflow), and cloud platforms (AWS) and AWS Glue
" Knowledge of data modelling, schema design, and performance optimization techniques.
" Experience with monitoring tools Splunk and Dynatrace
" Experience with CICD pipeline like Jules and true CD
" Good to have experience Databricks / ice-berg
" Familiarity with Tableau reports is a plus"

Additional Job Details:

About Tanisha Systems, Inc.

Tanisha Systems, founded in 2002 in Massachusetts-*, is a leading provider of custom application development and end-to-end IT services to clients globally. We use a client-centric engagement model that combines local on-site and off-site resources with the cost, global expertise, and quality advantages of offshore operations. We deliver custom application development, application modernization, business process outsourcing, and professional IT services from office locations in * and *.
Tanisha Systems serves clients in the Government, Banking & Financial Markets, Insurance, Healthcare, Retail & Consumer Goods, Energy & Utilities, Life Sciences, Telecom, Manufacturing, and Transportation industries around the globe. Our engagement model provides a flexible operational environment that empowers our clients with the right levels of control.

Want to read more about Tanisha Systems? Visit us at

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.