Job Details
Java Back-End Engineer (Ex-Capital One)
Skills: Java, ETL, Spark, AWS, Glue
Location: McLean, VA
Employment: W2
Job Summary
We are seeking a highly experienced Senior Java Back-End Engineer with strong expertise in ETL pipelines, Apache Spark, AWS services, and AWS Glue. The ideal candidate will have experience working in Capital One or similar financial enterprise environments, with deep understanding of distributed data processing, microservices, and cloud-native development.
Required Qualifications
10+ years of overall IT experience with strong hands-on backend development.
8+ years of Core Java development (Java 8/11/17) including multithreading, collections, and OOP design.
Strong experience building APIs and microservices using Spring Boot.
Hands-on ETL experience designing and developing large-scale data pipelines.
3-5+ years of Apache Spark for distributed data processing (RDDs, DataFrames, Spark SQL).
Strong experience with AWS:
AWS Glue (ETL development)
Lambda
S3
EMR
RDS
CloudWatch
IAM
Experience developing serverless ETL jobs with AWS Glue and Python/Scala/Java.
Strong understanding of data migration workflows, schema transformations, and performance tuning.
Experience with CI/CD pipelines (Jenkins, GitHub Actions, CodePipeline, etc.).
Experience working in an Agile/Scrum environment.
Strong communication skills and the ability to work in fast-paced financial environments.
Preferred Qualifications
Prior experience at Capital One or large financial institutions.
Experience with Kafka / event-driven architectures.
Hands-on experience with relational (SQL) and NoSQL databases.
Experience with Terraform or CloudFormation for IaC (nice to have).
Experience with Airflow or other workflow orchestrators.
Knowledge of cloud security and compliance frameworks.
Key Responsibilities
Develop and maintain Java-based microservices and backend applications.
Design, build, and optimize ETL pipelines using Spark and AWS Glue.
Work with cross-functional teams to support data migration and transformation initiatives.
Integrate distributed data systems and ensure scalability and reliability.
Optimize Spark jobs for performance and cost efficiency on AWS.
Develop reusable frameworks for ingestion, transformation, and processing of data.
Collaborate with architects and business stakeholders to gather requirements and deliver solutions.
Participate in code reviews, performance tuning, and production support.
Work within Agile teams following best engineering practices.