Overview
Hybrid
$60 - $65 per hour
Accepts Corp-to-Corp applications
Contract - Independent
Contract - W2
Contract - 12 Month(s)
Able to Provide Sponsorship
Skills
Electronic Health Record (EHR)
Data Modeling
Data Processing
Data Warehouse
Continuous Delivery
Continuous Integration
Data Lake
Data Security
Cloud Architecture
Cloud Computing
Collaboration
Communication
Data Engineering
Amazon Redshift
Amazon S3
Amazon Web Services
Apache Kafka
Apache Spark
Encryption
Docker
Extract, Transform, Load (ETL)
Innovation
GitHub
Kubernetes
Jenkins
Leadership
Management
Mentorship
Optimization
Orchestration
Payment Card Industry
Performance Tuning
PySpark
Python
Regulatory Compliance
SQL
Software Development
AWS Step Functions
Streaming
Terraform
Workflow
Job Details
Tech Lead Data Engineer (Capital One Project Experience Preferred)
We are seeking a Tech Lead Data Engineer with strong expertise in Python, Spark, and AWS to build and lead data engineering solutions in a large-scale enterprise environment.
This role requires hands-on technical depth and leadership to guide data teams, enforce best practices, and deliver highly scalable and reliable cloud-based data systems. Candidates with prior Capital One experience (direct or through consulting vendors) are highly preferred.
Key Responsibilities:
- Lead the design, development, and optimization of enterprise-grade data pipelines and ETL workflows.
- Architect data lake and data warehouse solutions using AWS services such as S3, Glue, EMR, and Redshift.
- Build scalable data processing systems using Python and PySpark for both batch and streaming workloads.
- Implement and manage CI/CD pipelines for data workflows using Jenkins, GitHub Actions, or AWS CodePipeline.
- Collaborate closely with data scientists, analysts, and product teams to deliver efficient, reusable, and well-documented solutions.
- Mentor junior engineers and enforce data engineering standards, code reviews, and architecture best practices.
- Work in a hybrid setup (3 days onsite per week) at the McLean, VA or Richmond, VA location.
Required Skills:
- 8+ years of experience in Data Engineering or Software Development.
- Advanced proficiency in Python, PySpark, and Spark SQL.
- Deep experience with AWS cloud services: S3, Glue, EMR, Lambda, Redshift, CloudFormation, and IAM.
- Strong understanding of data lakehouse concepts, schema management, and data modeling.
- Hands-on experience with ETL development, performance tuning, and large-scale data ingestion.
- Experience setting up CI/CD pipelines and automation frameworks.
- Excellent communication and leadership skills to coordinate across multiple teams.
Preferred Skills:
- Prior experience with Capital One (highly preferred).
- Experience with Kafka, Airflow, or AWS Step Functions.
- Familiarity with Terraform or CloudFormation for Infrastructure as Code.
- Experience with Docker, Kubernetes, or other container orchestration platforms.
- Working knowledge of data security, encryption, and compliance (PCI, GDPR).
Why Join:
- Opportunity to work on cutting-edge cloud-native data solutions at an enterprise scale.
- Collaborate with Capital One engineering teams known for innovation in data and cloud architecture.
- Long-term engagement with growth and leadership opportunities.