Overview
On Site
$30 - $40
Contract - Independent
Contract - W2
Skills
Agile
Amazon Kinesis
Amazon S3
Amazon Web Services
Analytical Skills
Analytics
Apache Avro
Apache Kafka
Apache Parquet
Apache Spark
Attention To Detail
Big Data
Cloud Computing
Communication
Conflict Resolution
Continuous Delivery
Continuous Integration
Data Engineering
Data Extraction
Data Governance
Data Modeling
Data Processing
Distributed Computing
Documentation
Electronic Health Record (EHR)
Extract, Transform, Load (ETL)
FOCUS
File Formats
Finance
GitHub
JSON
Jenkins
Management
Metadata Management
Migration
Orchestration
Problem Solving
PySpark
Python
Regulatory Compliance
SQL
Scrum
Soft Skills
Step Functions
Stored Procedures
Streaming
Supervision
Terraform
Workflow
Job Details
We are seeking an experienced AWS Data Engineer to support large-scale data processing, migration, and analytics initiatives for Vanguard. The ideal candidate has strong hands-on experience with AWS cloud services, ETL development, big data processing, and Python/Spark, a solid grasp of data engineering principles, and the ability to work collaboratively in an enterprise financial environment.
Key Responsibilities:
- Design, develop, and optimize AWS-based data pipelines for large-scale data ingestion and processing.
- Build scalable ETL workflows using Python, PySpark, Glue, EMR, and Step Functions.
- Develop, maintain, and optimize S3 data lakes, ensuring efficient partitioning, metadata management, and data governance.
- Implement data transformations using AWS Glue, Lambda, PySpark, and Spark SQL.
- Perform data extraction, cleansing, validation, and quality checks.
- Support migration initiatives from on-prem data systems to AWS cloud.
- Build reusable, modular, and high-performance ETL components.
- Work with internal teams to integrate data from multiple sources and ensure data consistency.
- Monitor pipeline performance, troubleshoot issues, and implement continuous improvements.
- Maintain documentation, follow Agile practices, and ensure compliance with security and governance standards.
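The cleansing and validation responsibilities above can be sketched in a minimal, self-contained form. This is an illustrative sketch only, assuming a hypothetical record schema with `id`, `amount`, and `date` fields; production pipelines in this role would run as PySpark jobs on Glue or EMR rather than plain Python:

```python
# Minimal sketch of a record cleansing/validation step.
# Hypothetical schema: "id", "amount", "date" (YYYY-MM-DD).
from datetime import datetime

REQUIRED_FIELDS = ("id", "amount", "date")


def validate_record(record: dict) -> bool:
    """Reject records with missing fields, non-numeric amounts, or bad dates."""
    if any(record.get(f) in (None, "") for f in REQUIRED_FIELDS):
        return False
    try:
        float(record["amount"])
        datetime.strptime(record["date"], "%Y-%m-%d")
    except (TypeError, ValueError):
        return False
    return True


def cleanse(records: list[dict]) -> list[dict]:
    """Keep only valid records, normalizing the amount to a float."""
    return [
        {**r, "amount": float(r["amount"])}
        for r in records
        if validate_record(r)
    ]
```

For example, `cleanse([{"id": 1, "amount": "10.5", "date": "2024-01-31"}, {"id": 2, "amount": "x", "date": "2024-01-31"}])` keeps only the first record, with `amount` converted to `10.5`.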
Required Technical Skills:
- 3-5+ years of hands-on experience in AWS data engineering.
- Strong experience with AWS Glue, S3, EMR, Lambda, IAM, Step Functions.
- Expertise in Python and PySpark for ETL and data processing.
- Strong SQL experience (analysis, tuning, stored procedures).
- Knowledge of data lakehouse concepts, partitioning strategies, schema management, and versioning.
- Experience with ETL development and large-scale data ingestion frameworks.
- Familiarity with Athena, Glue Catalog, and CloudWatch.
- Strong understanding of data modeling, file formats (Parquet, Avro, JSON), and distributed computing.
Preferred Skills:
- Experience working for financial clients such as Vanguard (highly preferred).
- Experience with Airflow, Kafka, or Kinesis for orchestration/streaming.
- Exposure to Terraform/CloudFormation for Infrastructure as Code (IaC).
- Familiarity with CI/CD tools such as GitHub, Jenkins, or CodePipeline.
- Experience in Agile/Scrum environments.
Soft Skills:
- Excellent communication skills to coordinate across distributed teams.
- Strong analytical, troubleshooting, and problem-solving abilities.
- Ability to work independently with minimal supervision.
- Detail-oriented with a focus on reliability and performance.
Why Join:
- Opportunity to work on large-scale AWS-based enterprise data platforms.
- Long-term engagement with a leading financial client.
- Work with modern cloud-native technologies in a collaborative environment.