job summary:
Role: Data Engineer (Intermediate Level; Preferred Experience: 8+ Years)
Role Summary: The contractor will support the Middle Office Digital Activity team by building, enhancing, and maintaining scalable data products and pipelines on AWS. The role focuses on transforming and delivering reliable datasets using SQL, Python, and PySpark, with AWS Glue as the primary ETL tool.
Required Skillset:
SQL
* Strong proficiency in writing complex, performant queries
* Experience with analytical datasets and data transformations
Python
* Solid experience building data pipelines and reusable modules
* Familiarity with data processing libraries and structured code practices
PySpark
* Hands-on experience developing PySpark jobs for large-scale data processing
* Understanding of distributed data processing concepts
AWS Data Services
* AWS Glue (primary ETL tool - required)
* Experience working with S3-based data lakes
* Familiarity with IAM, job scheduling, and monitoring in AWS
Data Engineering Fundamentals
* ETL/ELT design patterns and data modeling techniques
* Data quality checks and basic validation frameworks
* Understanding of partitioning, schema evolution, and performance optimization
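To illustrate the "data quality checks and basic validation frameworks" bullet above, here is a minimal sketch of a row-level validation pass in plain Python. The field names and rules (trade IDs, amounts, currencies) are hypothetical examples, not part of this role's actual datasets; in a real pipeline the same pattern would typically run inside a PySpark or Glue job before a dataset is published.

```python
# Minimal sketch of a row-level data quality check. All field names
# and validation rules below are hypothetical examples.

def validate_row(row: dict) -> list[str]:
    """Return a list of human-readable quality failures for one record."""
    failures = []
    if not row.get("trade_id"):
        failures.append("missing trade_id")
    amount = row.get("amount")
    if amount is None or amount < 0:
        failures.append("amount must be a non-negative number")
    if row.get("currency") not in {"USD", "EUR", "GBP"}:
        failures.append(f"unexpected currency: {row.get('currency')!r}")
    return failures


def partition_by_quality(rows):
    """Split records into (valid_rows, rejected_rows_with_reasons)."""
    valid, rejected = [], []
    for row in rows:
        failures = validate_row(row)
        if failures:
            rejected.append((row, failures))
        else:
            valid.append(row)
    return valid, rejected
```

Rejected rows are kept alongside their failure reasons rather than silently dropped, so they can be routed to a quarantine location (e.g., a separate S3 prefix) for inspection.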
Optional Skillsets:
* Experience with Redshift or any other MPP database.
* Knowledge of Apache Iceberg or other open table formats
* Exposure to CI/CD for data pipelines.
* Experience supporting event-driven designs and systems, preferably Kafka
* Familiarity with data governance, lineage, or cataloging tools
location: Malvern, Pennsylvania
job type: Solutions
salary: $51 - 56 per hour
work hours: 8am to 5pm
education: Bachelor's degree
responsibilities:
See the Role Summary and Required Skillset in the job summary above.
qualifications:
See the Role Summary and Required Skillset in the job summary above.
Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.
At Randstad Digital, we welcome people of all abilities and want to ensure that our hiring and interview process meets the needs of all applicants. If you require a reasonable accommodation to make your application or interview experience a great one, please contact
Pay offered to a successful candidate will be based on several factors including the candidate's education, work experience, work location, specific job duties, certifications, etc. In addition, Randstad Digital offers a comprehensive benefits package, including: medical, prescription, dental, vision, AD&D, and life insurance offerings, short-term disability, and a 401K plan (all benefits are based on eligibility).
This posting is open for thirty (30) days.
Any consideration of a background check would be an individualized assessment based on the applicant or employee's specific record and the duties and requirements of the specific job.