Job Details
We're looking for a sharp and experienced Senior Data Engineer / ETL Expert to join our team. You'll be responsible for designing, building, and optimizing scalable data pipelines using our core stack: AWS, PySpark, Python, and modern ETL frameworks. This role is critical to our data infrastructure, ensuring reliable data flow and accessibility across the organization.
location: Charlotte, North Carolina
job type: Contract
salary: $65 - $70 per hour
work hours: 8am to 5pm
education: Bachelor's
responsibilities:
- Architect, build, and maintain robust ETL pipelines using PySpark and Python
- Design scalable data processing solutions on AWS (e.g., EMR, Glue, S3, Lambda)
- Collaborate with data scientists, analysts, and software engineers to ensure clean and usable data
- Hands-on experience with AWS services such as Glue, RDS, S3, Step Functions, EventBridge, Lambda, MSK (Kafka), and EKS
- Hands-on experience with databases such as PostgreSQL, SQL Server, Oracle, and Sybase
- Hands-on experience with SQL programming and performance tuning, relational model analysis, queries, stored procedures, views, functions, and triggers
- Strong technical experience in design (mapping specifications, HLD, LLD) and development (coding, unit testing)
- Good knowledge of CI/CD and DevOps processes and tools such as Bitbucket, GitHub, and Jenkins
- Strong foundation and experience in data modeling, data warehousing, data mining, data analysis, and data profiling
- Optimize data workflows for performance and scalability
- Monitor and troubleshoot data pipeline performance and reliability
- Ensure data quality, integrity, and compliance with governance standards
- Document systems and processes for ongoing support and scalability
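For candidates unfamiliar with the pattern, the pipeline work above centers on extract-transform-load (ETL). A minimal stdlib-Python sketch of that shape is below; the data, field names, and aggregation are purely illustrative, and a production pipeline on this stack would use PySpark DataFrames with AWS Glue and S3 rather than in-memory lists:

```python
# Toy illustration of the extract-transform-load pattern.
# All data and field names are hypothetical.
import csv
import io

RAW = """order_id,amount,region
1,120.50,east
2,,west
3,87.25,east
"""

def extract(text):
    """Extract: parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with missing amounts and cast types."""
    out = []
    for row in rows:
        if not row["amount"]:
            continue  # basic data-quality filter
        out.append({
            "order_id": int(row["order_id"]),
            "amount": float(row["amount"]),
            "region": row["region"],
        })
    return out

def load(rows):
    """Load: aggregate totals per region (stand-in for a warehouse write)."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

print(load(transform(extract(RAW))))  # → {'east': 207.75}
```

The row with a missing amount is filtered in the transform step, so only the two "east" orders reach the load step.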
qualifications:
- 7+ years of experience in data engineering or related roles
- Strong expertise in ETL pipeline design and development
- Proficiency with PySpark and Python for data processing
- Deep experience with AWS data services (e.g., S3, Glue, Redshift, EMR)
- Solid understanding of distributed data systems and performance tuning
- Experience with data modeling and warehousing concepts
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills
skills:
- Strong expertise in ETL pipeline design and development
- Proficiency with PySpark and Python for data processing
- Deep experience with AWS data services (e.g., S3, Glue, Redshift, EMR)
Equal Opportunity Employer: Race, Color, Religion, Sex, Sexual Orientation, Gender Identity, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status.
At Randstad Digital, we welcome people of all abilities and want to ensure that our hiring and interview process meets the needs of all applicants. If you require a reasonable accommodation to make your application or interview experience a great one, please contact
Pay offered to a successful candidate will be based on several factors including the candidate's education, work experience, work location, specific job duties, certifications, etc. In addition, Randstad Digital offers a comprehensive benefits package, including: medical, prescription, dental, vision, AD&D, and life insurance offerings, short-term disability, and a 401K plan (all benefits are based on eligibility).
This posting is open for thirty (30) days.