Seattle, Washington • Today
AWS Data/BI Engineer (Onsite, Seattle, WA). Permanent residents and citizens only; no H-1B candidates. Skills: AWS, Python, Redshift, Glue, Athena, Lambda. Thanks & Regards, Srikrishna P
Contract
Depends on Experience
Join a leading global consulting team as an AWS Data Engineer focused on high-level data integration. You will serve as a bridge between business requirements and technical execution, specifically designing pathways for SAP to communicate with external applications via AWS technologies. This is a critical role for a consultant who enjoys solving complex data warehousing challenges in a Pacific Time Zone-aligned environment.
Must be able to work without sponsorship
5+ years of experience with Python, SQL, and Data Modeling.
Proven expertise in AWS Stack, specifically Redshift and S3.
Experience building ETL pipelines and working with Data Lakes.
Local to Seattle, WA
Equal Opportunity Employer: all qualified applicants will receive consideration without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, genetic information, disability, protected veteran status, or any other legally protected group status.
At Randstad Digital, we welcome people of all abilities and want to ensure that our hiring and interview process meets the needs of all applicants. If you require a reasonable accommodation to make your application or interview experience a great one, please contact
Pay offered to a successful candidate will be based on several factors including the candidate's education, work experience, work location, specific job duties, certifications, etc. In addition, Randstad Digital offers a comprehensive benefits package, including: medical, prescription, dental, vision, AD&D, and life insurance offerings, short-term disability, and a 401K plan (all benefits are based on eligibility).
This posting is open for thirty (30) days.
No location provided • Today
Role: Senior AWS Data Engineer
Location: Remote
Duration: Long-Term Contract
Looking for W2 candidates; no C2C.

Must have:
- PySpark ramp-up
- Glue job hands-on proof
- Dimensional modeling

Core responsibilities:
- Develop and maintain PySpark-based ETL pipelines for batch and incremental data processing
- Build and operate AWS Glue Spark jobs (batch and event-driven), including:
  - Job configuration, scaling, retries, and cost optimization
  - Glue Catalog and schema management
- Design and maintain event-driven …
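The incremental-processing responsibility above is commonly implemented with a watermark pattern: each run processes only records newer than the high-water mark saved by the previous run. The sketch below is illustrative only, in plain Python with an in-memory batch and a hypothetical `updated_at` field; a real Glue job would read source tables via the Glue Catalog and persist the watermark externally (e.g. in job bookmarks or a control table).

```python
from datetime import datetime, timezone

def incremental_batch(records, watermark):
    """Return records newer than the watermark, plus the advanced watermark.

    records: iterable of dicts carrying an 'updated_at' datetime (hypothetical schema).
    watermark: datetime of the newest record processed by the previous run.
    """
    # Keep only rows that arrived after the last successful run.
    new_rows = [r for r in records if r["updated_at"] > watermark]
    # Advance the watermark to the newest row seen; keep it unchanged if no new rows.
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

# Illustrative run: only the record newer than the watermark is selected.
wm = datetime(2024, 1, 1, tzinfo=timezone.utc)
batch = [
    {"id": 1, "updated_at": datetime(2023, 12, 31, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 1, 2, tzinfo=timezone.utc)},
]
rows, wm = incremental_batch(batch, wm)
```

The same filter translates directly to PySpark as a `WHERE updated_at > :watermark` predicate pushed down to the source, which keeps incremental runs cheap regardless of table size.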
Contract
$DOE