Overview
On Site
$60 - $70 per hour
Contract - W2
Contract - 6 Month(s)
No Travel Required
Skills
AWS
Glue
Redshift
S3
Job Details
Role: AWS Engineer
Location: Seattle, WA / Austin, TX (4 days onsite)
Duration: 6+ months
Summary:
We are seeking a highly skilled AWS Engineer to help build a cutting-edge data platform. The ideal candidate will have deep experience in AWS infrastructure, data lake architecture, and large-scale data pipeline development. This role demands hands-on expertise in AWS services such as Glue, EMR, Redshift, S3, and SageMaker, along with strong SQL, Python, and PySpark skills.
Key Responsibilities:
- Architect, develop, and maintain scalable AWS-based data lake and ETL/ELT solutions.
- Leverage AWS Glue (including development endpoints), EMR, CloudFormation, S3, Redshift, and EC2 to build distributed and secure data platforms.
- Set up and optimize Jupyter/SageMaker Notebooks for advanced analytics and data science collaboration.
- Develop robust data pipelines using Spark clusters, ensuring performance, fault tolerance, and maintainability.
- Build connectors to ingest and process data from distributed sources using various integration tools and frameworks.
- Write efficient, production-grade SQL, Python, and PySpark code for data transformation and analysis (a brief illustrative sketch follows this list).
- Lead proof-of-concept (PoC) efforts and scale them into production-ready systems.
- Stay current with emerging data and cloud technologies, offering guidance on how to apply them effectively to solve complex technical and business challenges.
- Collaborate with cross-functional teams, including data scientists, analysts, and product stakeholders.
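Purely for illustration, and not part of the formal requirements: the following is a minimal sketch of the kind of Glue/PySpark transformation work this role involves. The database, table, and S3 path names (sales_db, raw_orders, s3://example-data-lake/...) are hypothetical placeholders, not details of the client environment.

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw data registered in the Glue Data Catalog (hypothetical database/table)
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Transform with the Spark DataFrame API: total revenue per order date
orders_df = orders.toDF()
daily_revenue = orders_df.groupBy("order_date").agg(F.sum("amount").alias("total_amount"))

# Write curated results back to the data lake as partitioned Parquet (hypothetical bucket)
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-data-lake/curated/daily_revenue/"
)

job.commit()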
Required Skills:
- Proven experience setting up and managing AWS infrastructure with CloudFormation, Glue, EMR, Redshift, S3, EC2, and SageMaker.
- Strong knowledge of Data Lake architecture and data ingestion frameworks.
- 5+ years of experience in Data Engineering and Data Warehouse development.
- Advanced proficiency in SQL, Python, and PySpark.
- Experience designing and optimizing complex Spark-based data pipelines on AWS.
- Ability to troubleshoot performance bottlenecks and production issues in large-scale distributed systems.
- Strong leadership in taking PoCs to production through structured engineering practices.
Preferred Qualifications:
- AWS certifications (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect).
- Prior experience with enterprise-scale clients such as Amazon or other FAANG companies.
- Familiarity with DevOps practices and tools like Terraform, Jenkins, Docker, and Git.