AWS Data Engineer

Overview

Remote
Depends on Experience
Contract - W2
Contract - 3 Month(s)

Skills

AWS
Linux
ETL Process
Data Pipelines
Python
PySpark
Lambda
Airflow
Snowflake
AWS Glue
EMR
Workflows
CI/CD
IaC
DevOps Automation
S3
Step Functions
SQL
Scripting

Job Details

One of our direct clients is looking for an AWS Data Engineer. This is REMOTE work, but the candidate must be able to travel to the NOLA location whenever required.

If you are currently looking for a new opportunity, review the job description below and, if you are comfortable with it, share your updated resume along with your details as soon as possible.

AWS Data Engineer
Remote With Travel to NOLA

Contract through at least the end of the year.
Sr.-level role requiring excellent communication skills.
Supports the Supply Chain organization.

Position Overview:

We are looking for a Sr. AWS Data Engineer to lead the migration of existing Linux-based ETL processes into modern, scalable AWS data pipelines. The role requires deep expertise in Python, PySpark, Lambda, Airflow, and Snowflake to re-architect legacy workloads into cloud-native solutions.

Key Responsibilities:

  • Lead the migration of Linux-based ETL jobs to AWS-native pipelines, ensuring performance, scalability, and cost-efficiency.
  • Design, build, and optimize ETL/ELT workflows using AWS Glue, EMR, Lambda, Step Functions, and Airflow.
  • Develop distributed data processing solutions using PySpark for large-scale transformations.
  • Integrate and optimize pipelines for Snowflake as the primary data warehouse.
  • Ensure robust data quality, monitoring, and observability across all pipelines.
  • Partner with data architects, business analysts, and stakeholders to align migration strategies with business needs.
  • Establish best practices for CI/CD, infrastructure as code (IaC), and DevOps automation in data engineering workflows.
  • Troubleshoot performance bottlenecks and optimize processing costs on AWS.

Required Skills & Qualifications:

  • 8+ years of experience in data engineering, with at least 3 years in AWS cloud environments.
  • Strong background in Linux-based ETL frameworks and their migration to cloud-native pipelines.
  • Expertise in Python, PySpark, SQL, and scripting for ETL/ELT processes.
  • Hands-on experience with AWS Glue, Lambda, EMR, S3, Step Functions, and Airflow.
  • Strong knowledge of Snowflake data warehouse integration and optimization.
  • Proven ability to handle large-scale, complex data processing and transformation pipelines.
  • Familiarity with data governance, security, and compliance best practices in AWS.

Preferred Qualifications:

  • Experience with Terraform or CloudFormation for infrastructure automation.
  • Familiarity with real-time data streaming (Kafka, Kinesis).
  • Exposure to machine learning pipelines on AWS.
