Sr. AWS Data Engineer

Overview

Remote
Depends on Experience
Full Time

Skills

Amazon Kinesis
Amazon S3
Amazon Web Services
Apache Kafka
Cloud Computing
Continuous Delivery
Continuous Integration
Data Engineering
Data Governance
Data Processing
Data Quality
Data Warehouse
DevOps
Electronic Health Record (EHR)
Extract, Transform, Load (ETL)
Linux
Machine Learning (ML)
Migration
Optimization
PySpark
Python
Real-time
Regulatory Compliance
SQL
Scalability
Scripting
Snowflake
Step Functions
Streaming
Terraform
Workflow
ELT

Job Details

Hi,

Greetings! Please find the job description below. Please reply if you are interested in this position.
Role: Sr. AWS Data Engineer

Location: Remote (CST/EST)

Duration: Contract

Work Authorization: U.S. Citizen (USC)

Experience: 15+ years

Position Overview:

We are looking for a Sr. AWS Data Engineer to lead the migration of existing Linux-based ETL processes into modern, scalable AWS data pipelines. The role requires deep expertise in Python, PySpark, Lambda, Airflow, and Snowflake to re-architect legacy workloads into cloud-native solutions.
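
To give candidates a concrete feel for the work, here is a minimal, hypothetical PySpark sketch of the kind of transformation this migration involves: reading raw extracts from S3, cleaning them, and writing partitioned Parquet for Snowflake ingestion. All bucket names, columns, and paths are illustrative placeholders, not details of the actual pipelines.

```python
# Hypothetical sketch only: bucket names, columns, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("legacy-etl-migration").getOrCreate()

# Read raw CSV extracts landed in S3 by the legacy Linux ETL.
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Standardize types, derive a load date, and deduplicate on the business key.
cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("load_date", F.current_date())
       .dropDuplicates(["order_id"])
)

# Write partitioned Parquet for downstream Snowflake ingestion.
(cleaned.write.mode("overwrite")
        .partitionBy("load_date")
        .parquet("s3://example-curated-bucket/orders/"))
```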

Key Responsibilities:

Lead the migration of Linux-based ETL jobs to AWS-native pipelines, ensuring performance, scalability, and cost-efficiency.

Design, build, and optimize ETL/ELT workflows using AWS Glue, EMR, Lambda, Step Functions, and Airflow (a minimal orchestration sketch follows this list).

Develop distributed data processing solutions using PySpark for large-scale transformations.

Integrate and optimize pipelines for Snowflake as the primary data warehouse.

Ensure robust data quality, monitoring, and observability across all pipelines.

Partner with data architects, business analysts, and stakeholders to align migration strategies with business needs.

Establish best practices for CI/CD, infrastructure as code (IaC), and DevOps automation in data engineering workflows.

Troubleshoot performance bottlenecks and optimize processing costs on AWS.
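
As an illustration of the orchestration pattern referenced above, the following is a minimal, hypothetical Airflow DAG that runs a Glue-hosted PySpark job and then loads the curated output into Snowflake. The Glue job name, connection IDs, and SQL are assumptions made for the sketch, and the Amazon and Snowflake Airflow provider packages would need to be installed.

```python
# Hypothetical orchestration sketch: job names, connection IDs, and SQL are
# placeholders. Requires apache-airflow-providers-amazon and
# apache-airflow-providers-snowflake.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="orders_etl_migration_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Run the PySpark transformation as an AWS Glue job.
    transform = GlueJobOperator(
        task_id="run_glue_transform",
        job_name="example-orders-transform",  # hypothetical Glue job name
        region_name="us-east-1",
    )

    # Load the curated Parquet from S3 into Snowflake via COPY INTO.
    load = SnowflakeOperator(
        task_id="load_snowflake",
        snowflake_conn_id="snowflake_default",
        sql="COPY INTO analytics.orders FROM @curated_stage/orders/",
    )

    transform >> load
```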

Required Skills & Qualifications:

8+ years of experience in data engineering, with at least 3 years in AWS cloud environments.

Strong background in Linux-based ETL frameworks and their migration to cloud-native pipelines.

Expertise in Python, PySpark, SQL, and scripting for ETL/ELT processes.

Hands-on experience with AWS Glue, Lambda, EMR, S3, Step Functions, and Airflow.

Strong knowledge of Snowflake data warehouse integration and optimization.

Proven ability to handle large-scale, complex data processing and transformation pipelines.

Familiarity with data governance, security, and compliance best practices in AWS.

Preferred Qualifications:

Experience with Terraform or CloudFormation for infrastructure automation.

Familiarity with real-time data streaming (Kafka, Kinesis); a brief producer sketch follows this list.

Exposure to machine learning pipelines on AWS.
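
For the streaming qualification above, here is a minimal boto3 sketch of producing a record to a Kinesis stream. The stream name and payload are hypothetical.

```python
# Hypothetical sketch: the stream name and payload are placeholders.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Publish a single JSON event; PartitionKey controls shard assignment.
event = {"order_id": "12345", "status": "shipped"}
response = kinesis.put_record(
    StreamName="example-orders-stream",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["order_id"],
)
print(response["SequenceNumber"])
```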

Thanks and regards,
Mohan Sundar
M2S Tech Solutions