AWS Data Engineer

Overview

Work arrangement: On Site
Compensation: Depends on Experience
Employment type: Full Time

Skills

AWS
ETL
Python
SQL

Job Details

Job Description: AWS Data Engineer

Location: Reston, Virginia (face-to-face interview)

We are looking for an AWS Data Engineer with strong experience in building and managing data pipelines and cloud-based data platforms. The role involves working with AWS services, ETL processes, and big data technologies to deliver reliable, scalable, and secure data solutions.

Responsibilities

Build and maintain ETL pipelines and data integration workflows on AWS.

Design and optimize data lakes and data warehouses (Redshift, S3, Glue, Athena, EMR).

Develop batch and real-time data processing solutions using tools such as Kinesis, Kafka, or Spark.

Work with SQL/NoSQL databases for data storage and analytics.

Collaborate with data scientists, analysts, and business teams to deliver data solutions.

Implement data quality checks, monitoring, and security best practices.

Support deployment, automation, and performance tuning of data pipelines.

Required Skills

Strong experience with AWS services: Redshift, S3, Glue, EMR, Athena, Lambda, Kinesis, DynamoDB.

Proficiency in Python, PySpark, or Scala for data processing.

Strong knowledge of SQL and NoSQL databases.

Hands-on experience with ETL tools, data modeling, and pipeline orchestration (Airflow, Step Functions, etc.).

Good understanding of data warehousing, big data frameworks, and distributed systems.

Preferred Skills

AWS certifications (Data Analytics, Solutions Architect, etc.).

Experience with DevOps tools (Docker, Kubernetes, Terraform, CI/CD).

Knowledge of machine learning data pipelines or advanced analytics.

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.