Overview
Looking to make a real impact with your cloud data engineering skills? Join a cutting-edge project modernizing a massive 28 TB data warehouse for a federal client, with remote flexibility included.

Skills
Data engineering, AWS migrations, converting SQL Server queries to Redshift, data governance, data validation, ETL pipelines, data warehousing, Python/PySpark

Job Details
Contract
Hybrid or Remote (VA)
$60-72/hr
We are seeking an AWS Data Migration Specialist to support a major data migration and transformation initiative. The ideal candidate will have extensive experience migrating on-premises SQL Server data warehouses to AWS Redshift and leading enterprise-scale ETL and data quality projects.
You will be working with a 28 TB data warehouse, helping modernize infrastructure, validate data integrity, and ensure high performance across cloud data systems.
Key Responsibilities:
Design, implement, and optimize ETL pipelines for large-scale data ingestion, transformation, and loading into AWS Redshift.
Migrate and convert complex SQL Server queries to Redshift-compatible SQL.
Perform data validation, cleansing, and reconciliation to ensure data quality and governance standards.
Collaborate with cross-functional teams to gather requirements and translate them into scalable data solutions.
Support performance tuning, troubleshooting, and optimization in Redshift and related AWS services.
Develop and maintain technical documentation for data processes, pipelines, and troubleshooting guides.
Enforce and enhance data governance practices across the environment.
Required Qualifications:
7+ years of experience in data engineering or a related role.
Proven experience with SQL Server to Redshift migration projects.
Hands-on expertise in AWS Redshift, SQL, ETL tools (e.g., AWS Glue, Apache Airflow), and Python/PySpark.
Deep understanding of data modeling, query optimization, and performance tuning on large datasets.
Strong knowledge of data warehousing concepts (e.g., star/snowflake schemas).
Experience with data quality, validation, and data governance frameworks.
Familiarity with data integration tools such as Apache Kafka, Fivetran, or comparable platforms.
Ability to troubleshoot complex data issues and optimize large-scale pipelines.