AWS Data Engineer with strong experience in AWS Glue, Python, PySpark, and SQL Server - Charlotte, NC - Hybrid (3 days onsite per week)

Overview

Hybrid
Depends on Experience
Accepts corp to corp applications
Contract - W2
Contract - Independent
Contract - 12 Month(s)

Skills

AWS Data Engineer
Python
PySpark
AWS Glue
SQL Server

Job Details

Job Title: AWS Data Engineer

Location: Charlotte, NC
Employment Type: Contract

Hybrid Work: 3 days onsite per week

About the Role

We are seeking a skilled AWS Data Engineer with strong experience in AWS Glue, Python, PySpark, and SQL Server to design, build, and maintain scalable data pipelines and ETL solutions in the AWS cloud environment. The ideal candidate will work closely with data analysts, data scientists, and business stakeholders to ensure seamless data integration, transformation, and delivery for analytics and reporting.

Key Responsibilities

  • Design, develop, and deploy ETL pipelines using AWS Glue, AWS Lambda, Step Functions, and PySpark.
  • Work with SQL Server and other data sources to extract, transform, and load (ETL) data into AWS-based data lakes and data warehouses (e.g., Amazon S3, Redshift).
  • Write optimized PySpark and Python scripts for large-scale data processing and transformations.
  • Implement data validation, quality checks, and monitoring to ensure data integrity and reliability.
  • Collaborate with data architects and analysts to define data models and metadata standards.
  • Manage and optimize AWS resources (Glue jobs, Glue Crawlers, Athena, EMR, S3, IAM, CloudWatch).
  • Troubleshoot data issues, performance bottlenecks, and job failures in Glue and PySpark.
  • Maintain and document ETL workflows, data flow diagrams, and technical design documents.

Required Skills & Qualifications

  • Bachelor's degree in Computer Science, Information Systems, Data Engineering, or a related field.
  • 9+ years of experience in data engineering or ETL development.
  • Hands-on experience with AWS Glue (ETL jobs, Crawlers, Catalogs, Triggers).
  • Strong proficiency in Python and PySpark for data transformation and automation.
  • Solid knowledge of SQL Server, including T-SQL, stored procedures, performance tuning, and data modeling.
  • Experience with AWS services such as S3, Lambda, Athena, Redshift, Step Functions, and CloudWatch.
  • Familiarity with data lake and data warehouse architectures.
  • Strong problem-solving, debugging, and performance optimization skills.

Preferred Qualifications

  • AWS Certified Data Analytics Specialty or AWS Certified Solutions Architect.
  • Experience with CI/CD pipelines (CodePipeline, Jenkins, GitHub Actions).
  • Knowledge of data governance, security, and compliance best practices.
  • Experience integrating with APIs and third-party data sources.
  • Familiarity with infrastructure-as-code tools (CloudFormation, Terraform).

Soft Skills

  • Excellent communication and documentation skills.
  • Strong analytical thinking and attention to detail.
  • Ability to work independently and collaboratively in a team-oriented environment.


About Optimus Labs USA