AWS Senior Tech Lead Data Engineer

Overview

Remote
$70+
Contract - W2
Contract - 12 Month(s)

Skills

Python
Pandas
PySpark
Terraform
AWS Glue
Lambda
S3
Redshift
EMR

Job Details

  • Design and implement scalable data pipelines using AWS services such as Glue, Redshift, S3, Lambda, EMR, and Athena.
  • Develop and maintain ELT processes to transform and integrate data from various sources.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.
  • Optimize and tune performance of data pipelines and queries.
  • Ensure data quality and integrity through robust testing and validation processes.
  • Implement data security and compliance best practices.
  • Monitor and troubleshoot data pipeline issues and ensure timely resolution.
  • Stay updated with the latest developments in AWS data engineering technologies and best practices.

Required Skills and Qualifications:

  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • 5+ years of experience in data engineering with a focus on AWS technologies.
  • Expertise in AWS services such as Glue, Redshift, S3, Lambda, EMR, and Athena.
  • Strong programming skills in Python, Pandas, and SQL.
  • Experience with database systems such as Amazon RDS, PostgreSQL, and SAP HANA.
  • Knowledge of data modeling, ETL processes, and data warehousing concepts.
  • Familiarity with CI/CD pipelines and version control systems (e.g., Git).
  • Experience writing infrastructure as code using Terraform.
  • Familiarity with Glue Notebooks, SageMaker Notebooks, Textract, Rekognition, Bedrock, and other GenAI/LLM tools.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.

Nice to Have:

  • AWS Certification (e.g., AWS Certified Data Analytics, AWS Certified Solutions Architect).
  • Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
  • Knowledge of AWS SageMaker and its integration within data pipelines.
  • Knowledge of big data technologies such as Apache Spark, Hadoop, or Kafka.
  • Experience with data visualization tools like Tableau, Power BI, or Amazon QuickSight.
  • Familiarity with Azure DevOps and Azure Pipelines.
  • Familiarity with data catalog and governance tools such as AWS DQ and Collibra, and profiling tools such as AWS Glue DataBrew.
Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.