Job Details
Position: AWS Data Engineer
Location: Toronto, Canada (Onsite/Hybrid)
Duration: Long Term
Job Responsibilities:
AWS Data Engineer
Design and build scalable data pipelines using AWS services such as AWS Glue, Amazon Redshift, SQS/SNS, CloudWatch, Step Functions, and CDK (or Terraform).
Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
Create and manage applications using Python, Pyspark, SQL, Databricks, and various AWS technologies.
Automate repetitive tasks and build reusable frameworks to improve efficiency.
Contribute to architecture design and review, provide team leadership and project management, and participate in code review and sign-off.
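The pipeline responsibilities above can be sketched as a minimal ETL job in plain Python. This is an illustrative sketch only: the record layout, transform rules, and in-memory "warehouse" are hypothetical stand-ins for what would be an S3 source and a Redshift target in a real Glue job.

```python
from datetime import date

# Hypothetical raw records, e.g. as extracted from an S3 landing zone.
RAW_ORDERS = [
    {"order_id": "1001", "amount": "250.00", "order_date": "2024-05-01"},
    {"order_id": "1002", "amount": "bad",    "order_date": "2024-05-02"},
    {"order_id": "1003", "amount": "75.50",  "order_date": "2024-05-02"},
]

def transform(records):
    """Cast types and drop rows that fail validation (a basic data-quality gate)."""
    clean = []
    for row in records:
        try:
            clean.append({
                "order_id": int(row["order_id"]),
                "amount": float(row["amount"]),
                "order_date": date.fromisoformat(row["order_date"]),
            })
        except (ValueError, KeyError):
            # In a real Glue job, failed rows would be written to a quarantine path.
            continue
    return clean

def load(records, warehouse):
    """Append cleaned rows to an in-memory stand-in for a warehouse table."""
    warehouse.extend(records)
    return len(records)

warehouse = []
loaded = load(transform(RAW_ORDERS), warehouse)
print(loaded)  # the row with the invalid "bad" amount is dropped -> 2
```

Separating extract, transform, and load into small pure functions is what makes such a pipeline testable and reusable across jobs.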
Primary Skills:
SQL - Expert (Must have)
AWS (Redshift, Lambda, Glue, SQS, SNS, CloudWatch, Step Functions, CDK or Terraform) - Expert (Must have)
PySpark - Intermediate/Expert
Python - Intermediate (Must have, or equivalent PySpark knowledge)
Code & Architecture Review- Ensuring quality and scalability of code/data pipelines
Team Leadership Skills - Leading sprint planning and retrospectives; assigning and reviewing tasks
Project Management Skills - Agile/Scrum methodology, Risk & deadline management
Mentoring & Coaching - Upskilling team on AWS best practices and data architecture
Stakeholder Communication - Translating technical work into business value; reporting to non-technical managers
Exposure to: cost optimization strategies in AWS; data lake vs. data warehouse architecture; designing scalable ETL pipelines; real-time data ingestion; data partitioning and performance optimization; security and compliance; data quality
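Data partitioning, one of the exposure areas above, is commonly done with Hive-style partition keys in an S3 data lake so that query engines such as Athena or Redshift Spectrum can prune partitions and scan only the dates a query touches. A minimal sketch follows; the bucket name and prefix are hypothetical:

```python
from collections import defaultdict

def partition_key(record, prefix="s3://example-lake/orders"):
    """Build a Hive-style partition path (dt=YYYY-MM-DD) from a record.

    Partition pruning lets the engine skip every path whose dt= value
    falls outside a query's date filter, cutting both cost and latency.
    """
    return f"{prefix}/dt={record['order_date']}/"

def partition_records(records):
    """Group records by their partition path before writing them out."""
    groups = defaultdict(list)
    for rec in records:
        groups[partition_key(rec)].append(rec)
    return dict(groups)

records = [
    {"order_id": 1, "order_date": "2024-05-01"},
    {"order_id": 2, "order_date": "2024-05-02"},
    {"order_id": 3, "order_date": "2024-05-02"},
]
parts = partition_records(records)
print(sorted(parts))  # two partition paths: dt=2024-05-01 and dt=2024-05-02
```

Choosing the partition column (here, order date) to match the most common query filter is the key performance decision; over-partitioning on a high-cardinality column creates many small files and slows scans instead.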
Secondary Skills:
AWS Airflow (MWAA) - Intermediate (Nice to have)
ETL Processes