Data Engineer

Overview

On Site
Depends on Experience
Accepts corp to corp applications
Contract - W2
Contract - Independent
Contract - 12 Month(s)

Skills

big data solution
SNS/SQS
Spark
Scala
AWS Glue
Lambda
ETL/ELT process
Python

Job Details

Job Description: This role involves designing, building, and maintaining scalable data pipelines, architectures, and solutions on the Amazon Web Services (AWS) cloud platform. Key responsibilities include data integration, building ETL processes with services such as AWS Glue and Redshift, data modeling, and ensuring data quality and security. The role requires proficiency in programming languages such as Python, along with skills in SQL, Apache Spark, and serverless architectures.
Key responsibilities:
AWS data engineers design and build data pipelines and develop ETL/ELT processes using tools such as AWS Glue, EMR, and Redshift to prepare data for analytics. They integrate data from various sources and create and maintain data models optimized for storage and analysis. Ensuring data quality, security, and compliance by implementing validation checks and security best practices is also crucial. The role further involves monitoring and tuning data processing jobs and databases for performance, collaborating with stakeholders to understand data requirements, and maintaining and operationalizing existing data solutions.
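To illustrate the extract-transform-load pattern central to this role, here is a minimal sketch in plain Python. It is not the team's actual pipeline: in practice the extract and load ends would be AWS services (S3, Glue, Redshift) rather than in-memory strings, and the field names (`order_id`, `amount`) are hypothetical.

```python
import csv
import io
import json

def run_etl(csv_text: str) -> str:
    # Extract: parse raw CSV rows into dictionaries.
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # Transform: cast types and drop invalid records (a basic data-quality check).
    cleaned = [
        {"order_id": r["order_id"], "amount": float(r["amount"])}
        for r in rows
        if r["amount"] and float(r["amount"]) > 0
    ]
    # Load: serialize to JSON Lines for a downstream analytics store.
    return "\n".join(json.dumps(rec) for rec in cleaned)

raw = "order_id,amount\nA1,10.5\nA2,-3\nA3,7"
print(run_etl(raw))
```

The same three stages map directly onto a Glue job: a source crawler feeds the extract, transforms run on Spark, and the load writes to Redshift or back to S3.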
Minimum Skills Required: At least 6 years of relevant experience in the design, development, and end-to-end architecture of enterprise-wide big data solutions.
Experience designing and developing big data solutions using Spark, Scala, AWS Glue, Lambda, SNS/SQS, and CloudWatch is a must.
Strong application development experience in Scala/Python.
Strong Database SQL experience, preferably Redshift.
Experience in Snowflake is an added advantage.
Experience with ETL/ELT process and frameworks is a must.
Strong background in AWS cloud services such as Lambda, Glue, S3, EMR, SNS, SQS, CloudWatch, and Redshift.
Expertise in SQL and experience with relational databases such as Oracle, MySQL, and PostgreSQL.

Proficient in Python programming for data engineering tasks and automation.
Experience with shell scripting in Linux/Unix environments.
Experience with big data technologies such as Hadoop and Spark.
Financial services industry experience required.
Nice to have: knowledge of machine learning models, regression, and validation.
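As a sketch of the SQL work the requirements above describe, the snippet below runs a simple aggregation against an in-memory SQLite database. SQLite is used here only so the example is self-contained; the posting itself targets Redshift, Oracle, MySQL, and PostgreSQL, and the `orders` table and its columns are hypothetical.

```python
import sqlite3

# Build a throwaway in-memory database with a small fact table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("east", 10.0), ("east", 5.0), ("west", 8.0)],
)

# A typical analytics-style aggregation: total amount per region.
totals = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(totals)  # -> [('east', 15.0), ('west', 8.0)]
conn.close()
```

On Redshift the same query shape applies, with performance additionally shaped by distribution and sort keys on the table.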

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.