Lead Data Engineer (Snowflake, Airflow, Python, PySpark, S3, EMR, Redshift) *** Direct end client *** Remote during COVID

API, Amazon DynamoDB, Amazon EC2, Amazon RDS, Amazon Redshift, Amazon S3, Amazon Web Services, Analytics, Apache Airflow, Apache Hadoop, Apache Kafka, Apache Spark, Automation, Big data, Business requirements, Cloud, Cross-functional, Data centers, Data engineering, Data warehouse, Database, EMR, ETL, Extraction, Infrastructure, Java, Kubernetes, NoSQL, OOP, PostgreSQL, Python, RDBMS, Root cause analysis, SQL, Scala, Scripting, Snowflake, Talend, Transformation, Workflow management, data engineer, lead data engineer, senior data engineer
Contract W2, Contract Independent, Contract Corp-To-Corp, 12 Months
Depends on Experience

Job Description

Responsibilities for Data Engineer

       Create and maintain data pipelines for automating data ingestion from multiple data sources.

       Build the data platform capabilities required for optimal extraction, transformation, and loading of data from a wide variety of data sources.

       Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

       Work with data and analytics experts to strive for greater functionality in our data systems.

       Assemble large, complex data sets that meet functional / non-functional business requirements.

       Identify, design, and implement solutions for automating manual processes and optimizing data delivery.

       Work with stakeholders, including the Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.

Qualifications for Data Engineer

       Experience with data pipeline and ETL tools such as Apache Airflow, AWS Data Pipeline, AWS Glue, and Talend.

       Experience with data warehouse solutions such as Redshift and Snowflake.

       Experience with AWS cloud services: S3, Athena, RDS, EC2, EMR, and Lambda.

       Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc.

       Experience with big data tools: Hadoop, Spark, Kafka, etc.

       Experience with relational SQL and NoSQL databases, including Postgres and DynamoDB.

       Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.

       Experience with Kubernetes, EKS, and API development.

       Advanced working SQL knowledge and experience working with relational and NoSQL databases.

       Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.

       Experience performing root cause analysis on internal and external data ingestion issues.

       Strong analytic skills related to working with structured, semi-structured, and unstructured datasets.

       A successful history of manipulating, processing, and extracting value from large, disconnected datasets.

       Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.

       Experience supporting and working with cross-functional teams in a dynamic environment. 

Dice Id: 10126850
Position Id: LEAD-DATA-ENG
Originally Posted: 2 months ago