Responsibilities for Data Engineer
Must have strong Oracle experience, including loading data from Oracle to Snowflake and Redshift using Python and Airflow (see the sketch after this list)
Expertise in developing complex Airflow data jobs and deploying Airflow code to production (Amazon Managed Workflows for Apache Airflow)
Create and maintain data pipelines that automate data ingestion from multiple data sources
Build the data platform capabilities required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
Work with data and analytics experts to strive for greater functionality in our data systems.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement solutions for automating manual processes and optimizing data delivery
Work with stakeholders including the Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
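
To illustrate the Oracle-to-Snowflake responsibility above, here is a minimal Airflow DAG sketch. It assumes a recent Airflow 2.x with the apache-airflow-providers-oracle and apache-airflow-providers-snowflake packages installed; the connection IDs, table, and column names are hypothetical placeholders, not this team's actual pipeline.

from datetime import datetime

from airflow.decorators import dag, task
from airflow.providers.oracle.hooks.oracle import OracleHook
from airflow.providers.snowflake.hooks.snowflake import SnowflakeHook


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def oracle_to_snowflake_orders():
    @task
    def transfer() -> int:
        # Extract the last day's rows from the (hypothetical) Oracle source table.
        src = OracleHook(oracle_conn_id="oracle_src")
        rows = src.get_records(
            "SELECT order_id, amount, updated_at FROM orders "
            "WHERE updated_at >= TRUNC(SYSDATE) - 1"
        )
        # Load into a (hypothetical) Snowflake staging table. At real volumes
        # you would typically stage files in S3 and run COPY INTO instead of
        # row-by-row inserts.
        dst = SnowflakeHook(snowflake_conn_id="snowflake_dw")
        dst.insert_rows(
            table="staging.orders",
            rows=rows,
            target_fields=["order_id", "amount", "updated_at"],
        )
        return len(rows)

    transfer()


oracle_to_snowflake_orders()

On Amazon Managed Workflows for Apache Airflow, deploying a DAG like this to production is typically a matter of uploading the file to the environment's S3 DAGs folder.
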
Qualifications for Data Engineer
Experience with data pipeline and ETL tools such as Apache Airflow, AWS Data Pipeline, AWS Glue, and Talend
Experience with data warehouse solutions: Redshift, Snowflake
Experience with AWS cloud services: S3, Athena, RDS, EC2, EMR, Lambda
Experience with object-oriented and functional scripting languages: Python, Java, Scala, etc.
Experience with big data tools: Hadoop, Spark, Kafka, etc.
Experience with relational SQL and NoSQL databases, including Postgres and DynamoDB.
Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
Experience with Kubernetes, EKS, and API development
Advanced working SQL knowledge and experience working with relational and NoSQL databases.
Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.
Experience performing root cause analysis on internal and external data ingestion issues.
Strong analytic skills related to working with structured, semi-structured, and unstructured datasets.
A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
Experience supporting and working with cross-functional teams in a dynamic environment.