Overview
Remote
Depends on Experience
Accepts corp to corp applications
Contract - W2
Contract - 12 Month(s)
Skills
Data Loading
Data Processing
Business Intelligence
ELT
Snowflake Schema
Tableau
Data Integration
Extract
Transform
Load
API
OAuth
Query Optimization
Development Testing
Health Care
MongoDB
Jupyter
Amazon S3
Anaconda
Job Details
- Design data pipelines for API, streaming, and batch processing to facilitate data loads into the Snowflake data warehouse.
- Collaborate with other engineering and DevOps team members to implement, test, deploy, and operate data pipelines and ETL solutions.
- Develop scripts to extract, load, and transform data, as well as other utility functions
- Optimize data pipelines, ETL processes, and data integrations for large-scale data analytics use cases
- Build necessary components to ensure data quality, monitoring, alerting, integrity, and governance standards are maintained in data processing workflows
- Able to navigate ambiguity and thrive in a fast-paced environment; takes initiative and consistently delivers results with minimal supervision.
- 7+ years of experience in building and maintaining data pipelines and ETL/ELT processes in data-centric organizations
- Strong coding skills in Python. Familiar with Python libraries related to data engineering and cloud services, including pandas, boto, etc.
- At least 3 years of experience with AWS S3, SQS, Kinesis, Lambda, AWS DMS, Glue/EMR, AWS Batch or similar services.
- Hands-on experience building streaming and batch big data pipelines
- Must have knowledge of building infrastructure in the AWS cloud using CloudFormation or Terraform
- Experience with Anaconda and Jupyter Notebook
- 3+ years of working experience with the Snowflake cloud data warehouse, including Snowflake data shares, Snowpipe, SnowSQL, Tasks, etc.
- Must have working knowledge of various databases, both SQL and NoSQL
- Must have working knowledge of various file formats such as CSV, JSON, Avro, and Parquet.
- Hands-on experience with cloud platforms such as AWS and Google Cloud.
- Experience working with agile development methodologies.
- Experienced in CI/CD and release processes; proficient in Git or other source control management systems to streamline development and deployment workflows
Other Desired Skills:
- Minimum of 5 years designing and implementing operational, production-grade, large-scale data pipelines, ETL/ELT, and data integration solutions.
- Exposure to multi-tenant/multi-customer environments is a big plus.
- Hands-on experience with productionized data ingestion and processing pipelines
- Strong understanding of Snowflake internals and its integration with other data processing and reporting technologies.
- Experience working with structured, semi-structured, and unstructured data.
- Familiarity with MongoDB or similar NoSQL database systems.