Overview
On Site / Hybrid
$60 - $80
Contract - W2, 24 Month(s)
Able to Provide Sponsorship
Skills
Snowflake
AWS Cloud
Python
data lakes
dbt
Job Details
Job Description:
We are seeking a Senior Data Engineer with deep expertise in Snowflake, AWS Cloud, and Python to architect and implement large-scale, high-performance data solutions. The ideal candidate will be responsible for building robust data pipelines, optimizing Snowflake environments, integrating diverse data sources, and enabling advanced analytics. This role requires someone who can work closely with data scientists, analysts, and business teams to ensure data reliability, scalability, and performance.
Key Responsibilities:
- End-to-End Data Pipeline Development: Architect, build, and maintain scalable ETL/ELT pipelines using Airflow and Python, processing data across AWS (S3, Redshift, Lambda, Glue, etc.).
- Snowflake Architecture & Optimization:
- Design and manage Snowflake warehouses, databases, and schemas.
- Implement performance tuning, clustering, and query optimization for cost efficiency and speed.
- Develop and optimize data sharing and partitioning strategies.
- Data Modeling & Transformation:
- Build and maintain data lakes and data marts to support BI, analytics, and machine learning workloads.
- Leverage dbt (data build tool) or an equivalent framework for modular transformations where applicable.
- Data Integration:
- Integrate structured and unstructured data from on-premise and cloud sources into Snowflake.
- Work with streaming and batch ingestion frameworks.
- Automation & Infrastructure:
- Automate data workflows using Airflow DAGs, Python scripts, and AWS Lambda functions (see the illustrative sketch after this list).
- Implement CI/CD practices for data engineering pipelines.
- Collaborate with cross-functional teams to support analytics, AI/ML models, and reporting solutions.
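To give a concrete sense of the orchestration work described above, here is a minimal, illustrative Airflow sketch of an S3-to-Snowflake daily load. It is not taken from this role's codebase: the DAG ID, bucket path, and table name are hypothetical placeholders, and the callables only stand in for real boto3 and Snowflake logic.

```python
# Illustrative sketch only; all names (bucket, table, DAG ID) are hypothetical.
# Assumes Airflow 2.4+ (use schedule_interval instead of schedule on older versions).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_from_s3(**context):
    # Placeholder: in practice this would use boto3 to locate the day's files
    # under an S3 prefix and return the staging path via XCom.
    return "s3://raw-events/2024-01-01/"


def load_to_snowflake(**context):
    # Placeholder: in practice this would run a COPY INTO through the
    # Snowflake connector (or a SnowflakeOperator) against a staging table.
    staged_path = context["ti"].xcom_pull(task_ids="extract_from_s3")
    print(f"COPY INTO ANALYTICS.EVENTS FROM '{staged_path}'")


with DAG(
    dag_id="s3_to_snowflake_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_from_s3", python_callable=extract_from_s3)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)

    extract >> load
```

In a production pipeline the same pattern typically grows to include data-quality checks, retries, and alerting, but the extract-then-load dependency chain shown here is the basic shape of the DAGs this role maintains.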
Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer.
- Expert-level knowledge of Snowflake, including query optimization, performance tuning, security, and architecture best practices.
- Strong experience with AWS cloud services:
- S3, Glue, Redshift, Lambda, CloudWatch, and IAM policies.
- Experience in building data lakes and ETL pipelines on AWS.
- Advanced Python programming skills for automation, data processing, and pipeline orchestration (a brief illustrative sketch follows this list).
- Hands-on experience with Airflow (or similar orchestration tools).
- Strong SQL skills and familiarity with data warehousing concepts.
- Experience in CI/CD practices and DevOps for Data Engineering (using Git, Jenkins, or equivalent).
- Excellent communication skills and the ability to work with business stakeholders.
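As a hedged illustration of the Python-plus-Snowflake skill set listed above, the sketch below loads data from an external stage into a table using the Snowflake Python connector. The account, warehouse, database, stage, and table names are placeholders, not details of this engagement, and the script assumes the stage already exists.

```python
# Illustrative sketch only; connection parameters and object names are placeholders.
import os

import snowflake.connector


def copy_stage_to_table() -> None:
    """Run a simple COPY INTO from an external stage into a target table."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        # Assumes an external stage named RAW_EVENTS_STAGE (e.g. over S3) with a
        # default file format already defined in Snowflake.
        cur.execute("COPY INTO RAW.EVENTS FROM @RAW_EVENTS_STAGE")
        print("COPY INTO completed")
    finally:
        conn.close()


if __name__ == "__main__":
    copy_stage_to_table()
```

Scripts like this are typically wrapped in an orchestration task (Airflow, Lambda) rather than run standalone, with credentials injected from a secrets manager instead of environment variables.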
Nice-to-Have:
- Experience with dbt (data build tool) or other transformation frameworks.
- Knowledge of real-time data processing (Kafka, Kinesis).
- Familiarity with containerized workloads (Docker, Kubernetes).