# W2 Requirement
Job Title: Snowflake Data Engineer
Location: Indianapolis, IN (5 days onsite)
Duration: 12+ month contract with possible extension
Employment: W2
Job Description:
The Data Engineer leads the end-to-end technical design and implementation of modern, AI-enabled data solutions, independently driving architecture decisions, building and optimizing workflows, and maintaining production systems that enable data-driven decisions across the enterprise. Partnering with the platform team to influence the technology stack and deliver enhancements, they operate with ownership, setting standards and automating best practices.
This position reports to the AI & Data Infrastructure Lead and is part of the Digital Products Development Group.
In this role, you will have the opportunity to:
Lead the execution of complex data solutions for high-impact projects, driving technical decisions and ensuring best practices across critical initiatives.
Design and implement data pipelines and workflows using modern tools and frameworks, including Snowflake, Apache Airflow, DBT, Matillion, etc.
Build and maintain cloud-native data solutions leveraging Azure and AWS services to ensure scalability and reliability.
Contribute to platform reliability, cost efficiency, and security by actively engaging in design discussions, executing proof-of-concepts (POCs), automating processes, and mentoring team members to ensure adherence to best practices.
Collaborate with data architects, analysts, data scientists, ML engineers, and business stakeholders to deliver high-quality, governed data and ML solutions that support data-driven decisions.
The essential requirements of the job include:
Collaboration: Strong communication skills to work effectively with data architects, analysts, and cross-functional teams.
Problem Solving: Ability to troubleshoot complex data pipeline issues and optimize workflows for scalability and reliability.
Cloud Expertise: Hands-on experience with major cloud platforms (Azure preferred; AWS or Google Cloud Platform is a plus), including data services like Snowflake.
Data Modeling & Optimization: Strong knowledge of an RDBMS such as Snowflake or similar, with expertise in designing efficient schemas and optimizing queries for performance and cost.
Data Pipeline Orchestration: Proficiency with workflow tools such as Apache Airflow for scheduling, monitoring, and managing ETL processes.
Data Transformation Frameworks: Strong knowledge of data processing tools like DBT and other transformation tools for version-controlled, modular data modeling.
ETL/ELT Tools: Experience with tools like Matillion for building scalable data integration pipelines.
It would be a plus if you also have prior experience in:
Visualization: Experience building reports and dashboards using tools like Power BI or similar.
Data Science & AI: Knowledge of Data Science and AI concepts, with experience supporting ML workflows through feature datasets and containerized deployments using Docker, Fargate, or similar tools.