Responsibilities:
Architect and implement scalable data pipelines using dbt-core, Python, and SQL to support analytics, reporting, and data science initiatives (a brief dbt sketch follows this list).
Design and optimize data models in Snowflake to support efficient querying and storage.
Lead the development and maintenance of our data warehouse, ensuring data quality, governance, and performance.
Collaborate with cross-functional teams including data analysts, data architects, data scientists, and business stakeholders to understand data needs and deliver robust solutions.
Establish and enforce best practices for version control (Git), CI/CD pipelines, and data pipeline monitoring.
Mentor and guide junior data engineers, fostering a culture of technical excellence and continuous improvement.
Evaluate and recommend new tools and technologies to enhance the data platform.
Provide ongoing support for existing ELT/ETL processes and procedures.
Other duties as assigned, including but not limited to possible reallocation of effort to other organizations per business need and management request.
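To make the dbt-core responsibility above concrete, here is a minimal sketch of driving dbt from Python via dbt-core's programmatic entry point (assuming dbt-core 1.5 or later); the tag:daily selector is a hypothetical placeholder, not something specified in this posting.

    from dbt.cli.main import dbtRunner, dbtRunnerResult

    # Programmatic invocation of dbt-core (1.5+);
    # equivalent to running `dbt run --select tag:daily` from the CLI.
    dbt = dbtRunner()
    res: dbtRunnerResult = dbt.invoke(["run", "--select", "tag:daily"])

    # Surface failures so a scheduler or CI job can catch them.
    if not res.success:
        raise RuntimeError(f"dbt run failed: {res.exception}")

    # On success, report per-model status for pipeline monitoring.
    for r in res.result.results:
        print(r.node.name, r.status)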
Required Knowledge, Skills and Experience
Bachelor's degree in an IT-related field or relevant work experience
8+ years of documented experience with ETL tools (e.g., Informatica, IBM DataStage, SSIS)
Expert-level proficiency in SQL and Python for data transformation and automation.
Experience with dbt-core for data modeling and transformation.
Strong hands-on experience with cloud platforms (Microsoft Azure) and cloud data platforms (Snowflake); a brief Snowflake sketch follows this list.
Proficiency with Git and collaborative development workflows. Familiarity with Microsoft VS Code or similar IDEs. Knowledge of Azure DevOps or GitLab DevOps and job-scheduling tools.
Solid understanding of modern data warehousing architecture, dimensional modeling, ELT/ETL frameworks, and data modeling techniques.
Excellent communication skills and the ability to translate complex technical concepts to non-technical stakeholders.
Proven expertise in designing and implementing batch and streaming data pipelines to support near real-time and large-scale data processing needs.
Detail-oriented
Excellent communication, customer service, and problem-solving skills
Demonstrates resilience and composure in high-pressure or challenging situations, maintaining a positive and solution-focused approach.
Continually strives to gather current knowledge and information relevant to business needs in order to achieve results
Develops, designs, or creates new applications, ideas, relationships, systems, or products, including artistic contributions
Collaborates effectively by actively exchanging and building on ideas to develop innovative and practical solutions.
Able to work independently and maintain a positive attitude
Able to initiate action and build relationships within both the IT department and the business
Motivate and mentor team members to continue developing their skills
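As a concrete illustration of the Snowflake and ELT items above, here is a minimal sketch of a batch upsert using the snowflake-connector-python package; all connection values and the dim_customer/stg_customer tables are hypothetical placeholders.

    import snowflake.connector

    # All connection values below are hypothetical placeholders.
    conn = snowflake.connector.connect(
        account="xy12345",
        user="ETL_SVC",
        password="...",
        warehouse="TRANSFORM_WH",
        database="ANALYTICS",
        schema="STAGING",
    )
    try:
        cur = conn.cursor()
        # Idempotent batch upsert from a staging table -- a common ELT pattern
        # for keeping a dimension table current without duplicating rows.
        cur.execute("""
            MERGE INTO dim_customer AS tgt
            USING stg_customer AS src
              ON tgt.customer_id = src.customer_id
            WHEN MATCHED THEN UPDATE SET tgt.email = src.email
            WHEN NOT MATCHED THEN INSERT (customer_id, email)
              VALUES (src.customer_id, src.email)
        """)
        print("rows affected:", cur.rowcount)
    finally:
        conn.close()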
Preferred Qualifications
Advanced degree in an IT-related field
Experience working in a cloud-native environment (AWS, Azure, or Google Cloud Platform).
Familiarity with data governance, security, and compliance standards.
Prior experience with Apache Kafka (Confluent).
Artificial Intelligence (AI) experience is a plus.
Hands-on experience with orchestration tools (e.g., Airflow, Prefect) is a plus (a minimal DAG sketch follows this list).
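To illustrate the orchestration item above, here is a minimal Airflow DAG sketch (assuming Airflow 2.x); the DAG id, schedule, and dbt command are hypothetical placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # A minimal daily pipeline: one task that shells out to dbt.
    # dag_id, schedule, and the selector are hypothetical placeholders.
    with DAG(
        dag_id="daily_dbt_run",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        BashOperator(
            task_id="dbt_run",
            bash_command="dbt run --select tag:daily",
        )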