About the Role
We are seeking a highly skilled Data Engineer with extensive experience in Fivetran to
design, implement, and maintain robust data pipelines from the ground up. This role is
critical to ensuring seamless data integration and enabling scalable analytics across the
organization.
Key Responsibilities
• Design and Build Data Pipelines: Develop and configure end-to-end data pipelines using Fivetran, ensuring efficient extraction, loading, and transformation (ETL/ELT) processes.
• Integration Setup: Connect multiple data sources (APIs, SaaS platforms, databases) to Fivetran and manage schema mapping and transformations.
• Data Modeling: Collaborate with analytics and BI teams to design optimized data models for reporting and analysis.
• Performance Optimization: Monitor pipeline performance, troubleshoot issues, and implement improvements for scalability and reliability.
• Automation & Documentation: Automate workflows and maintain clear documentation for all data processes.
• Security & Compliance: Ensure data integrity, security, and compliance with organizational and regulatory standards.
Required Qualifications
• Technical Expertise:
  o Proven experience with Fivetran (pipeline setup, connectors, transformations).
  o Strong knowledge of SQL and relational databases.
  o Familiarity with cloud data warehouses (Snowflake, BigQuery, Redshift).
• Data Engineering Skills:
  o Experience building ETL/ELT pipelines from scratch.
  o Understanding of data modeling and schema design.
• Additional Skills:
  o Proficiency in Python or other scripting languages for data manipulation.
  o Knowledge of API integrations and RESTful services.
Preferred Qualifications
• Experience with dbt for transformations.
• Background in data governance and observability tools.
• Exposure to modern data stack technologies.
Soft Skills
• Strong problem-solving and analytical skills.
• Excellent communication and collaboration abilities.
• Ability to work in a fast-paced, dynamic environment.