Job Details
Role: Data Engineer / Modeler
Location: Remote
Duration: Long Term
Own the end-to-end data flow from source systems into a semantic data model in Snowflake. Analyze current lineage and workflows, automate robust ELT/ETL pipelines, and implement monitoring to ensure accuracy, timeliness, and reliability.
Key Responsibilities
· Map current-state lineage from sources through staging, transformation, and the semantic model in Snowflake; document dependencies, SLAs, and data contracts.
· Design and automate ELT/ETL pipelines (e.g., SQL/Python, dbt, Snowflake Tasks/Streams) to standardize ingestion, transformation, and loading; see the pipeline sketch after this list.
· Implement data quality & observability (tests, thresholds, freshness, volume, schema-change detection) with alerting and runbooks; an example freshness check follows this list.
· Establish monitoring dashboards and on-call procedures; investigate and resolve incidents; drive root-cause analysis and prevention.
· Continuously optimize cost/performance (warehouse sizing, pruning, clustering, caching) and security (RBAC, masking, PII handling); see the masking-policy sketch below.
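
To illustrate the pipeline-automation responsibility, the following is a minimal sketch of a Snowflake Stream plus a scheduled Task performing an incremental load. All table, warehouse, and column names (raw.orders, stg.orders, transform_wh, order_id, amount, updated_at) are hypothetical placeholders; a production pipeline would typically be generated and tested through dbt or an orchestration framework rather than written by hand.

  -- Capture row-level changes on a hypothetical raw landing table
  CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw.orders;

  -- Scheduled task that merges captured changes into staging every 15 minutes,
  -- but only when the stream actually contains new data
  CREATE OR REPLACE TASK load_stg_orders
    WAREHOUSE = transform_wh
    SCHEDULE  = '15 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
  AS
    MERGE INTO stg.orders AS tgt
    USING raw_orders_stream AS src
      ON tgt.order_id = src.order_id
    WHEN MATCHED THEN UPDATE SET tgt.amount = src.amount, tgt.updated_at = src.updated_at
    WHEN NOT MATCHED THEN INSERT (order_id, amount, updated_at)
      VALUES (src.order_id, src.amount, src.updated_at);

  -- Tasks are created suspended; resume to start the schedule
  ALTER TASK load_stg_orders RESUME;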
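
As one example of the freshness monitoring called out above, a simple SQL check of this kind could feed an alert; the table name, timestamp column, and the two-hour SLA window are assumptions for illustration only.

  -- Flag the semantic-layer fact table as stale if no rows have loaded
  -- within the assumed 2-hour SLA window
  SELECT
    MAX(loaded_at) AS last_load,
    CASE
      WHEN MAX(loaded_at) < DATEADD('hour', -2, CURRENT_TIMESTAMP()) THEN 'STALE'
      ELSE 'FRESH'
    END AS freshness_status
  FROM analytics.fct_orders;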
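
For the security side (RBAC, masking, PII handling), the sketch below shows a column-level masking policy and least-privilege read grants; the role, database, schema, and column names are hypothetical.

  -- Mask email addresses for every role except an assumed PII_READER role
  CREATE OR REPLACE MASKING POLICY pii_email_mask AS (val STRING) RETURNS STRING ->
    CASE WHEN CURRENT_ROLE() = 'PII_READER' THEN val ELSE '***MASKED***' END;

  ALTER TABLE analytics.dim_customer
    MODIFY COLUMN email SET MASKING POLICY pii_email_mask;

  -- Least-privilege read access for BI consumers (assumed names)
  GRANT USAGE ON DATABASE analytics_db TO ROLE bi_reader;
  GRANT USAGE ON SCHEMA analytics_db.analytics TO ROLE bi_reader;
  GRANT SELECT ON ALL TABLES IN SCHEMA analytics_db.analytics TO ROLE bi_reader;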
Qualifications
· 4–7+ years in data engineering with Snowflake (warehouses, roles, Tasks/Streams, shares, performance tuning).
· Expert SQL and strong Python; experience with dbt or equivalent transformation frameworks.
· Solid understanding of semantic modeling (star/snowflake schemas, data products, canonical layers) and data governance best practices.
· Proven ownership of production pipelines: SLAs, incident response, postmortems, and stakeholder communication.
Success Metrics (KPIs)
· Pipeline SLA adherence & freshness (% on-time loads).
· Data quality pass rate (test pass/fail counts, MTTR/MTBF for failures).
· Cost/performance improvements (query latency, $/TB).
· Documentation & lineage coverage (% assets with contracts & lineage).
· Reduction in incidents/regressions and faster time-to-recovery.