Job Details
Title: Senior Data Engineer
Location: West Hollywood, CA (MUST BE ONSITE - 5 days a week)
Position: Full-time
The interview process is an initial video interview (one round) followed by a final onsite interview. This opportunity is onsite in Los Angeles. They are unable to sponsor or transfer sponsorship at this time. Because this is a full-time opportunity, the candidate must be willing to relocate permanently to California (Los Angeles area), rather than relocate temporarily for a contract. Candidates willing to relocate will be considered; however, local candidates are preferred given the interview schedule and the ability to start promptly upon approval without having to move.
Senior Data Engineer - Snowflake / ETL (Onsite)
Summary
We are hiring a Senior Data Engineer to serve as a core member of the Platform team. This is a high-impact role responsible for advancing our foundational data infrastructure.
Your primary mission will be to build key components of our Policy Journal - the central source of truth for all policy, commission, and client accounting data. You'll work closely with the Lead Data Engineer and business stakeholders to translate complex requirements into scalable data models and reliable pipelines that power analytics and operational decision-making for agents, managers, and leadership.
This role blends greenfield engineering, strategic modernization, and a strong focus on delivering trusted, high-quality data products.
Key Responsibilities
Build the Policy Journal - Design and implement the master data architecture unifying policy, commission, and accounting data from sources like IVANS and Applied EPIC to create the platform's "gold record."
Ensure Data Reliability - Define and implement data quality checks, monitoring, and alerting to guarantee accuracy, consistency, and timeliness across pipelines - while contributing to best practices in governance.
Build the Analytics Foundation - Enhance and scale our analytics stack (Snowflake, dbt, Airflow), transforming raw data into clean, performant dimensional models for BI and operational insights.
Modernize Legacy ETL - Refactor our existing Java + SQL (PostgreSQL) ETL system - diagnose duplication and performance issues, rewrite critical components in Python, and migrate orchestration to Airflow.
Implement Data Quality Frameworks - Develop automated testing and validation frameworks aligned with our QA strategy to ensure accuracy, completeness, and integrity across pipelines.
Collaborate on Architecture & Design - Partner with product and business stakeholders to deeply understand requirements and design scalable, maintainable data solutions.
Ideal Experience
5+ years of experience building and operating production-grade data pipelines.
Expert-level proficiency in Python and SQL.
Hands-on experience with the modern data stack - Snowflake/Redshift, Airflow, dbt, etc.
Strong understanding of AWS data services (S3, Glue, Lambda, RDS).
Experience working with insurance or insurtech data (policies, commissions, claims, etc.).
Proven ability to design robust data models (e.g., dimensional modeling) for analytics.
Pragmatic problem-solver capable of analyzing and refactoring complex legacy systems (ability to read Java/Hibernate is a strong plus - but no new Java coding required).
Excellent communicator comfortable working with both technical and non-technical stakeholders.
Huge Plus!
Direct experience with Agency Management Systems (Applied EPIC, Nowcerts, EZLynx, etc.)
Familiarity with carrier data formats (ACORD XML, IVANS AL3)
Experience with BI tools (Tableau, Looker, Power BI)