Data Engineer
Contract
Plantation, FL (4 days onsite, 1 day remote)
Strong Python and SQL
Experience with Snowflake, Databricks, or similar
Role Summary
The Intermediate Data Engineer will contribute to the development and maintenance of a
scalable data platform and analytical environment. The role involves building reliable data
pipelines, modeling and transforming data for analytics, and ensuring data quality across
systems.
You will work closely with senior data and analytics engineers and business stakeholders to
contribute to our Databricks-based Medallion architecture, while continuing to develop your
technical and architectural expertise.
This role reports to the Director of BI & Analytics and supports initiatives led by the VP of Data &
BI.
Key Responsibilities
Data Engineering & Pipelines
- Design, build, and maintain scalable ETL/ELT pipelines using Databricks and cloud-based data services.
- Develop reusable ingestion and transformation frameworks for structured and semi-structured data.
- Maintain and enhance datasets across the Medallion architecture (Bronze, Silver, Gold).
Data Modeling & Transformation
- Develop and maintain data models, transformations, and analytics-ready datasets using dbt.
- Write and optimize complex SQL queries across large and diverse datasets.
- Support schema design and performance optimization for analytical workloads.
Data Quality & Reliability
- Implement data quality checks, validations, and monitoring to ensure accuracy, completeness, and consistency.
- Troubleshoot data issues and perform root cause analysis in collaboration with stakeholders.
Collaboration & Delivery
- Partner with analytics, product, and business teams to gather requirements and deliver reliable data solutions.
- Participate in code reviews, technical discussions, and continuous improvement initiatives.
- Contribute to CI/CD practices using Git and infrastructure-as-code tools.
Documentation & Operations
- Document data models, pipelines, and technical processes.
- Support platform operations such as job monitoring, access control, and performance tuning under guidance from senior engineers.
Required Qualifications
- 1-3 years of professional experience in a data engineering or analytics engineering role.
- Strong SQL skills, including writing complex and optimized queries.
- Hands-on experience with Python or PySpark for data processing.
- Experience using dbt for data transformation and modeling.
- Familiarity with a modern cloud data platform (Databricks, Snowflake, BigQuery, or Redshift).
- Experience with Git-based version control (GitHub, GitLab, or similar).
- Solid understanding of ETL/ELT patterns, data modeling concepts, and data quality principles.
- Strong communication skills and ability to explain technical concepts to non-technical partners.
Preferred Qualifications
- Experience working with Databricks, including basic platform administration.
- Exposure to Terraform or other infrastructure-as-code tools.
- Familiarity with CI/CD pipelines and DevOps practices.
- Experience integrating data from APIs or streaming sources.