Seeking a Data Engineer to support a modern cloud data platform built on a lakehouse architecture. This role combines hands-on engineering with business collaboration: translating requirements into scalable data solutions and owning delivery from design through production support.
You'll work cross-functionally to define data needs, design pipelines, and deliver clean, reliable datasets for analytics, reporting, and machine learning. Responsibilities include ingesting and transforming structured and unstructured data, implementing data quality controls, and supporting governance standards. You'll also help optimize data models, ensure operational readiness, and promote engineering best practices.
Key work includes building ETL/ELT pipelines, integrating systems using Databricks, Spark, SQL, and AWS, and enabling real-time and batch processing. You'll partner closely with BI and data science teams to deliver scalable solutions aligned with performance, cost, and maintainability goals.
Requirements:
- 3+ years in data engineering
- Strong Databricks experience (Jobs, Spark, Delta Lake, DLT, Unity Catalog)
- Solid SQL and Spark skills
- Experience with Kafka and AWS (S3, IAM, etc.)
- Background in building production data pipelines and working in Agile environments
Nice to have: Snowflake, NoSQL, or enterprise messaging tools.
Contract role (6+ months, renewable). Remote (EST hours) or hybrid in Atlanta. U.S. work authorization required.