Hybrid in Racine, Wisconsin • 6d ago
• Design and implement end-to-end data architecture (Data Lakehouse + Data Warehouse)
• Develop and optimize ETL/ELT pipelines using Databricks (PySpark, Spark SQL)
• Integrate and migrate data between Teradata and Databricks platforms
• Build and maintain dimensional data models (star/snowflake schema, SCDs)
• Ensure data quality, governance, lineage, and security compliance
• Optimize performance of:
  • Spark jobs (partitioning, caching, cluster tuning)
  • Teradata queries (indexes, partitioning, statistics)
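For illustration only (not part of the posting): a minimal PySpark sketch of the kind of ETL and partitioning work described above. All paths, table names, and columns are hypothetical, and the write assumes a Databricks-style metastore.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw records landed in the lakehouse (hypothetical path)
raw = spark.read.parquet("/lake/raw/orders")

# Transform: deduplicate and derive a date column to partition on
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_amount") > 0)
)

# Load: partitioning by date lets time-bounded queries prune partitions,
# one of the Spark tuning levers the role calls out
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .saveAsTable("curated.orders"))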
Contract, Third Party
Depends on Experience


