Hybrid in Boston, Massachusetts
Key Responsibilities
- Design and develop scalable data pipelines using Databricks (Delta Lake, Spark)
- Build and maintain ETL/ELT workflows for large-scale data processing
- Optimize data architecture for performance, reliability, and cost-efficiency
- Work with structured and unstructured data from multiple sources
- Collaborate with data scientists, analysts, and business stakeholders
- Implement data quality, validation, and governance frameworks
- Troubleshoot and resolve data pipeline and performance issues
Contract
Depends on Experience
