Remote • Today
Key Responsibilities
- Design and develop ETL/ELT pipelines using PySpark and Spark SQL (a minimal sketch follows this section)
- Build data ingestion pipelines (batch + streaming)
- Implement Lakehouse architecture using Delta Lake
- Optimize Spark jobs for performance and cost
- Manage data workflows and job scheduling
- Ensure data quality, governance, and security
- Integrate Databricks with cloud platforms (AWS, Azure, Google Cloud Platform)
- Collaborate with data scientists and analysts

Required Skills
- Python, SQL
- Spark / PySpark
- Delta Lake
- Data
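For illustration only, a minimal sketch of the kind of batch PySpark-to-Delta-Lake ingestion work described above. The paths, table, and column names (s3://example-bucket/..., order_id, order_ts) are placeholders rather than details from this posting, and the sketch assumes the delta-spark package is available on the cluster.

# Illustrative sketch: batch ingestion of raw CSV data into a Delta Lake table.
# All paths and column names are placeholders, not details from the posting.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders_ingestion_example")
    # Standard Delta Lake session configuration (assumes delta-spark is installed).
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Read a raw batch drop, apply light cleansing, and tag each row with its ingest date.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")
cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("ingest_date", F.current_date())
       .dropDuplicates(["order_id"])
)

# Append to a partitioned Delta table in the lakehouse zone.
(
    cleaned.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save("s3://example-bucket/lakehouse/orders/")
)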
Contract • Depends on Experience




