Remote • Posted today
Key Responsibilities
• Design, develop, and maintain scalable ETL pipelines and data processing applications
• Build and optimize data workflows using PySpark, Java, and Hadoop ecosystem tools
• Analyze business and technical requirements to produce detailed implementation designs
• Perform unit testing, integration testing, and debugging of applications
• Troubleshoot and resolve performance issues related to high-volume data processing
• Develop and maintain SQL queries, stored procedures, and database objects
• W
Contract • Depends on Experience