Irvine, California • Yesterday
Roles and Responsibilities:
- Design, develop, and implement data pipelines and ETL processes to efficiently ingest, transform, and load large volumes of data.
- Collaborate with cross-functional teams to understand data requirements and devise scalable solutions for data storage, processing, and retrieval.
- Tune and optimize data processes to ensure strong performance, reliability, and data integrity.
- Use PySpark, Spark, and Hadoop to build robust data solutions.
- Keep abreast of the l
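The ingest-transform-load shape the posting describes can be sketched at toy scale. This is a minimal plain-Python illustration, not the team's actual stack (the role uses PySpark/Spark/Hadoop); the record fields and sink are hypothetical.

```python
# Minimal extract-transform-load (ETL) sketch in plain Python.
# The real role uses PySpark/Spark/Hadoop; this only mirrors the
# three-stage pipeline shape. Field names are illustrative.

def extract(raw_rows):
    """Ingest: parse raw CSV-style lines into records."""
    for line in raw_rows:
        user_id, amount = line.strip().split(",")
        yield {"user_id": user_id, "amount": float(amount)}

def transform(records):
    """Transform: drop invalid rows and normalize amounts to cents."""
    for rec in records:
        if rec["amount"] >= 0:  # simple data-integrity check
            yield {**rec, "amount_cents": int(round(rec["amount"] * 100))}

def load(records, sink):
    """Load: append cleaned records to a destination table (here, a list)."""
    for rec in records:
        sink.append(rec)
    return sink

raw = ["alice,19.99", "bob,-5.00", "carol,3.50"]
warehouse = load(transform(extract(raw)), [])
```

Because each stage is a generator, records stream through one at a time rather than materializing intermediate lists, which is the same laziness Spark's transformations provide at cluster scale.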
Contract
Depends on Experience