Remote • Yesterday
Key Responsibilities
- Develop and maintain data pipelines using Spark (PySpark/Scala/Java)
- Work with HBase and Hadoop ecosystem tools (Hive, Impala, Hue)
- Process large datasets in distributed environments
- Collaborate with backend teams on data integration
- Optimize performance of data processing jobs
- Support data ingestion, transformation, and storage solutions
- Participate in testing and performance tuning

Required Skills
- Strong experience with Apache Spark (must-have)
- Hands-on experience with HBase (must-have)
- Good p
Contract, Third Party
Depends on Experience