Skills
- Hadoop
- HDFS
- Spark
- Hive
- Scala/PySpark
Job Description
Years: 10+ years in Hadoop big data development
Must-Have: Hadoop, HDFS, Spark, Hive, Scala/PySpark
- Lead the design and development of Hadoop-based applications, tools, and frameworks.
- Design and implement complex data processing pipelines using Hadoop ecosystem tools such as MapReduce, Spark, and Hive.
- Experience with SQL and NoSQL databases.
- Manage CI/CD automation; knowledge of Bitbucket/Jenkins integration.
- Understanding of Docker, Kubernetes, and OpenShift is a plus.
- Understanding of how container runtimes work is a big plus.
- Strong interpersonal communication skills to interface with customers, peers, and management.