Job Details
We are seeking a highly experienced Big Data Engineer with a minimum of 8 years of industry experience to build enterprise-grade data processing solutions. The role demands deep expertise in the Hadoop ecosystem, Apache Spark, and cloud-based big data services.
Key Responsibilities:
Design, develop, and maintain large-scale distributed data processing pipelines.
Work with massive data sets using Spark, Hadoop, and related frameworks.
Implement efficient and secure ingestion frameworks using Kafka, NiFi, or Flume.
Collaborate with data architects, scientists, and application teams.
Optimize performance and scalability of big data platforms.
Ensure compliance with data governance and security policies.
Required Qualifications:
Bachelor's, Master's, or Ph.D. in Computer Science, IT, or a related field.
8+ years of relevant experience with a strong focus on Big Data technologies.
Expertise in Hadoop (HDFS, MapReduce, YARN, Hive, HBase, Pig, Oozie).
Strong Spark programming skills in Scala, Python, or Java.
Proficiency in data ingestion and streaming platforms (Kafka, NiFi, Flume).
Experience with Big Data offerings on AWS, Azure, or Google Cloud Platform (e.g., EMR, HDInsight, Dataproc).
Familiarity with data modeling, NoSQL, and relational databases.
Knowledge of Docker, Kubernetes, and CI/CD pipelines.
Strong communication, documentation, and troubleshooting abilities.