Overview
Remote
Depends on Experience
Accepts corp-to-corp applications
Contract - Independent
Contract - W2
Contract - 12 Months
Skills
Senior Hadoop Developer
HDFS
MapReduce
Hive
Pig
Spark
Job Details
Position: Senior Hadoop Developer (10+ Years' Experience)
Location: Atlanta, GA (Remote)
Duration: 1+ Year (Extendable)
Job Summary:
We are seeking an experienced Senior Hadoop Developer with over 10 years of hands-on experience in big data ecosystems. The ideal candidate will have a strong background in designing, developing, and implementing large-scale data processing solutions using Hadoop technologies. You will play a critical role in architecting and optimizing data pipelines to support advanced analytics and business intelligence needs.
Key Responsibilities:
- Design and implement scalable, high-performance big data solutions using the Hadoop ecosystem.
- Develop data pipelines using tools like Apache Hive, Pig, Spark, Sqoop, Flume, Oozie, and Kafka (see the sketch following this list).
- Optimize and troubleshoot the performance of Hadoop jobs and queries.
- Build ETL frameworks and workflows to process large datasets efficiently.
- Work closely with data architects, analysts, and business teams to understand data requirements.
- Ensure data quality, consistency, and security across all stages of processing.
- Perform code reviews, mentor junior developers, and enforce best practices.
- Participate in architectural discussions and contribute to long-term strategic planning.
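For illustration only: a minimal sketch of the kind of streaming ingestion pipeline described above, using Spark Structured Streaming (Scala) with a Kafka source. The broker address, topic name, and filesystem paths are placeholder assumptions, not details of this role.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, to_date}

object ClickstreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ClickstreamIngest")
      .getOrCreate()

    // Read raw events from Kafka; key and value arrive as binary columns.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092") // assumed broker
      .option("subscribe", "clickstream-events")          // assumed topic
      .load()

    // Decode the payload and derive a date column for partitioning.
    val events = raw
      .select(col("value").cast("string").as("payload"),
              col("timestamp").as("event_time"))
      .withColumn("ingest_date", to_date(col("event_time")))

    // Land the stream as date-partitioned Parquet for downstream Hive/Spark SQL.
    events.writeStream
      .format("parquet")
      .option("path", "/data/landing/clickstream")             // assumed path
      .option("checkpointLocation", "/data/checkpoints/clickstream")
      .partitionBy("ingest_date")
      .start()
      .awaitTermination()
  }
}
```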
Required Skills & Experience:
- 10+ years of IT experience, including at least 6 years in Hadoop development.
- Expertise in HDFS, MapReduce, Hive, Pig, Spark (Core & SQL), YARN, and HBase.
- Strong programming skills in Java, Scala, and Python.
- Experience with Apache Kafka for real-time data processing.
- Proficiency in data ingestion tools such as Sqoop and Flume.
- Experience with workflow schedulers like Oozie, Airflow, or Control-M.
- Familiarity with cloud platforms (AWS EMR, Azure HDInsight, or Google Dataproc).
- Deep understanding of data modeling, schema design, and data partitioning strategies (illustrated in the sketch after this list).
- Proficient in writing optimized SQL queries for data extraction and transformation.
- Knowledge of CI/CD pipelines and version control using Git.
- Experience with data security, governance, and compliance frameworks.
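For illustration only: a minimal Spark SQL sketch (Scala) of the partitioning and query-optimization skills listed above. The table, columns, and date range are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

object PartitionPruningDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("PartitionPruningDemo")
      .enableHiveSupport() // assumes a Hive metastore is configured
      .getOrCreate()

    // Partitioning a high-volume fact table by date keeps scan cost
    // proportional to the window queried, not to total table size.
    spark.sql("""
      CREATE TABLE IF NOT EXISTS sales_fact (
        order_id    BIGINT,
        customer_id BIGINT,
        amount      DECIMAL(12,2)
      )
      PARTITIONED BY (sale_date DATE)
      STORED AS PARQUET
    """)

    // Filtering on the partition column lets the engine prune partitions,
    // so only the seven daily directories in range are read.
    val weekly = spark.sql("""
      SELECT customer_id, SUM(amount) AS total
      FROM sales_fact
      WHERE sale_date BETWEEN DATE '2024-01-01' AND DATE '2024-01-07'
      GROUP BY customer_id
    """)
    weekly.explain() // physical plan lists PartitionFilters when pruning applies
    weekly.show()
  }
}
```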
Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience with NoSQL databases (MongoDB, Cassandra).
- Exposure to containerization (Docker) and orchestration (Kubernetes).
- Familiarity with ML model integration in data pipelines.
Soft Skills:
- Excellent analytical, problem-solving, and communication skills.
- Proven ability to lead technical projects and mentor teams.
- Strong attention to detail and a passion for clean, maintainable code.
- Ability to work in an Agile environment.