Role: Hadoop Data Engineer
Location: Scottsdale, AZ (100%)
Hire Type: Contract
Key Responsibilities
Design, develop, and maintain scalable data processing applications using Java (8 or above) and Spring Boot.
Build and optimize big data pipelines using Spark and Scala for large-scale data processing.
Develop and consume RESTful web services for data integration and platform interoperability.
Implement and manage batch and streaming data pipelines using Spark Streaming and Kafka.
Write optimized Hive queries and SQL, focusing on performance and scalability.
Work on distributed data platforms leveraging Hadoop ecosystem components.
Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
Ensure code quality through reviews, version control, and CI/CD best practices.
Troubleshoot and resolve performance, scalability, and data reliability issues.
Required Skills & Qualifications
8–12 years of hands-on experience in Java (version 8 or above).
Strong expertise in Spring Boot framework.
Strong proficiency in Apache Spark and Scala.
Strong experience with Hive, SQL optimization, Hadoop, and Kafka.
Solid understanding of RESTful web services.
Experience with version control systems such as Git/GitLab.
Hands-on experience with build tools like Maven and/or Gradle.
Strong understanding of distributed systems and big data architecture.
Experience with streaming frameworks such as Spark Streaming and Kafka Streams.
Experience working in large-scale enterprise data platforms.
Exposure to performance tuning and capacity planning for big data systems.
Knowledge of DevOps or CI/CD pipelines is a plus.