- Big Data
Big Data, Java, Hadoop, Kafka, Spark, GitHub
Detailed Job Description: Big Data Developer (Java & Hadoop)
- Designs, develops, and implements Big Data streaming applications in Scala/Java to support business requirements.
- Follows approved life-cycle methodologies using standard software frameworks; performs coding, testing, and operational support.
- Resolves technical issues through debugging, research, and investigation. Relies on experience and judgment to plan and accomplish goals.
- Performs a variety of tasks. A degree of creativity and latitude is required.
- Codes software applications to adhere to designs supporting internal business requirements or external customers.
- Standardizes the quality assurance procedure for software. Oversees testing and develops fixes.
- Contributes to the design and development of high-quality software for large-scale Java/Scala distributed systems using Databricks and the AWS Cloud.
- Ingests and processes streaming data sets using appropriate technologies, including but not limited to AWS Cloud services (Kinesis, S3, Lambda), Spark, and Kafka.
Required skills:
- 5-8 years of programming experience in Java/Scala, preferably in the Big Data space
- Good knowledge of standard concepts, practices, and procedures within the field
- Experience with Databricks, Kafka, Spark, and AWS services such as S3, Kinesis, and Lambda
- Good understanding of Big Data concepts and experience with GitHub and CI/CD tools
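To give a flavor of the stream-processing work described above, here is a minimal, self-contained Java sketch of the aggregation step such pipelines perform on each micro-batch of records. In the actual role this logic would run in Spark Structured Streaming against Kafka topics or Kinesis streams, not over an in-memory list; the `userId,action` record format and the class name are illustrative assumptions, not part of the job description.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ClickStreamSketch {

    // Count occurrences of each action in one micro-batch of raw records.
    // Record format "userId,action" is a hypothetical example; a real
    // pipeline would deserialize Kafka/Kinesis payloads (e.g. Avro/JSON).
    static Map<String, Long> countActions(List<String> records) {
        return records.stream()
                .map(r -> r.split(",")[1].trim())          // extract the action field
                .collect(Collectors.groupingBy(
                        action -> action,
                        Collectors.counting()));            // action -> occurrence count
    }

    public static void main(String[] args) {
        List<String> batch = List.of("u1,click", "u2,view", "u1,click");
        // Prints the per-action counts for this batch.
        System.out.println(countActions(batch));
    }
}
```

In a Spark job the same shape appears as a `groupBy`/`count` over a streaming DataFrame; this stdlib-only version just isolates the aggregation pattern so it runs without any cluster dependencies.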
Good to have skills:
- Strong coding skills and a consistent record of meeting delivery schedules
- DevOps mindset: roughly 90% development and 10% operational support of the products we build
- Participation in a 24x7 on-call rotation (reachable by phone) to support data issues