Senior Java/Scala/Python backend developer (data engineer) *** Direct end client, 100% remote work from home ***

Amazon Redshift, Amazon Web Services, Apache Hadoop, Apache Hive, Apache Spark, Data engineering, Database, Java, Oracle, Scala, Software development, Vertica, Java backend developer, Scala developer, Java data engineer, Scala data engineer, Amazon EC2, Amazon EMR, Amazon S3, Architecture, Big data, Cloud, Computer science, Data marts, Data processing, Data warehouse, ETL, Agile, Business management, CAN, EMR, Engineering, Estimating, JSON, JVM, JavaScript, Linux, MPP, Project planning, QA, RESTful, SDLC, Scripting language, Scrum, Python, Shell, Software, Software development methodology, Writing, Spring, XML, Streaming, Troubleshooting
Contract W2, Contract Independent, Contract Corp-To-Corp, 12 Months
Depends on Experience
Travel not required

Job Description

Immediate need for a senior Java/Scala developer with experience in data engineering.

This is a 100% remote, work-from-home role.

1. Write Java/Scala code to run on Spark for data engineering
2. Load data from Hadoop/Hive to Amazon S3 (see the sketch after this list)
3. Write data transformations using Java/Scala that can execute on Spark
4. Good understanding of database technologies
5. Prior experience with data ingestion into Redshift from S3
6. Strong programming skills in either Java or Scala
7. Strong data structures experience
8. Experience with other databases such as Vertica and Oracle, along with Java or Scala
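To make the first three items concrete, here is a minimal sketch of the kind of Spark job involved, assuming a Hive metastore configured on the cluster and S3 access from the EMR/Spark runtime; the database, table, and bucket names are hypothetical.

```scala
// Minimal sketch: read a Hive table, transform it, and write Parquet to S3.
// Assumes Hive support and S3 credentials are configured on the cluster;
// table and bucket names below are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object HiveToS3 {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-to-s3")
      .enableHiveSupport()          // allows spark.table() on Hive tables
      .getOrCreate()

    // Read from a Hive table (hypothetical database/table)
    val orders = spark.table("analytics.orders")

    // Example transformation: filter rows and derive a partition column
    val cleaned = orders
      .filter(col("order_status") === "COMPLETED")
      .withColumn("order_date", to_date(col("order_ts")))

    // Write Parquet to S3, partitioned for downstream loads
    cleaned.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/orders/")   // hypothetical bucket

    spark.stop()
  }
}
```

Ingestion into Redshift from S3 objects like these is typically done with Redshift's COPY command, issued over JDBC or from an orchestration step, rather than by pushing rows through Spark.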

About You
• BS or MS in Computer Science or related field.
• 10+ years of core software development experience, including developing DB schemas, creating ETLs, and familiarity with MPP/Hadoop systems.
• Working knowledge of XML, JavaScript, JSON, YAML, and Linux
• Advanced experience with a scripting language (Python or shell) is a must-have
• Big Data: Building and maintaining highly scalable, robust & fault-tolerant complex data processing pipelines
• Strong knowledge of software development methodologies and practices
• Experience working in Agile development teams; working knowledge of Agile (Scrum) development methodologies
• Experience with Amazon web services: EC2, S3, and EMR (Elastic Map Reduce) or equivalent cloud computing approaches
• Strong expertise in writing analytical SQL, and in data marts, data warehousing, and analytics architecture
• Experience working with large data volumes
• Hands-on experience with the Hadoop stack of technologies, mainly Hive and Hive on Spark
• Experience creating and consuming JSON/REST web services and integrating with other systems.
• Skilled in developing software in Java (Spring and Spring Boot), Scala for Spark Streaming and Spark applications, or other JVM-based languages (see the streaming sketch after this list).
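As a rough illustration of the Spark Streaming side of the role, below is a minimal Structured Streaming sketch, assuming a Kafka source and the spark-sql-kafka connector on the classpath; the broker address, topic, and S3 paths are hypothetical.

```scala
// Minimal Structured Streaming sketch: consume JSON events from Kafka and
// land them on S3 as Parquet. Assumes the spark-sql-kafka-0-10 package is
// available; brokers, topic, and bucket paths are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object EventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-stream")
      .getOrCreate()

    // Read a stream of events from Kafka (hypothetical brokers/topic)
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "events")
      .load()

    // Kafka values arrive as bytes; parse the JSON payload into columns
    val schema = new StructType()
      .add("event_id", StringType)
      .add("event_type", StringType)
      .add("event_ts", TimestampType)

    val events = raw
      .select(from_json(col("value").cast("string"), schema).as("e"))
      .select("e.*")

    // Write micro-batches to S3 as Parquet, checkpointing for fault tolerance
    val query = events.writeStream
      .format("parquet")
      .option("path", "s3://example-bucket/streams/events/")              // hypothetical
      .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
      .start()

    query.awaitTermination()
  }
}
```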

About Role
• 70-85% hands-on development in all phases of the software life cycle.
• Designing/developing ETL jobs across multiple big data platforms and tools including S3, EMR, Hive, Vertica
• Designing end to end data pipeline given business and ops requirements (ingestion, processing and storage).
• Conduct design and code reviews
• Defect remediation
• Estimate and sequence individual activities as inputs to project plans
• Analyze and synthesize a variety of inputs to create software and services.
• Identify dependencies as inputs to project plans
• Collaborate effectively with peer engineers and architects to solve complex problems spanning their respective areas, delivering end-to-end quality in our technology and customer experience.
• Influence and communicate effectively with non-technical audiences, including senior product and business management.

 

Posted By

Santa Clara, CA, 95050

Dice Id : 10126850
Position Id : JAVA-SCALA
Originally Posted : 6 months ago