Overview
Skills
Job Details
Key Responsibilities:
Hadoop Developer for the Graph Platform (Neo4j / TigerGraph)
Primary Skill: Hadoop
Secondary Skill: Hive
Tertiary Skill: Spark
Required Qualifications
6+ years of hands-on experience in software development or data science
Support the company's commitment to protecting the integrity and confidentiality of systems and data.
Experience building E2E analytics platforms using streaming, graph, and big data technologies
Experience with graph-based data workflows and graph analytics
Extensive hands-on experience designing, developing, and maintaining software frameworks using Kafka, Spark, Neo4j, and TigerGraph
Hands-on experience with Java, Scala, or Python
Design, build, and deploy streaming and batch data pipelines capable of processing and storing large datasets quickly and reliably using Kafka, Spark, and YARN
Experience managing and leading small development teams in an Agile environment
Drive and maintain a culture of quality, innovation and experimentation
Collaborate with product teams, data analysts and data scientists to design and build data-forward solutions
Provide prescriptive point-solution architectures and guide the descriptive architectures within assigned modules
Own technical decisions for the solution and support application developers in the creation of architectural decisions and artifacts
Manage day-to-day technology architecture decisions for a limited number of assigned modules, including deciding on the best path to meet requirements and schedules.
Own the quality of modules being delivered and ensure proper testing and validation processes are followed.
Ensure the point-solution architectures are in line with the enterprise strategies and principles
Review technical designs to ensure they are consistent with defined architecture principles, standards, and best practices.
Accountable for the availability, stability, scalability, security, and recoverability enabled by the design
Desired Qualifications
Agile