• Must-have qualifications:
o 15+ years of IT experience, including 5-7 years of Big Data project experience.
o Strong technical knowledge of the Hadoop ecosystem, covering the full Hadoop umbrella (HDFS, Pig, MapReduce, Spark, HBase, Hive, HCatalog, Phoenix, Flume, Atlas, Ranger)
o Knowledge of Spark and Scala: architecture, design, troubleshooting, and performance tuning
o Knowledge of Kafka and/or other streaming technologies
o Experience with NoSQL databases such as HBase, MongoDB, or Cassandra
o Experience architecting data lakes and their ingestion and consumption frameworks
o Demonstrates technical thought leadership; ready to conduct POCs either personally or by engaging developers for pilot/evaluation-phase work
o Ability to clearly articulate technical complexity to a variety of audiences: PMs and other architects (Enterprise, Solution, or Data Architects). Ability to explain and justify architecture and technology choices.
o Strong knowledge of Hadoop best practices, frameworks, troubleshooting, and performance tuning. Broad knowledge of technical solutions, design patterns, and code for medium-to-complex applications deployed in Hadoop production.
o Experience with change management and DevOps tools (e.g., GitHub, Jenkins)
o Participation in design reviews, code reviews, unit testing, and integration testing.
o Experience with SDLC methodologies (Agile, Scrum, iterative development).
• Nice-to-have qualifications:
o Working experience with traditional data warehousing and Business Intelligence systems
o Business requirements management and systems change/configuration management. Familiarity with JIRA.
o Strongly preferred: healthcare experience
o Strongly preferred: PMP, PMI-ACP, CSM, or TOGAF certification