The candidate will lead a team of architects responsible for managing the Data Architecture processes (modeling, provisioning, instantiation, and persistence), which include data lake and data warehouse platforms and interfaces (e.g., real-time, batch, on-demand, streaming). The candidate will have results-driven leadership experience leveraging the latest Big Data technologies and products for large advanced analytics communities, including hands-on evaluations and in-depth research to ensure a solid investment roadmap. The candidate must have a solid understanding of Hadoop and Big Data open-source solutions such as Spark, Kafka, Hive, Pig, HBase, and Elasticsearch. Experience with system management tools and with system usage and optimization tools is a plus. The candidate will also have experience building relationships and influencing cross-functional teams across a large organization.
- Provide direct oversight, coaching, and leadership for a skilled architecture team that has process oversight, KPI development, and architecture development & guidance responsibilities
- Develop an effective, coherent, reliable, and phased enterprise data architecture approach to help the business grow and change
- Develop a roadmap for the enterprise data platforms for advanced analytics and data science
- Map business opportunities to appropriate data architecture patterns as business strategies and technology mature
- Develop and maintain processes to acquire, analyze, store, cleanse, and transform large datasets using tools like Spark, Kafka, Sqoop, Hive, NiFi, HBase, and MiNiFi
- Provide recommendations, technical direction and leadership for the selection and incorporation of new technologies into the Hadoop ecosystem
- Contribute to the development, review, and maintenance of requirements documents, technical design documents, and functional specifications
- Help design innovative, customer-centric solutions based on deep knowledge of large-scale, data-driven technology and the financial services industry
- Help develop and maintain enterprise data standards, best practices, security policies and governance processes for the Hadoop ecosystem
- Perform other duties and/or special projects as assigned
- Bachelor's degree with a minimum of 9 years of technology experience, or, in lieu of a Bachelor's degree, a minimum of 12 years of experience in technology
- 7 years’ experience leading, coaching, and developing talent in a technical function/organization
- 4 years’ experience as an Architect with hands-on working experience on Hadoop, Spark, Kafka, MapReduce, HDFS, Hive, Pig, Sqoop, and Oozie
- Experience in AI, BI, and machine learning projects
- Experience working on AWS, Azure or other cloud providers for big data
- Credit card/payment experience
- Strong background in Financial Services
- Extensive experience working with data warehouses and big data platforms
- Experience with real-time data ingestion
- Experience sourcing and processing structured, semi-structured, and unstructured data
- Experience working with NoSQL data stores such as HBase, Cassandra, and HAWQ
- Experience in data cleansing/transformation and performance tuning
- Experience leveraging Apache Atlas for data governance
- Experience in Storm, Kafka, and Flume would be a plus
- Experience in Java and Spring would be a plus
- Experience working on Ab Initio would be a plus
- Hortonworks, Cloudera, or MapR certification would be a plus
- Basic knowledge of Big Data administration (e.g., Ambari)
- Demonstrated experience building strong relationships with senior leaders
- Strong leadership and influencing skills
- Outstanding written and verbal skills and the ability to influence and motivate teams