DESIRED SKILLS AND EXPERIENCE:
Minimum 6 years of relevant experience
- Help the product owner create a prioritized backlog
- Design and develop the ETL migration strategy, explore automated refactoring options, and design the ETL test strategy (RapidIT)
- Perform impact analysis of retargeting Talend Big Data pipeline components from on-prem Hadoop (MapR) to Google Cloud Platform (Dataproc)
- Architect and design refactoring methods for Talend Big Data pipelines
- Design, script, automate, and implement the data pipeline target-refactoring process for bulk migration and testing of ETLs
- Perform Talend deployment, DevOps, and repository management
- Define the data architecture and migration path for structured and unstructured data from on-prem big data platforms to the cloud
- Architect the target big data architecture, including data zones, file formats, and data stores, for cloud big data platforms with minimal impact to the existing structure
- Design partitions, clustered indexes, storage tiers, etc.
- Analyze the current-state environment: filesystem utilization, compute utilization, and performance
- Analyze data ingestion processes, file formats, DQ/transformation/aggregation processes, constraints, etc.
- Analyze ad hoc SQLs, scheduled workloads, and workload patterns (structured & unstructured) from a BigQuery migration perspective
- Well-developed analytical & problem-solving skills
- Strong oral and written communication skills
- Excellent leadership skills with the ability to lead, guide, and groom the team
- Excellent team player, able to work with virtual teams
- Ability to learn quickly in a dynamic start-up environment
- Able to communicate directly with clients and report to the client/onsite team
- Flexibility to work different shifts and stretch when needed
- Flexibility to travel and relocate within India and abroad