Job Details
We are seeking an experienced Hadoop Engineer with a strong background in data migration, ETL, and the Hadoop ecosystem to support mission-critical data initiatives in the banking sector. The ideal candidate will have hands-on expertise in Hive and Spark environments, migrating data from MS SQL Server to Hadoop/Hive, and building scalable, performant data models.
This role requires prior experience in the banking or financial services industry and a deep understanding of data governance, compliance, and security protocols within that context.
Eligibility: USC/EAD
Key Responsibilities:
Operate effectively in Hadoop/Hive environments, leveraging Apache Spark
Perform data migration from MS SQL Server to Hadoop/Hive platforms
Develop and optimize SQL-based ETL pipelines
Work with data modeling and large-scale structured/unstructured datasets
Collaborate with DevOps using Jenkins, GitHub, and Confluence
Partner with cross-functional teams to support data warehousing and analytics projects
Follow best practices in data security, privacy, and compliance standards relevant to banking
Must-Have Skills:
BANKING INDUSTRY EXPERIENCE (Required)
Strong experience in Hadoop, Hive, and Spark environments
Proven hands-on work migrating data from MS SQL Server to Hadoop/Hive
Proficient in SQL Server, ETL, and data modeling
Familiarity with Big Data ecosystems and architecture
Experience with Confluence, Jenkins, and GitHub
Preferred Qualifications:
Experience with performance tuning in Spark/Hive
Familiarity with data quality frameworks and automation
Knowledge of compliance standards (e.g., SOC 2, PCI DSS) in financial environments
Strong problem-solving skills and ability to work independently