Data Engineer (MDM, Python, Kafka, Financial) - NJ Hybrid - 12+ Yrs

Overview

Hybrid
Depends on Experience
Contract - Independent
Contract - W2
Contract - 12 Month(s)

Skills

Data Engineer
MDM
Spark
Python
Financial

Job Details

Role: Data Engineer (MDM)

Exp: 12+ Yrs

Location: Iselin, NJ (Hybrid)

Note: An in-person interview in Iselin is required.

Required Qualifications:
  • 12+ years of experience in data engineering, with a proven track record of MDM implementations, preferably in the financial services industry.
  • Extensive hands-on experience designing and deploying MDM solutions and comparing MDM platform options.
  • Experience with financial systems (capital markets, credit risk, and regulatory compliance applications).
  • Strong functional knowledge of reference data sources and domain-specific data standards.
  • Expertise in Python, PySpark, Kafka, microservices architecture (particularly GraphQL), Databricks, Snowflake, Azure Data Factory, SQL, and orchestration tools such as Airflow or Astronomer.
  • Familiarity with CI/CD practices, tools, and automation pipelines.
  • Ability to work collaboratively across teams to deliver complex data solutions.
Key Responsibilities:
  • Lead the design, development, and deployment of comprehensive MDM solutions across the organization, with an emphasis on financial data domains.
  • Demonstrate extensive experience with multiple MDM implementations, including platform selection, comparison, and optimization.
  • Architect and present end-to-end MDM architectures, ensuring scalability, data quality, and governance standards are met.
  • Evaluate various MDM platforms (e.g., Informatica, Reltio, Talend, IBM MDM) and provide objective recommendations aligned with business requirements.
  • Collaborate with business stakeholders to understand reference data sources and develop strategies for managing reference and master data effectively.
  • Implement data integration pipelines leveraging modern data engineering tools and practices.
  • Develop, automate, and maintain data workflows using Python, Airflow, or Astronomer.
  • Build and optimize data processing solutions using Kafka, Databricks, Snowflake, Azure Data Factory (ADF), and related technologies.
  • Design microservices, especially utilizing GraphQL, to enable flexible and scalable data services.
  • Ensure compliance with data governance, data privacy, and security standards.
  • Support CI/CD pipelines for continuous integration and deployment of data solutions.

Regards,

Prakash

732-790-5440

prakash.v@primesoftinc.com

Employers have access to artificial intelligence language tools (“AI”) that help generate and enhance job descriptions and AI may have been used to create this description. The position description has been reviewed for accuracy and Dice believes it to correctly reflect the job opportunity.