Data Engineer - Palantir & PySpark with Reinsurance Domain Experience (Remote)

Overview

Remote
$50 - $60 per hour
Accepts corp to corp applications
Contract - W2
Contract - Independent
Contract - 12 Month(s)

Skills

Palantir
PySpark

Job Details

Data Engineer - Palantir & PySpark

Experience: 6-10 Years

Location: Remote

Client Industry: Reinsurance

No. of Positions: 5

Please share profiles with full educational details and a LinkedIn ID to receive an immediate response.

Job Summary:

We are seeking a highly skilled Data Engineer with hands-on experience in Palantir (Foundry preferred), PySpark, and exposure to reinsurance or insurance data environments. The ideal candidate will play a key role in building scalable data pipelines, optimizing ETL workflows, and enabling advanced analytics and reporting capabilities. This role requires a strong technical foundation in data engineering combined with an understanding of the reinsurance business domain.

Key Responsibilities:

  • Design, develop, and maintain data pipelines and ETL workflows using PySpark, SQL, and Palantir Foundry (a simplified PySpark sketch follows this list).
  • Collaborate with data architects, business analysts, and actuarial teams to understand reinsurance data models and transform complex datasets into usable formats.
  • Build and optimize data ingestion, transformation, and validation processes to support analytical and reporting use cases.
  • Work within the Palantir Foundry platform to design robust workflows, manage datasets, and ensure efficient data lineage and governance.
  • Ensure data security, compliance, and governance in line with industry and client standards.
  • Identify opportunities for automation and process improvement across data systems and integrations.
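
As a rough illustration of the pipeline work described above, the following is a minimal PySpark ETL sketch: it ingests raw policy and claims extracts, applies simple validation rules, and writes a curated dataset. All paths, column names, and business rules here are hypothetical placeholders rather than the client's actual schema.

    # Minimal PySpark ETL sketch. All paths, column names, and validation
    # rules are hypothetical placeholders for illustration only.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("reinsurance-etl-sketch").getOrCreate()

    # Ingest: raw policy and claims extracts (hypothetical S3 locations).
    policies = spark.read.parquet("s3://example-bucket/raw/policies/")
    claims = spark.read.parquet("s3://example-bucket/raw/claims/")

    # Validate: drop rows that fail basic integrity checks (illustrative rules).
    valid_claims = claims.filter(
        (F.col("claim_amount") > 0) & F.col("policy_id").isNotNull()
    )

    # Transform: join claims to policies and aggregate incurred losses per policy.
    curated = (
        valid_claims.join(policies, on="policy_id", how="inner")
        .groupBy("policy_id", "treaty_id")
        .agg(
            F.sum("claim_amount").alias("total_incurred"),
            F.count("*").alias("claim_count"),
        )
    )

    # Load: write the curated dataset, partitioned for downstream reporting.
    curated.write.mode("overwrite").partitionBy("treaty_id").parquet(
        "s3://example-bucket/curated/policy_losses/"
    )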

Required Skills & Qualifications:

  • 6-10 years of overall experience in data engineering roles.
  • Strong hands-on expertise in PySpark (DataFrames, RDDs, performance optimization).
  • Proven experience working with Palantir Foundry or similar data integration platforms (see the Foundry transform sketch after this list).
  • Good understanding of reinsurance, including exposure, claims, and policy data structures.
  • Proficiency in SQL, Python, and working with large datasets in distributed environments.
  • Experience with cloud platforms (AWS, Azure, or Google Cloud Platform) and related data services (e.g., S3, Snowflake, Databricks).
  • Knowledge of data modeling, metadata management, and data governance frameworks.
  • Familiarity with CI/CD pipelines, version control (Git), and Agile delivery methodologies.
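
For context on the Foundry requirement above: pipeline logic in Foundry is commonly written against Palantir's Python transforms API, which wraps ordinary PySpark DataFrame code. The sketch below is an assumed, minimal example of that pattern; the dataset paths and column names are invented for illustration.

    # Minimal Palantir Foundry transform sketch using the Python transforms API.
    # Dataset paths and column names are invented for illustration only.
    from pyspark.sql import functions as F
    from transforms.api import transform_df, Input, Output

    @transform_df(
        Output("/Example/datasets/curated/clean_claims"),
        raw_claims=Input("/Example/datasets/raw/claims"),
    )
    def clean_claims(raw_claims):
        # Standardize currency codes and drop records with no reported loss.
        return (
            raw_claims
            .withColumn("currency", F.upper(F.col("currency")))
            .filter(F.col("claim_amount") > 0)
        )

Because Foundry runs such a function as a managed build, lineage between the input and output datasets is tracked by the platform, which is what the governance and data-lineage responsibilities above refer to.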

Preferred Skills:

  • Experience with data warehousing and reporting modernization projects in the reinsurance domain.
  • Exposure to Palantir ontology design and data operationalization.
  • Working knowledge of APIs, REST services, and event-driven architecture.
  • Understanding of actuarial data flows, submission processes, and underwriting analytics is a plus.

Best Regards,

Rakesh Sharma


