Job Details
Data Engineer – Palantir & PySpark
Experience: 6-10 Years
Location: Remote
Client Industry: Reinsurance
No. of Positions: 5
Please share profiles with full educational details and LinkedIn ID to get an immediate response.
Job Summary:
We are seeking a highly skilled Data Engineer with hands-on experience in Palantir (Foundry preferred), PySpark, and exposure to reinsurance or insurance data environments. The ideal candidate will play a key role in building scalable data pipelines, optimizing ETL workflows, and enabling advanced analytics and reporting capabilities. This role requires a strong technical foundation in data engineering combined with an understanding of the reinsurance business domain.
Key Responsibilities:
- Design, develop, and maintain data pipelines and ETL workflows using PySpark, SQL, and Palantir Foundry.
- Collaborate with data architects, business analysts, and actuarial teams to understand reinsurance data models and transform complex datasets into usable formats.
- Build and optimize data ingestion, transformation, and validation processes to support analytical and reporting use cases.
- Work within the Palantir Foundry platform to design robust workflows, manage datasets, and ensure efficient data lineage and governance.
- Ensure data security, compliance, and governance in line with industry and client standards.
- Identify opportunities for automation and process improvement across data systems and integrations.
Required Skills & Qualifications:
- 6-10 years of overall experience in data engineering roles.
- Strong hands-on expertise in PySpark (dataframes, RDDs, performance optimization).
- Proven experience working with Palantir Foundry or similar data integration platforms.
- Good understanding of reinsurance concepts, including exposure, claims, and policy data structures.
- Proficiency in SQL, Python, and working with large datasets in distributed environments.
- Experience with cloud platforms (AWS, Azure, or Google Cloud Platform) and related data services (e.g., S3, Snowflake, Databricks).
- Knowledge of data modeling, metadata management, and data governance frameworks.
- Familiarity with CI/CD pipelines, version control (Git), and Agile delivery methodologies.
Preferred Skills:
- Experience with data warehousing and reporting modernization projects in the reinsurance domain.
- Exposure to Palantir ontology design and data operationalization.
- Working knowledge of APIs, REST services, and event-driven architecture.
- Understanding of actuarial data flows, submission processes, and underwriting analytics is a plus.
Best Regards,
Rakesh Sharma