Experience: 10–15+ years
Role Overview
We are seeking a Palantir Expert to lead the design, implementation, and scaling of enterprise data and AI platforms using Palantir Foundry and AIP. This role requires a unique blend of data engineering, platform architecture, and business problem-solving, with the ability to translate complex data into operational applications that drive real-time decision-making.
Key Responsibilities:
1. Architecture & Solution Design
- Define end-to-end architecture using Palantir Foundry (data ingestion, ontology, pipelines, applications).
- Design scalable, secure, and high-performance data platforms.
- Lead ontology modeling aligned with business entities and workflows.
- Architect real-time and batch data processing solutions.
2. Platform Implementation (Foundry)
- Lead implementation of data pipelines (Code Repos, Pipeline Builder), ontology and object modeling, and Foundry applications (Workbench, Slate, operational apps).
- Ensure best practices in data lineage, versioning, governance, and access control.
3. AI / Advanced Analytics (AIP)
- Design and implement AI/ML and GenAI use cases using AIP.
- Enable decision automation and agentic workflows.
- Integrate LLMs and ML models into business processes.
4. Stakeholder & Client Engagement
- Work closely with business stakeholders to define use cases and translate business problems into Foundry solutions.
- Act as a trusted advisor to client leadership.
Required Qualifications:
- Experience: 10+ years in data engineering / data platform architecture, including 3+ years of hands-on experience with Palantir Foundry, Ontology, and AIP.
- Proven Track Record: Experience delivering enterprise-scale data platforms, ideally in Forward Deployed Engineer (FDE)-style roles.
- Technical Skills: Strong expertise in distributed data processing (Spark, PySpark), data modeling, APIs, and system integrations.
- Tools: Hands-on experience with Foundry Ontology, Pipeline Builder, and Foundry application development. Exposure to cloud platforms (AWS, Azure, or Google Cloud Platform), SQL, Python, and tools such as Databricks, Snowflake, or dbt is preferred.