Role: Sr. Palantir Developer
Location: Remote (anywhere in the US; must cover CST or EST hours, Dallas preferred)
Contract Type: Long-Term Contract
Primary Skills: Palantir Foundry, PySpark, Python, TypeScript
Secondary Skills: SQL, ETL, Snowflake
Prior AT&T experience is an added advantage.
Job Description:
Data Engineering & Pipeline Development
- Design and maintain end-to-end data pipelines using Foundry Pipeline Builder and Code Repositories
- Ingest, transform, and curate datasets (batch + incremental processing)
- Ensure data quality, reliability, and lineage tracking
- Optimize pipelines for performance and scalability
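The batch + incremental processing called out above is commonly implemented with a watermark: each run processes only records newer than the last successful run. A minimal stdlib sketch of that pattern (the record layout and field names are illustrative assumptions, not Foundry's transforms API):

```python
from datetime import datetime, timezone

# Hypothetical records; in Foundry these would come from an input dataset.
records = [
    {"id": 1, "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated_at": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    {"id": 3, "updated_at": datetime(2024, 1, 9, tzinfo=timezone.utc)},
]

def incremental_filter(rows, watermark):
    """Keep only rows newer than the last processed watermark."""
    return [r for r in rows if r["updated_at"] > watermark]

# The last successful run processed everything through Jan 4.
watermark = datetime(2024, 1, 4, tzinfo=timezone.utc)
new_rows = incremental_filter(records, watermark)
print([r["id"] for r in new_rows])  # → [2, 3]
```

The same idea scales to PySpark by replacing the list comprehension with a DataFrame filter on the watermark column.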
Foundry Workshop Application Development
- Build and maintain user-facing applications using Workshop modules
- Translate business requirements into interactive dashboards and operational tools
- Integrate ontology-driven logic into applications
- Deliver actionable insights to business users through curated views
Code Repository Development (TypeScript & Python)
- Develop reusable logic using:
  - TypeScript (e.g., functions, business logic, actions)
  - Python (data transformations, pipeline logic, automation)
- Maintain version-controlled solutions in Foundry Code Repos
- Implement testing, debugging, and CI/CD best practices
Ontology & Data Modeling
- Design and maintain ontology object models and relationships
- Map datasets to ontology and define business semantics
- Build actions, rules, and automations tied to ontology objects
Automation & Monitoring
- Develop automations using Object Monitoring / Sentinel
- Set up alerts and conditional workflows for data issues
- Troubleshoot pipeline failures and ensure SLA adherence
Collaboration & Stakeholder Engagement
- Work with business teams to understand data needs
- Partner with data scientists, analysts, and product teams
- Document architecture, pipelines, and workflows