Connecticut State
Job Description:
>> Design and develop end-to-end data pipelines using Palantir Foundry (Code Repositories, Pipeline Builder, Transforms, Workshop, Contour, etc., as applicable)
>> Build and maintain curated datasets, data lineage, and data quality controls across ingestion, transformation, and serving layers
>> Develop data transformations using Python, PySpark, and SQL, and optimize pipeline performance for large-scale datasets
>> Implement the Foundry Ontology (objects, actions, relationships) and enable operational workflows through ontology-driven applications
>> Develop and support Foundry applications and dashboards for business use cases (where required) using Foundry tools (e.g., Workshop, Contour)
>> Collaborate with stakeholders to understand requirements, translate them into data products, and deliver iteratively using Agile practices
>> Ensure security, governance, and compliance using Foundry access controls, data policies, and auditing best practices
>> Integrate Foundry with enterprise systems via APIs and connectors, and support data interoperability patterns
>> Establish CI/CD practices for Foundry code repositories, perform peer code reviews, and enforce coding standards
>> Troubleshoot production issues, perform root-cause analysis, and drive continuous improvements in reliability and performance
Requirements:
>> Strong hands-on experience with Palantir Foundry: Transforms and Pipeline Builder, Foundry Code Repositories, data modeling and curated datasets, and Foundry Ontology concepts (preferred for senior roles)
>> Experience in implementing data governance, access controls, and platform best practices
>> Strong coding skills in Python (mandatory); PySpark strongly preferred
>> Advanced SQL skills (complex joins, window functions, optimization)
>> Experience with batch and near-real-time data processing patterns
>> Strong understanding of data warehousing concepts, dimensional modeling, and data quality checks
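As an illustration of the window-function skill listed above, here is a minimal, self-contained sketch using Python's built-in sqlite3 module (the `sales` table, columns, and data are hypothetical examples, not taken from this role's systems):

```python
import sqlite3

# In-memory database with a small hypothetical sales table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100), ("east", 300), ("west", 200), ("west", 50)],
)

# Window function: rank each sale within its region, largest amount first
rows = conn.execute("""
    SELECT region, amount,
           ROW_NUMBER() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
""").fetchall()

for row in rows:
    print(row)
```

The same `PARTITION BY ... ORDER BY ...` pattern carries over to PySpark's `Window` API and to warehouse SQL dialects, which is the kind of transferable skill the requirement describes.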