$150K - 200K a year
Design and optimize scalable data pipelines and transformations using Palantir Foundry and Databricks technologies, ensuring data quality and governance.
8+ years of data engineering experience with hands-on skills in Palantir Foundry, PySpark, Python, SQL, Databricks, cloud platforms, and CI/CD practices.
Detailed Job Description:

We are looking for a versatile Data Engineer with strong experience in Palantir Foundry and modern data engineering tools such as Databricks, PySpark, and Python. This role involves designing and building scalable data pipelines, managing transformations, and enabling analytics and operational workflows across enterprise platforms. You will work closely with business stakeholders, data scientists, and product teams to deliver high-quality, governed, and reusable data assets that power decision-making and advanced analytics.

Minimum years of experience: more than 10

Key Responsibilities
• Design, develop, and optimize data pipelines and transformations using Palantir Foundry (Code Workbook, Ontology, Objects) and Databricks (PySpark, SQL, Delta Lake); see the illustrative sketch below.
• Implement ETL/ELT workflows, ensuring data quality, lineage, and governance across platforms.
• Model ontologies and object structures in Foundry to support operational and analytical use cases.
• Collaborate with cross-functional teams to translate business requirements into scalable data solutions.
• Automate workflows and CI/CD for data code and Foundry artifacts; manage permissions and operational deployments.
• Optimize performance through partitioning, caching, and query tuning in PySpark and Databricks.
• Document datasets, transformations, and business logic for transparency and reuse.
• Ensure compliance with data security, privacy, and governance standards.

________________________________________

Required Qualifications
• 8+ years of experience in data engineering.
• Hands-on experience with Palantir Foundry (Code Workbook, Ontology, Objects).
• Strong proficiency in PySpark, Python, and SQL.
• Experience with Databricks, Delta Lake, and cloud platforms (Azure/AWS).
• Solid understanding of ETL/ELT, data modeling, and performance optimization.
• Familiarity with Git, CI/CD, and agile delivery practices.
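For candidates gauging fit, the following is a minimal sketch of the kind of PySpark/Delta Lake pipeline work described above, written for a Databricks-style environment. The application name, dataset paths, and column names are hypothetical placeholders, not details from this posting.

    from pyspark.sql import SparkSession, functions as F

    # On Databricks a `spark` session already exists; this line matters only
    # when running the sketch outside that environment.
    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Hypothetical raw input; the path and columns are placeholders.
    raw = spark.read.json("/mnt/raw/orders")

    clean = (
        raw.filter(F.col("order_id").isNotNull())      # simple data-quality gate
           .dropDuplicates(["order_id"])                # enforce one row per order
           .withColumn("order_date", F.to_date("ts"))   # derive a partition column
    )

    # Delta Lake write, partitioned so downstream queries can prune by date.
    (clean.write
          .format("delta")
          .mode("overwrite")
          .partitionBy("order_date")
          .save("/mnt/curated/orders"))

Partitioning on a low-cardinality date column, as shown, is one of the tuning levers the responsibilities list alongside caching and query tuning.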
This job posting was last updated on 11/25/2025