Softcom Systems Inc

via LinkedIn


Palantir Foundry Data Engineer :: W2 Contract :: Remote

Anywhere
Contractor
Posted 11/24/2025
Verified Source
Key Skills:
Palantir Foundry
PySpark
Python
SQL
Databricks
Delta Lake
ETL/ELT
Data Modeling
CI/CD
Cloud Platforms (Azure/AWS)

Compensation

Salary Range

$150K - $200K a year

Responsibilities

Design and optimize scalable data pipelines and transformations using Palantir Foundry and Databricks technologies, ensuring data quality and governance.

Requirements

8+ years of data engineering experience with hands-on skills in Palantir Foundry, PySpark, Python, SQL, Databricks, cloud platforms, and CI/CD practices.

Full Description

Detailed Job Description: We are looking for a versatile Data Engineer with strong experience in Palantir Foundry and modern data engineering tools such as Databricks, PySpark, and Python. This role involves designing and building scalable data pipelines, managing transformations, and enabling analytics and operational workflows across enterprise platforms. You will work closely with business stakeholders, data scientists, and product teams to deliver high-quality, governed, and reusable data assets that power decision-making and advanced analytics.

Minimum years of experience*: >10 years

Key Responsibilities
• Design, develop, and optimize data pipelines and transformations using Palantir Foundry (Code Workbook, Ontology, Objects) and Databricks (PySpark, SQL, Delta Lake).
• Implement ETL/ELT workflows, ensuring data quality, lineage, and governance across platforms.
• Model ontologies and object structures in Foundry to support operational and analytical use cases.
• Collaborate with cross-functional teams to translate business requirements into scalable data solutions.
• Automate workflows and CI/CD for data code and Foundry artifacts; manage permissions and operational deployments.
• Optimize performance through partitioning, caching, and query tuning in PySpark and Databricks.
• Document datasets, transformations, and business logic for transparency and reuse.
• Ensure compliance with data security, privacy, and governance standards.

Required Qualifications
• 8+ years of experience in data engineering.
• Hands-on experience with Palantir Foundry (Code Workbook, Ontology, Objects).
• Strong proficiency in PySpark, Python, and SQL.
• Experience with Databricks, Delta Lake, and cloud platforms (Azure/AWS).
• Solid understanding of ETL/ELT, data modeling, and performance optimization.
• Familiarity with Git, CI/CD, and agile delivery practices.
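For illustration only (not part of the original posting): a minimal PySpark/Delta Lake sketch of the kind of pipeline work described above. The orders dataset, storage paths, and column names are hypothetical placeholders, and Delta Lake is assumed to be available, as it is on Databricks.

# Minimal ETL sketch: read raw data, cleanse, cache for reuse,
# and write a date-partitioned Delta table. Names/paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Extract: read raw JSON records (placeholder path)
raw = spark.read.format("json").load("/mnt/raw/orders")

# Transform: deduplicate and derive typed columns
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
)

# Cache because the same DataFrame feeds two downstream writes
orders.cache()

# Load: date-partitioned Delta table (partition pruning aids query tuning)
(orders.write.format("delta")
       .mode("overwrite")
       .partitionBy("order_date")
       .save("/mnt/curated/orders"))

# Reuse the cached DataFrame for a daily aggregate
daily_revenue = orders.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
(daily_revenue.write.format("delta")
              .mode("overwrite")
              .save("/mnt/curated/daily_revenue"))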

This job posting was last updated on 11/25/2025
