Find your dream job faster with JobLogr
AI-powered job search, resume help, and more.
Try for Free

The Public Interest Company

via Ashby

All our jobs are verified from trusted employers and sources. We connect to legitimate platforms only.

Data Engineer

Anywhere
Full-time
Posted 2/13/2026
Direct Apply
Key Skills:
SQL
Python
ETL/ELT design

Compensation

Salary Range

$70K - $90K a year

Responsibilities

Design, build, and maintain healthcare data pipelines, ensuring data quality and collaborating with cross-functional teams.

Requirements

1-3 years of data engineering experience, proficiency in SQL and Python, foundational healthcare data knowledge, and familiarity with cloud platforms like Databricks and Snowflake.

Full Description

About the Company

The Public Interest Company is a comprehensive solution for identifying and recovering third-party liability for health plans, risk-bearing provider groups, and self-funded employers. We combine AI and a legal tech platform to identify and recover claims that should have been paid by third-party payers. We handle the entire recovery process, all while protecting members and streamlining operations. We empower healthcare organizations to maximize their financial performance and recoup dollars toward care delivery, without adding administrative burden.

Role Overview

In this role, you'll support the ingestion and transformation of raw healthcare claims data into standardized, actionable datasets that help identify and recover third-party liability opportunities. You'll contribute to the design, development, and maintenance of ETL pipelines, while ensuring the accuracy and reliability of the resulting data.

The ideal candidate has a foundational understanding of healthcare data (claims-specific experience is a plus) and is familiar with common data engineering principles. You should be comfortable working in a fast-paced startup environment, collaborating with team members from across the business, and willing to contribute to a wide range of data-related projects.

All downstream business operations depend on high-quality data. In this role, you will be a critical contributor to ensuring that data is accurate, timely, and actionable.

Responsibilities

Data pipelines: Design, build, and maintain data pipelines using SQL, Python, dbt, and Databricks.
Data validation: Conduct regular data quality checks to ensure accuracy and integrity for downstream users.
Data troubleshooting: Proactively identify data issues and develop custom solutions.
Process optimization: Monitor pipeline performance and tune processes to improve efficiency.
Cross-functional collaboration: Work with cross-functional teams to understand data requirements and incorporate them into the data framework.
Team engagement: Actively participate in discussions, offering input and feedback to drive continuous improvement.

Qualifications

1–3 years of experience in data engineering, analytics, or related technical roles.
Solid understanding of ETL/ELT concepts and data pipeline design.
Proficiency in SQL; Python experience is a plus.
Foundational understanding of healthcare data; experience with claims data is a plus.
Familiarity with modern cloud data platforms (e.g., Databricks, Snowflake).
Strong analytical and problem-solving skills with attention to detail.
Ability to work independently while collaborating effectively within a team.
Previous startup experience is a plus.

This job posting was last updated on 2/16/2026

Ready to have AI work for you in your job search?

Sign up for free and start using JobLogr today!

Get Started »