Find your dream job faster with JobLogr
AI-powered job search, resume help, and more.
CargoSprint

via Remote Rocketship


Lead Engineer, Data Processing – Databricks

Anywhere
Full-time
Posted 1/6/2026
Verified Source
Key Skills:
Data processing systems (ETL/ELT, streaming, batch processing)
SQL
Python (PySpark)
System modernization
Operational discipline (monitoring, alerting, incident response)

Compensation

Salary Range

$120K - $200K a year

Responsibilities

Lead the data processing team, own core systems end-to-end, modernize legacy systems, and establish operational standards.

Requirements

8+ years in production systems, 2+ years in leadership, strong experience with data pipelines, SQL, and modern data architectures.

Full Description

Job Description:
• Lead the Data Processing team: technical direction, delivery cadence, coaching, and hiring support (as needed)
• Own core processing systems end-to-end: design, implementation, quality, observability, and on-call readiness
• Drive modernization of legacy processing: simplify flows, reduce failure modes, improve performance and cost
• Build and operate pipelines (batch and/or streaming) with strong data quality, lineage, and backfill strategies
• Establish SLAs/SLOs for key pipelines and improve monitoring, alerting, and incident response
• Partner with application engineering, finance, and analytics to prioritize work and deliver dependable outputs
• Raise the engineering bar via standards, PR reviews, runbooks, and pragmatic architecture decisions

Requirements:
• 8+ years building production systems; 2+ years leading engineers or acting as a team tech lead
• Strong experience with data processing systems (ETL/ELT, streaming, batch processing, orchestration)
• Hands-on experience with Databricks and Spark (PySpark/Scala) and strong SQL
• Experience modernizing legacy systems safely and incrementally
• Operational discipline: monitoring, alerting, incident response, and root cause analysis for pipelines
• Ability to drive cross-team alignment and translate requirements into reliable delivery

Nice to have:
• Lakehouse patterns (Delta Lake, Iceberg, or Hudi) and data catalog/governance tools
• .NET or Java expertise
• Event-driven architectures (Kafka/Kinesis/PubSub) and streaming pipelines
• FinTech, payments, billing, invoicing, or other high-volume transactional domains
• Experience enabling self-serve analytics for business users
• Logistics, cargo, or supply chain experience
• Spanish language proficiency
Benefits:
• Health and Wellness: Medical, dental, and vision plans for you and your family
• Future-Ready: 401(k) with company match
• Work-Life Balance: Generous flexible PTO program and paid holidays
• Grow With Us: Professional development opportunities

This job posting was last updated on 1/8/2026
