via Rippling
Lead and modernize data processing pipelines, ensuring reliability, quality, and operational excellence.
8+ years in production data systems, 2+ years in leadership, strong Spark and SQL skills, experience with system modernization and operational discipline.
About You

You're a hands-on leader who can run a team and still ship. You care about operational reliability, clear ownership boundaries, and measurable throughput. You modernize systems safely, reduce toil, and build habits that make pipelines dependable.

About the role

This role leads our Data Processing team. You'll own the pipelines and processing jobs that transform data for product and business use cases, with an emphasis on modernization, reliability, and data quality. Strong experience with Databricks (or similar Spark-based platforms) is required.

What you'll do

- Lead the Data Processing team: technical direction, delivery cadence, coaching, and hiring support (as needed).
- Own core processing systems end-to-end: design, implementation, quality, observability, and on-call readiness.
- Drive modernization of legacy processing: simplify flows, reduce failure modes, improve performance and cost.
- Build and operate pipelines (batch and/or streaming) with strong data quality, lineage, and backfill strategies.
- Establish SLAs/SLOs for key pipelines and improve monitoring, alerting, and incident response.
- Partner with application engineering, finance, and analytics to prioritize work and deliver dependable outputs.
- Raise the engineering bar via standards, PR reviews, runbooks, and pragmatic architecture decisions.

Qualifications

- 8+ years building production systems; 2+ years leading engineers or acting as a team tech lead.
- Strong experience with data processing systems (ETL/ELT, streaming, batch processing, orchestration).
- Hands-on experience with Databricks and Spark (PySpark/Scala) and strong SQL.
- Experience modernizing legacy systems safely and incrementally.
- Operational discipline: monitoring, alerting, incident response, and root cause analysis for pipelines.
- Ability to drive cross-team alignment and translate requirements into reliable delivery.

Nice to have

- Lakehouse patterns (Delta Lake, Iceberg, or Hudi) and data catalog/governance tools.
- .NET or Java expertise.
- Event-driven architectures (Kafka/Kinesis/PubSub) and streaming pipelines.
- FinTech, payments, billing, invoicing, or other high-volume transactional domains.
- Experience enabling self-serve analytics for business users.
- Logistics, cargo, or supply chain experience.
- Spanish language proficiency.

Compensation and benefits

We offer competitive pay and benefits designed to fuel our team's success.

- Health and Wellness: Medical, dental, and vision plans for you and your family
- Future-Ready: 401(k) with company match
- Work Life Balance: Generous flexible PTO program and paid holidays
- Grow With Us: Professional development opportunities

#LI-Remote

Does this role sound like the next step in your career? We'd love to hear from you! If you don't meet all of the requirements exactly, we encourage you to use your cover letter to tell us about your unique experience—talent comes from many places, and skills are transferable.

Our Commitment to an Extraordinary Work Environment

At CargoSprint, we value diversity and inclusivity. We strive to create a welcoming and supportive community for employees from all backgrounds. Regardless of your gender, sexual orientation, physical ability, religion, ethnicity, race, or age, you will find a place where you can thrive and be your authentic self. Our CargoSprint Recruitment Team personally reviews every application.
This job posting was last updated on 1/8/2026