$120K - $200K a year
Support and maintain production data pipelines, troubleshoot issues, and ensure data quality and reliability.
Extensive experience in data engineering and supporting production pipelines, plus proficiency in Python, SQL, and orchestration tools.
About Virtasant
Virtasant is a global technology services company with a network of over 4,000 technology professionals across 130+ countries. We specialize in cloud architecture, infrastructure, migration, and optimization, helping enterprises scale efficiently while maintaining cost control. Our clients range from Fortune 500 companies to fast-growing startups, relying on us to build high-performance infrastructure, optimize cloud environments, and enable continuous delivery at scale.

About the role
We're seeking a Senior Data Engineer - Operations to support our data platform, with a strong focus on triaging, debugging, and operating production data pipelines. This role sits within the Data Platform Operations pillar and is responsible for the day-to-day health, reliability, and correctness of ingestion pipelines, transformations, and analytics workflows. You'll work hands-on across ingestion, orchestration, dbt transformations, and medallion-layer data models, partnering closely with other data and analytics engineers and with DevOps to ensure timely resolution of data issues and smooth platform operations.

What You'll Do
- Operational Enablement & Automation: Build and maintain automation, scripts, and lightweight tooling to support operational workflows, including pipeline triage, data validation, backfills, reprocessing, and quality checks. Improve self-service and reduce manual operational toil.
- Pipeline Operations & Debugging: Own operational support for ingestion and transformation pipelines built on Airflow, Spark, dbt, Kafka, and Snowflake (or similar). Triage failed jobs, diagnose data issues, perform backfills, and coordinate fixes across the ingestion, transformation, and analytics layers.
- Observability, Data Quality & Incident Response: Monitor pipeline health, data freshness, and quality metrics across medallion layers. Investigate data anomalies, schema drift, and transformation failures, and drive incidents to resolution through root-cause analysis and corrective actions.
- Cross-Functional Operations: Act as the primary interface between Data Platform, Analytics Engineering, and downstream consumers during operational issues. Communicate impact, coordinate fixes, and ensure timely resolution of data incidents.

What We're Looking For
You must live in the contiguous United States and have all the documentation necessary to work under an independent contractor agreement. We cannot offer sponsorship or sponsorship transfers, so unfortunately we CANNOT consider candidates on H1B, OPT, EAD, or CPT visas.

Must-Have Experience
- 7+ years of experience in data engineering, analytics engineering, or software development, with significant experience operating and supporting production data pipelines
- Strong programming skills in Python and SQL on at least one major data platform (Snowflake, BigQuery, Redshift, or similar)
- Experience supporting schema evolution, data contracts, and downstream consumers in production environments
- Strong experience triaging, debugging, and maintaining dbt models, including understanding dependencies across medallion layers (bronze/silver/gold)
- Experience with streaming, distributed compute, or S3-based table formats (Spark, Kafka, Iceberg/Delta/Hudi)
- Experience with schema governance, metadata systems, and data quality frameworks
- Hands-on experience operating and debugging orchestration workflows (Airflow, Dagster, Prefect), including retries, backfills, and dependency management
- Solid grasp of CI/CD and Docker, plus 2 years of experience with AWS

Preferred / Nice-to-haves
- Experience participating in on-call rotations, incident response, or data operations teams
- Experience with data observability, data catalog, or metadata management tools
- Experience working with healthcare data (X12, FHIR)
- Understanding of authentication/authorization (OAuth2, JWT, SSO)

Why This Role is Exciting
This is a very fast-paced, high-pressure role where you can learn a lot. If that makes your eyes sparkle, please apply! If you build tenure in this role, the potential is endless: you'll become a subject-matter expert (SME) in many key areas for our client, and you'll work with Analytics Engineers and with downstream reporting tools.

Our recruitment process
- Recruiter interview (30 min)
- Technical interview (45 min)
- Screening interview with the client's hiring manager (30 min)
- Client technical interview (45 min)
We strive to move efficiently from step to step so the recruitment process is as fast as possible.

What we offer
- Fully remote within the contiguous United States, full-time (40 hours/week)
- Stable, long-term independent contractor agreement
- Work hours: US Eastern Time office hours