
Mondo

via Monster


Data Engineer - REQ#23157-1

Burbank, CA
Contract
Posted 10/12/2025
Key Skills:
Python
AWS Glue
AWS Lambda
AWS Kinesis
SQL
ETL/ELT pipelines
Data pipeline monitoring
Step Functions
CI/CD workflows
Data governance

Compensation

Salary Range

$104K – $114K a year
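The annual range shown here appears to be the hourly W2 rate from the full description below, annualized at standard full-time hours (40 hrs/week × 52 weeks = 2,080 hrs/year):

$50/hr × 2,080 hrs ≈ $104,000/year
$55/hr × 2,080 hrs ≈ $114,400/year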

Responsibilities

Develop and maintain scalable ETL/ELT data pipelines using AWS services, ensure data quality and governance, collaborate in agile teams, and support data architecture and models.

Requirements

3–5 years of experience as a Data Engineer or in a related role, with skills in SQL, Python, and AWS data services (Glue, Lambda, Kinesis, S3), plus familiarity with orchestration tools such as Airflow or Step Functions and with CI/CD workflows.

Full Description

Apply now: Data Engineer, Hybrid (Burbank, CA). This is a 4-month contract position with potential extension, starting September 30, 2025.

Job Title: Data Engineer
Location-Type: Hybrid (3 days onsite – Burbank, CA)
Start Date: September 30, 2025 (or 2 weeks from offer)
Duration: 4 months (Contract, potential extension)
Compensation Range: $50.00 – $55.00/hr W2

Job Description:

We are seeking a Data Engineer to join a product-oriented delivery team focused on building scalable, governed, and reusable data pipelines. This role is part of a collaborative pod environment, where engineers, product owners, architects, and analysts work together to deliver integrated and AI-ready data solutions. The Data Engineer will play a key role in implementing ETL/ELT pipelines, enabling data accessibility across applications, and ensuring compliance with governance and security standards.

Day-to-Day Responsibilities:

• Build & Maintain Pipelines: Develop ETL/ELT jobs and streaming pipelines using AWS services (Glue, Lambda, Kinesis, Step Functions). Write efficient SQL and Python scripts for ingestion, transformation, and enrichment. Monitor pipeline health, troubleshoot issues, and ensure SLA compliance.
• Support Data Architecture & Models: Implement physical schemas aligned with canonical and semantic standards. Collaborate with application pods to deliver product-specific pipelines.
• Ensure Data Quality & Governance: Apply validation rules, implement monitoring, and surface data quality issues. Tag, document, and register new datasets in the enterprise data catalog. Follow platform security and compliance practices (Lake Formation, IAM).
• Collaborate in Agile Pods: Participate in sprint ceremonies, backlog refinement, and design reviews. Work closely with developers, analysts, and data scientists to clarify requirements and unblock dependencies. Promote reuse of pipelines and shared services across pods.

Requirements:

Must-Haves:
• 3–5 years of experience as a Data Engineer or in a related role.
• Experience with SQL, Python, and AWS data services (Glue, Lambda, Kinesis, S3).
• Familiarity with orchestration tools such as Airflow or Step Functions, and CI/CD workflows.
• Problem-solving and debugging skills for pipeline operations.

Nice-to-Haves:
• Experience optimizing pipelines for both batch and streaming use cases.
• Knowledge of data governance practices, including lineage, validation, and cataloging.
• Exposure to modern data platforms such as Snowflake, Databricks, Redshift, or Informatica.
• Strong collaboration and mentoring skills; ability to influence across pods and domains.

Soft Skills:
• Collaborative mindset and ability to thrive in agile, cross-functional teams.
• Strong communication skills to work with both technical and non-technical stakeholders.

About the Company: Mondo

This job posting was last updated on 10/14/2025
