
Confidential Company

via Truemote


Sr. Data Engineer (MySQL Aurora to NoSQL MongoDB) - Canadian Citizens Only

Anywhere
full-time
Posted 10/13/2025
Verified Source
Key Skills:
Python
MySQL
React.js
RESTful APIs
PostgreSQL
Git
Docker
AWS (basic)
TypeScript
JavaScript (ES6+)
SCSS/Sass
CI/CD concepts

Compensation

Salary Range

$120K - $180K a year

Responsibilities

Lead the modernization of data infrastructure by transitioning from monolithic to microservices architecture, building and optimizing ETL pipelines, designing domain-driven data models, managing SQL and NoSQL databases, and ensuring data governance and compliance.
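
The posting itself contains no code; purely as an illustration of the SQL-to-NoSQL movement named in the title, here is a minimal Python sketch that copies rows from a MySQL/Aurora table into a MongoDB collection. The host names, credentials, the members table, and the profiles collection are hypothetical placeholders, not details from the posting.

```python
import pymysql
from pymongo import MongoClient

# Source: Aurora is MySQL-compatible, so a standard MySQL driver works.
# Host, credentials, and schema below are placeholders.
mysql_conn = pymysql.connect(
    host="aurora-cluster.example.com",
    user="etl_user",
    password="change-me",
    database="app",
    cursorclass=pymysql.cursors.DictCursor,
)

# Target: a MongoDB collection owned by the corresponding domain service.
mongo = MongoClient("mongodb://mongo.example.com:27017")
profiles = mongo["app"]["profiles"]

with mysql_conn.cursor() as cur:
    cur.execute("SELECT id, name, email, created_at FROM members")
    for row in cur:
        # Reshape each flat relational row into a document; a real
        # migration would also fold in related tables and verify counts.
        profiles.update_one(
            {"_id": row["id"]},
            {"$set": {
                "name": row["name"],
                "email": row["email"],
                "createdAt": row["created_at"],
            }},
            upsert=True,  # idempotent, so the job can be re-run safely
        )

mysql_conn.close()
mongo.close()
```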

Requirements

5+ years of data engineering experience with expertise in AWS Glue, Apache Spark/PySpark, dbt, SQL and NoSQL databases (MySQL, Aurora, Redshift, MongoDB), Python programming, data modeling, cloud data architecture, compliance, and DevOps practices; Canadian citizenship required.

Full Description

Sr. Data Engineer (Monolith-to-Microservices): SQL (MySQL/Aurora/Redshift) to NoSQL (MongoDB/DocDB). Full-time position with vacation, holidays, and a company benefits program. Remote (not Quebec). Must be a Canadian citizen.

The company is striving to be the #1 software for fitness & wellness businesses. They offer cloud-based business management software used by leading fitness gyms, yoga studios, and wellness centers. The platform is trusted by more than 5,000 businesses and has more than 15 million users. From a bootstrap start-up to one of North America's fastest-growing SaaS companies, the company helps entrepreneurs grow, manage, and streamline their businesses to drive more revenue with reliable, intuitive software. The company is experiencing dramatic growth and is building a suite of four AI modules that will streamline client communications, bookings & payments, growth marketing, and business owner coaching. The company is headquartered in Toronto, Ontario, Canada.

Monolith-to-Microservices Transition: Databases
1) SQL: MySQL, Aurora, Redshift
2) NoSQL: MongoDB, DocumentDB

Tech Stack
• LLMs: OpenAI, Gemini
• Voice: Twilio, LiveKit, STT/TTS
• Backend: Python, Node/TypeScript, REST/GraphQL
• Data: Aurora Serverless (MySQL), Redshift, S3/Glue
• Infrastructure: Docker, Kubernetes, GitHub Actions
• Messaging: Twilio, SendGrid, FCM, SES
• Integrations: Nuvei/Paragon payments, Salesforce/HubSpot/Zapier

This seasoned Senior Data Engineer will help lead the modernization of our data infrastructure as we transition from a tightly coupled monolithic system to a scalable, microservices-based architecture. The role is central to decoupling legacy database structures, enabling domain-driven service ownership, and powering real-time analytics, operational intelligence, and AI initiatives across our platform. The position works closely with solution architects and domain owners to design resilient pipelines and data models that reflect business context and support scalable, secure, and auditable data access for internal and external consumers.

Key Responsibilities
• Monolith-to-Microservices Data Transition: Lead the decomposition of monolithic database structures into domain-aligned schemas that enable service independence and ownership.
• Pipeline Development & Migration: Build and optimize ETL/ELT workflows using Python, PySpark/Spark, AWS Glue, and dbt, including schema/data mapping and transformation from on-prem and cloud legacy systems into data lake and warehouse environments (a rough illustrative sketch follows this description).
• Domain Data Modeling: Define logical and physical domain-driven data models (star/snowflake schemas, data marts) to serve cross-functional needs, BI, operations, streaming, and ML.
• Legacy Systems Integration: Design strategies for extracting, validating, and restructuring data from legacy systems with embedded logic and incomplete normalization.
• Database Management: Administer, optimize, and scale SQL (MySQL, Aurora, Redshift) and NoSQL (MongoDB) platforms to meet high-availability and low-latency needs.
• Cloud & Serverless ETL: Leverage AWS Glue Catalog, Crawlers, Lambda, and S3 to manage and orchestrate modern, cost-efficient data pipelines.
• Data Governance & Compliance: Enforce best practices around cataloging, lineage, retention, access control, and security, ensuring compliance with GDPR, CCPA, PIPEDA, and internal standards.
• Monitoring & Optimization: Implement observability (CloudWatch, logs, metrics) and performance tuning across Spark, Glue, and Redshift workloads.
• Stakeholder Collaboration: Work with architects, analysts, product managers, and data scientists to define, validate, and prioritize requirements.
• Documentation & Mentorship: Maintain technical documentation (data dictionaries, migration guides, schema specs) and mentor junior engineers in engineering standards.

REQUIRED Qualifications
• 5+ years in data engineering with a proven record in modernizing legacy data systems and driving large-scale migration initiatives.
• Cloud ETL Expertise: Proficient in AWS Glue, Apache Spark/PySpark, and modular transformation frameworks like dbt.
• Data Modeling: Strong grasp of domain-driven design, bounded contexts, and BI-friendly modeling approaches (star/snowflake/data vault).
• Data Migration: Experience with full-lifecycle migrations, including schema/data mapping, reconciliation, and exception handling.
• Databases: SQL (MySQL, Aurora, Redshift) and NoSQL (MongoDB, DocumentDB).
• Programming: Strong Python skills for data wrangling, pipeline automation, and API interactions.
• Data Architecture: Hands-on with data lakes, warehousing strategies, and hybrid cloud data ecosystems.
• Compliance & Security: Track record implementing governance, data cataloging, encryption, retention, lineage, and RBAC.
• DevOps Practices: Git, CI/CD pipelines, Docker, and test automation for data pipelines.
• Must be a Canadian citizen. Remote, but we cannot hire anyone living in Quebec. The ideal candidate lives near Toronto on EDT.

PREFERRED Qualifications
• A degree, plus experience with streaming data platforms like Kafka or Kinesis, or CDC tools such as Debezium.
• Familiarity with orchestration platforms like Airflow or Prefect.
• Background in analytics, data modeling for AI/ML pipelines, or ML-ready data preparation.
• Understanding of cloud-native data services (AWS Glue, Redshift, Snowflake, BigQuery, etc.).
• Strong written and verbal communication skills; a self-starter able to navigate ambiguity and legacy-system complexity.
• Exposure to generative AI, LLM fine-tuning, or feature store design is a plus.
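
As a concrete companion to the Pipeline Development & Migration bullet above, the sketch below shows one plausible shape of such a job in PySpark: extract a table from Aurora MySQL over JDBC, apply a light transformation, and land partitioned Parquet in S3 for the Glue Catalog to pick up. Everything here (URL, credentials, the members table, the bucket path) is a placeholder assumption; it is not the company's actual pipeline, and running it would also require the MySQL JDBC driver on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("members-etl").getOrCreate()

# Extract: read the source table from Aurora MySQL over JDBC.
members = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://aurora-cluster.example.com:3306/app")
    .option("dbtable", "members")
    .option("user", "etl_user")
    .option("password", "change-me")
    .load()
)

# Transform: dedupe on the key and derive a partition column.
cleaned = (
    members.dropDuplicates(["id"])
    .withColumn("signup_date", F.to_date("created_at"))
)

# Load: write partitioned Parquet to the lake, where a Glue crawler
# can catalog it for Redshift Spectrum or downstream data marts.
(
    cleaned.write.mode("overwrite")
    .partitionBy("signup_date")
    .parquet("s3://example-datalake/curated/members/")
)

spark.stop()
```

Partitioning by date keeps the S3 layout predictable for crawlers and lets warehouse queries prune partitions, which is the usual motivation for this lake-then-warehouse arrangement.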

This job posting was last updated on 10/14/2025
