Luma AI

via Ashby


Senior MLOps Engineer - Production

Anywhere
full-time
Posted 8/20/2025
Direct Apply
Key Skills:
Python
Kubernetes
Docker
Redis
S3-compatible Storage
PyTorch
CUDA
FFmpeg

Compensation

Salary Range

Not specified

Responsibilities

Ship new model architectures by integrating them into the inference engine and empower the product team to create groundbreaking features through user-friendly APIs. Build sophisticated scheduling systems to optimally leverage GPU resources while maintaining CI/CD pipelines for model processing and internal tooling.

Requirements

Strong generalist Python skills and extensive experience with Kubernetes and Docker are required. Experience with high-performance, large-scale ML systems and multimedia processing is a plus.

Full Description

Luma’s mission is to build multimodal AI to expand human imagination and capabilities. We believe that multimodality is critical for intelligence. To go beyond language models and build more aware, capable, and useful systems, the next step-function change will come from vision. So, we are working on training and scaling up multimodal foundation models for systems that can see and understand, show and explain, and eventually interact with our world to effect change.

Role & Responsibilities

Ship new model architectures by integrating them into our inference engine
Empower our product team to create groundbreaking features by developing user-friendly APIs and interaction patterns
Build sophisticated scheduling systems to optimally leverage our expensive GPU resources while meeting internal SLOs
Build and maintain CI/CD pipelines for processing and optimizing model checkpoints, platform components, and SDKs for internal teams to integrate into our products and internal tooling

Background

Strong generalist Python skills
Experience with queues, scheduling, traffic control, and fleet management at scale
Extensive experience with Kubernetes and Docker
Bonus points for experience with high-performance, large-scale ML systems (>100 GPUs) and/or PyTorch
Bonus points for experience with FFmpeg and multimedia processing

Tech stack

Must have: Python, Kubernetes, Redis, S3-compatible storage
Nice to have: PyTorch, CUDA, FFmpeg

This job posting was last updated on 8/21/2025
