HeyGen

via LinkedIn


Software Engineer, AI Compute Infrastructure

Los Angeles, CA
Full-time
Posted 12/3/2025
Verified Source
Key Skills:
Kubernetes
Distributed Systems
Go
Microservices
Cloud Infrastructure
Kafka
PostgreSQL
Docker
CI/CD
OAuth2
Prometheus
OpenTelemetry

Compensation

Salary Range

$120K - $160K a year

Responsibilities

Build and optimize scalable AI infrastructure for generative video models, including GPU utilization, distributed job management, observability, and cloud orchestration.

Requirements

5+ years in AI infrastructure or HPC; proficiency in Python and C++; experience with Kubernetes, Ray, ML frameworks such as PyTorch, and GPU acceleration technologies.

Full Description

About HeyGen

At HeyGen, our mission is to make visual storytelling accessible to all. Over the last decade, visual content has become the preferred method of information creation, consumption, and retention. But the ability to create such content, in particular videos, continues to be costly and challenging to scale. Our ambition is to build technology that equips more people with the power to reach, captivate, and inspire audiences. Learn more at www.heygen.com. Visit our Mission and Culture doc here.

We are seeking a seasoned Software Engineer to build and scale the foundational compute infrastructure that powers our state-of-the-art AI models, from multimodal training data pipelines to high-throughput, low-latency video generation.

Responsibilities

You will be the core engineer responsible for building the robust, efficient, and scalable platform that enables our research and production teams to rapidly iterate on HeyGen's generative video models. Your contributions will directly impact model performance, developer productivity, and the final quality of every AI-generated video.

• Optimize GPU Utilization: Design and implement mechanisms to aggressively optimize GPU and cluster utilization across thousands of devices for inference, training, data processing, and large-scale deployment of our state-of-the-art video generation models.
• Develop a Large-Scale AI Job Framework: Build highly scalable, reliable frameworks for launching and managing massive, heterogeneous compute jobs, including multimodal high-volume data ingestion/processing, distributed model training, and continuous evaluation/benchmarking.
• Enhance Observability: Develop world-class observability, tracing, and visualization tools for our compute cluster to ensure reliability and diagnose performance bottlenecks (e.g., memory, bandwidth, communication).
• Accelerate Pipelines: Collaborate closely with AI researchers and AI engineers to integrate innovative acceleration techniques (e.g., custom CUDA kernels, distributed training libraries) into production-ready, scalable training and inference pipelines.
• Infrastructure Management: Champion the adoption and optimization of modern cloud and container technologies (Kubernetes, Ray) for elastic, cost-efficient scaling of our distributed systems.

Minimum Requirements

We are looking for a highly motivated engineer with deep experience operating and optimizing AI infrastructure at scale.

• Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
• 5+ years of full-time industry experience in large-scale MLOps, AI infrastructure, or HPC systems.
• Experience with data frameworks and standards such as Ray, Apache Spark, and LanceDB.
• Strong proficiency in Python and a high-performance language such as C++ for developing core infrastructure components.
• Deep understanding and hands-on experience with modern orchestration and distributed computing frameworks such as Kubernetes and Ray.
• Experience with core ML frameworks such as PyTorch, TensorFlow, or JAX.

Preferred Qualifications

• Master's or PhD in Computer Science or a related technical field.
• Demonstrated Tech Lead experience, driving projects from conceptual design through to production deployment across cross-functional teams.
• Prior experience building infrastructure specifically for generative AI models (e.g., diffusion models, GANs, or large language models) where cost and latency are critical.
• Proven background in building and operating large-scale data infrastructure (e.g., Ray, Apache Spark) to manage petabytes of multimodal data (video, audio, text).
• Expertise in GPU acceleration and deep familiarity with low-level compute programming, including CUDA, NCCL, or similar technologies for efficient inter-GPU communication.
What HeyGen Offers

• Competitive salary and benefits package.
• Dynamic and inclusive work environment.
• Opportunities for professional growth and advancement.
• Collaborative culture that values innovation and creativity.
• Access to the latest technologies and tools.

HeyGen is an Equal Opportunity Employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

This job posting was last updated on 12/5/2025
