LM Studio

via Ashby


C++ Systems Engineer

Anywhere
full-time
Posted 9/25/2025
Direct Apply
Key Skills:
C++
Systems Software
Concurrency
Memory Management
File Systems
Network Protocols
Performance Optimization
API Design
Debugging
Profiling
Resource Management
Scheduling
IPC
Build Infrastructure
User Experience
LLM Integration

Compensation

Salary Range

Not specified

Responsibilities

You will design, build, and optimize the core native runtime that powers LM Studio and its C++ libraries. Your work will involve system and library integration, implementing system-level code, and profiling and tuning execution paths for local AI software.

Requirements

Candidates should have 4+ years of experience building production C++ systems software across macOS or Linux. A deep understanding of concurrency, memory management, and performance optimization is essential.

Full Description

Element Labs

We aim to build delightful and potent creation tools for AI. We are a small team based in New York. Everyone on the team is IC-minded, intellectually curious, self-motivated, and loves software. We care deeply about our user community, and we strive to build canonical software that users and developers love.

Our products include the LM Studio desktop app, our developer SDKs (lmstudio-js and lmstudio-python), our CLI (lms), our MLX engine (mlx-engine) for M-chip Macs, venvstacks (which enables us to ship Python-based software), the collaboration Hub for individuals and teams, and more currently being built.

The Role

We are hiring a C++ Systems Software Engineer in New York City. You will design, build, and optimize the core native runtime that powers LM Studio and the C++ libraries powering the app and our APIs. You will work across our runtime, LLM engines, llama.cpp integrations, build infrastructure, and the future of our on-device AI software.

Your work centers on system and library integration: wiring our C++ runtime to GPU backends, vendor SDKs, and operating-system services to support user-facing applications. You will implement and harden system-level code (threading, memory, files, IPC, scheduling) and integrate platform acceleration paths (Metal, CUDA, Vulkan) across macOS, Windows, and Linux. You will profile, debug, and tune the execution paths that make local AI fast and dependable, and keep our software well architected and maintainable.

Responsibilities

- Contribute to the C++ runtime that powers LM Studio.
- Extend our LLM engine integrations (including llama.cpp) and build platform-aware performance features for desktop operating systems.
- Implement resilient IPC, resource management, and scheduling logic to support concurrent model execution.
- Improve our build, packaging, and release infrastructure for native components.
- Collaborate with the rest of the team to deliver cohesive and recognizable user experiences.
Qualifications

- 4+ years building production C++ systems software across macOS or Linux.
- Thinks in systems and knows how to reason about performance, reliability, and user experience end-to-end.
- Proven maturity designing internal and external APIs that are ergonomic, maintainable, and stable over time.
- C++11 (or newer) expertise with RAII as a default mindset and modern language/library proficiency.
- Deep knowledge of concurrency, memory management, file systems, and network protocols.
- Experience optimizing performance with profilers, tracing, and hardware counters.
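To give a flavor of the "RAII as a default mindset" and concurrency expertise the qualifications call for, here is a minimal, hypothetical sketch (not code from LM Studio): a mutex-guarded counter where `std::lock_guard` ties lock lifetime to scope, exercised by several worker threads.

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical example: a thread-safe counter. The mutex is acquired and
// released via RAII (std::lock_guard), so it is unlocked on every exit path.
class Counter {
public:
    void add(int n) {
        std::lock_guard<std::mutex> lock(mu_);  // released automatically at scope exit
        value_ += n;
    }
    int value() const {
        std::lock_guard<std::mutex> lock(mu_);
        return value_;
    }
private:
    mutable std::mutex mu_;  // mutable so the const accessor can lock it
    int value_ = 0;
};

// Run four threads that each increment the counter 1000 times.
int concurrent_total() {
    Counter c;
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([&c] { for (int j = 0; j < 1000; ++j) c.add(1); });
    for (auto& t : workers) t.join();  // with C++20, std::jthread makes joining RAII too
    return c.value();  // deterministically 4000 because add() is synchronized
}
```

Without the lock, the increments would race and the total would be unpredictable; the RAII guard makes the synchronized version both correct and exception-safe with one line.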

This job posting was last updated on 9/26/2025
