
NVIDIA

via Workday


Senior Software Engineer - Distributed Inference

Anywhere
Full-time
Posted 8/26/2025
Direct Apply
Key Skills:
Rust
C++
Python
Distributed systems
Kubernetes
Slurm
GPU inference
CUDA
Cluster-scale services

Compensation

Salary Range

$184K - $356.5K a year

Responsibilities

Build and maintain distributed model management systems and inference scheduling solutions on Kubernetes and Slurm for large-scale AI workloads.
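For context on what an inference scheduling solution can involve, here is a minimal, purely illustrative Rust sketch of least-loaded request routing across GPU workers. The types, node names, and in-memory worker list are invented for this example and do not reflect NVIDIA's or Dynamo's actual design; a real cluster-scale scheduler would track workers through Kubernetes or Slurm state rather than a local Vec.

// Toy illustration: route an inference request to the least-loaded GPU worker.
// All names here are hypothetical.
#[derive(Debug)]
struct GpuWorker {
    node: String,        // cluster node name
    outstanding: usize,  // requests currently in flight
}

#[derive(Debug)]
struct InferenceRequest {
    model: String,
    prompt: String,
}

/// Pick the worker with the fewest in-flight requests (least-loaded routing).
fn pick_worker(workers: &mut [GpuWorker]) -> Option<&mut GpuWorker> {
    workers.iter_mut().min_by_key(|w| w.outstanding)
}

fn dispatch(workers: &mut [GpuWorker], req: InferenceRequest) {
    match pick_worker(workers) {
        Some(w) => {
            w.outstanding += 1;
            println!(
                "routing {} request ({} chars) to {}",
                req.model,
                req.prompt.len(),
                w.node
            );
            // A real system would send the request over RPC and decrement
            // `outstanding` when the worker reports completion.
        }
        None => eprintln!("no workers available for {}", req.model),
    }
}

fn main() {
    let mut workers = vec![
        GpuWorker { node: "gpu-node-0".into(), outstanding: 2 },
        GpuWorker { node: "gpu-node-1".into(), outstanding: 0 },
    ];
    dispatch(
        &mut workers,
        InferenceRequest { model: "llm-7b".into(), prompt: "hello".into() },
    );
}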

Requirements

6+ years of professional systems software development experience, with strong Rust programming skills and deep knowledge of distributed systems, Kubernetes, and cluster orchestration.

Full Description

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

We are now looking for a Senior System Software Engineer to work on user-facing tools for the Dynamo Inference Server! NVIDIA is hiring software engineers for its GPU-accelerated deep learning software team, and we offer a remote-friendly work environment. Academic and commercial groups around the world are using GPUs to power a revolution in deep learning, enabling breakthroughs in problems ranging from LLMs and image classification to speech recognition and natural language processing. We are a fast-paced team building tools and software to make the design and deployment of new deep learning models easier and more accessible to data scientists.

What you’ll be doing:
Build and maintain distributed model management systems, including Rust-based runtime components, for large-scale AI inference workloads.
Implement inference scheduling and deployment solutions on Kubernetes and Slurm, while driving advances in scaling, orchestration, and resource management (a rough Slurm-submission sketch follows this description).
Collaborate with infrastructure engineers and researchers to develop scalable APIs, services, and end-to-end inference workflows.
Create monitoring, benchmarking, automation, and documentation processes to ensure low-latency, robust, and production-ready inference systems on GPU clusters.

What we need to see:
Bachelor’s, Master’s, or PhD in Computer Science, ECE, or a related field (or equivalent experience).
6+ years of professional systems software development experience.
Strong programming expertise in Rust (C++ and Python are a plus).
Deep knowledge of distributed systems, runtime orchestration, and cluster-scale services.
Hands-on experience with Kubernetes, container-based microservices, and integration with Slurm.
Proven ability to excel in fast-paced R&D environments and collaborate across functions.

Ways to stand out from the crowd:
Experience with inference-serving frameworks (e.g., Dynamo Inference Server, TensorRT, ONNX Runtime) and deploying/managing LLM inference pipelines at scale.
Contributions to large-scale, low-latency distributed systems (open source preferred) with proven expertise in high-availability infrastructure.
Strong background in GPU inference performance tuning, CUDA-based systems, and operating across cloud-native and hybrid environments (AWS, GCP, Azure).

NVIDIA has continuously reinvented itself over three decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing. We are widely considered to be the leader of AI computing, and one of the technology world’s most desirable employers. We have some of the most forward-thinking and committed people in the world working for us. If you're creative and autonomous, we want to hear from you!
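As a rough illustration of the Slurm integration mentioned above, the sketch below submits a hypothetical inference job through Slurm's sbatch CLI from Rust. The script path, job name, and resource flags are placeholders, and the only assumption is that sbatch is available on the submitting host; a production system would also parse the returned job ID and track the job's lifecycle.

use std::process::Command;

// Rough sketch of launching an inference job via Slurm's sbatch CLI.
// The launch script and flag values are illustrative placeholders.
fn submit_inference_job(model: &str) -> std::io::Result<()> {
    let output = Command::new("sbatch")
        .arg(format!("--job-name=infer-{model}"))
        .arg("--gres=gpu:1")       // request one GPU (illustrative)
        .arg("--time=00:30:00")
        .arg("--wrap")
        .arg(format!("./run_inference.sh {model}"))  // hypothetical launch script
        .output()?;

    // sbatch normally prints "Submitted batch job <id>" on success.
    println!("{}", String::from_utf8_lossy(&output.stdout));
    Ok(())
}

fn main() -> std::io::Result<()> {
    submit_inference_job("llm-7b")
}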
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until August 30, 2025.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

NVIDIA is the world leader in accelerated computing. NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society. Learn more about NVIDIA.

This job posting was last updated on 9/3/2025
