NVIDIA

via LinkedIn

Senior AI System Engineer

Hillsboro, OR
Full-time
Posted 9/20/2025
Verified Source
Key Skills:
AI/ML system evaluation
Performance modeling
Python
C++
GPU computing (CUDA)
Deep learning frameworks (PyTorch, TRT-LLM, vLLM)
Computer architecture
Statistical performance analysis

Compensation

Salary Range

$148K - $288K a year

Responsibilities

Optimize AI inference deployment at datacenter scale by developing performance models and collaborating across research, software, and hardware teams.

Requirements

Master's degree or equivalent, 3+ years in AI/ML workload performance analysis, strong computer architecture and ML fundamentals, proficiency in Python and CUDA, and experience with deep learning frameworks.

Full Description

At NVIDIA, we are at the forefront of advancing the capabilities of artificial intelligence. We are seeking an ambitious and forward-thinking AI/ML System Performance Engineer to contribute to the development of next-generation inference optimizations and deliver industry-leading performance. In this role, you will investigate and prototype scalable inference strategies, driving down per-token latency and maximizing system throughput by applying cross-stack optimizations that span algorithmic innovations (e.g., attention variants, speculative decoding, inference-time scaling), system-level techniques (e.g., model sharding, pipelining, communication overlap), and hardware-level enhancements.

As NVIDIA makes significant strides in AI datacenters, our team plays a central role in maximizing the efficiency of our rapidly growing inference deployments and in establishing a data-driven approach to algorithmic improvements, hardware design, and system software development. We collaborate extensively with teams across deep learning research, framework development, compiler and systems engineering, and silicon architecture. Thriving in this high-impact, interdisciplinary environment demands not only technical proficiency but also a growth mindset and a pragmatic attitude, qualities that fuel our collective success in shaping the future of datacenter technology. Sample projects include Helix Parallelism and Disaggregated Inference.

What You'll Be Doing

• Optimize inference deployment by pushing the Pareto frontier of accuracy, throughput, and interactivity at datacenter scale.
• Develop high-fidelity performance models to prototype emerging algorithmic techniques and hardware optimizations, driving model-hardware co-design for generative AI.
• Prioritize features to guide the future software and hardware roadmap based on detailed performance modeling and analysis.
• Model the end-to-end performance impact of emerging GenAI workflows, such as agentic pipelines and inference-time compute scaling, to understand future datacenter needs.
• Keep up with the latest DL research and collaborate with diverse teams, including DL researchers, hardware architects, and software engineers.

What We Need To See

• A Master's degree (or equivalent experience) in Computer Science, Electrical Engineering, or a related field.
• 3+ years of hands-on experience in system evaluation of AI/ML workloads, or in performance analysis, modeling, and optimization for AI.
• Strong background in computer architecture, roofline modeling, queuing theory, and statistical performance analysis techniques (illustrative sketches of roofline modeling and queuing appear below).
• Solid understanding of ML fundamentals, model parallelism, and inference serving techniques.
• Proficiency in Python (and optionally C++) for simulator design and data analysis.
• Experience with GPU computing (CUDA).
• Experience with deep learning frameworks such as PyTorch, TRT-LLM, vLLM, or SGLang.
• A growth mindset and a pragmatic "measure, iterate, deliver" approach.

Ways To Stand Out From The Crowd

• Comfort defining metrics, designing experiments, and visualizing large performance datasets to identify resource bottlenecks.
• A proven track record of working in cross-functional teams spanning algorithms, software, and hardware architecture.
• The ability to distill complex analyses into clear recommendations for both technical and non-technical stakeholders.
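For candidates less familiar with roofline modeling, a minimal sketch of the idea behind that requirement: compare a kernel's arithmetic intensity (FLOPs per byte of memory traffic) against the machine's balance point to decide whether it is compute-bound or memory-bound. The peak-throughput and bandwidth figures and the matrix shapes below are illustrative placeholders, not the specifications of any particular GPU.

    PEAK_FLOPS = 1.0e15  # assumed peak FP16 throughput, FLOP/s (placeholder)
    PEAK_BW = 3.0e12     # assumed peak HBM bandwidth, bytes/s (placeholder)
    BYTES = 2            # FP16 element size

    def gemm_flops_and_traffic(m, n, k):
        """Ideal FLOPs and minimum memory traffic for an (m x k) @ (k x n) matmul."""
        flops = 2.0 * m * n * k                    # one multiply-accumulate = 2 FLOPs
        traffic = BYTES * (m * k + k * n + m * n)  # read A and B once, write C once
        return flops, traffic

    def roofline_time(m, n, k):
        """Lower-bound kernel time: the slower of compute and data movement."""
        flops, traffic = gemm_flops_and_traffic(m, n, k)
        return max(flops / PEAK_FLOPS, traffic / PEAK_BW)

    ridge = PEAK_FLOPS / PEAK_BW  # intensity at which the two roofs meet
    for m, n, k in [(1, 4096, 4096), (8192, 4096, 4096)]:  # decode-like vs. prefill-like
        flops, traffic = gemm_flops_and_traffic(m, n, k)
        intensity = flops / traffic
        bound = "compute" if intensity > ridge else "memory"
        print(f"M={m}: {intensity:7.1f} FLOP/byte (ridge {ridge:.0f}) -> {bound}-bound, "
              f"t >= {roofline_time(m, n, k) * 1e6:.1f} us")

On these assumed numbers, the batch-1 decode-style matmul sits far below the ridge point and is bandwidth-bound, while the large prefill-style matmul is compute-bound, the kind of distinction that motivates disaggregated prefill/decode serving.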
NVIDIA is widely considered to be one of the technology world's most desirable employers, with some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 148,000 USD - 235,750 USD for Level 3 and 184,000 USD - 287,500 USD for Level 4. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until September 1, 2025.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.

JR2002989
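For the queuing-theory requirement, a companion sketch: treating a single inference endpoint as a textbook M/M/1 queue makes the throughput-versus-interactivity trade-off concrete, since pushing utilization toward 1 raises throughput but inflates mean latency without bound. The service rate is an assumed placeholder, and real serving stacks batch requests with non-exponential service times, so this is a first-order illustration rather than a production model.

    def mm1_latency(arrival_rate, service_rate):
        """Mean time in system (wait + service) for an M/M/1 queue: 1 / (mu - lambda)."""
        if arrival_rate >= service_rate:
            raise ValueError("unstable queue: arrivals must stay below the service rate")
        return 1.0 / (service_rate - arrival_rate)

    MU = 100.0  # assumed service rate, requests/s (placeholder)
    for utilization in (0.50, 0.80, 0.90, 0.95, 0.99):
        lam = utilization * MU
        print(f"utilization {utilization:.2f}: throughput {lam:5.1f} req/s, "
              f"mean latency {mm1_latency(lam, MU) * 1e3:7.1f} ms")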

This job posting was last updated on 9/23/2025
