d-Matrix

via Ashby


Principal Software Engineer - R&D

Anywhere
full-time
Posted 7/29/2025
Direct Apply
Key Skills:
Generative AI
Software Development
Kernel Code Generation
Machine Learning
Computer Architecture
C/C++
Python
Linux
Data Structures
System Software
Technical Leadership
Algorithm Design
Embedded Systems
ML Frameworks
CUDA
DSPs
FPGAs

Compensation

Salary Range

Not specified

Responsibilities

The principal engineer will design the software stack for the AI compute engine and lead the R&D of LLM-based kernel code generation. This includes implementing operations for large language and multimodal models and integrating them into the software stack.

Requirements

Candidates should have a strong grasp of computer architecture and experience in tuning LLMs for code generation. A Master's or PhD in a related field and proficiency in C/C++ and Python are required.

Full Description

At d-Matrix, we are focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration. We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals who are passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.

Location: Hybrid, working onsite at our Santa Clara, CA headquarters 3-5 days per week.

The Role: Principal Software Engineer - R&D

What you will do:
As a principal engineer, you will be part of the team that designs the SW stack for our AI compute engine. As part of the software team, you will lead the research and development of LLM-based kernel code generation for the software kernel SDK for next-generation AI hardware. The d-Matrix software stack is a hybrid stack that utilizes a compiler as well as kernels. The kernels team designs and implements operations for large language and multimodal models, such as SIMD operations, matrix multiplications, and convolution operations, and integrates these operations to build kernels such as LayerNorms, convolution layers, attention heads, or KV caches (see the illustrative sketch after this description). These kernels are implemented in a combination of the d-Matrix HW ISA and/or ISAs for third-party IP-based processor units.

What you will bring:
Experience tuning LLMs to generate code. You have exposure to building software kernels for HW architectures. You possess an understanding of domain-specific hardware architectures (for example, GPUs, ML accelerators, SIMD vector processors, and DSPs) and how to map ML algorithms, such as nonlinear operations or complex data manipulation operations, to an accelerator architecture. You understand how to map computational graphs generated by AI frameworks (such as PyTorch or TensorFlow) to an underlying architecture. You also understand how to evaluate throughput and latency performance for such accelerators, as well as how to modify the algorithms for numerical accuracy. Your role will be to research and develop ways of generating kernel code through LLMs.

Minimum:
MS or PhD in Computer Science, Electrical Engineering, or related fields
Strong grasp of computer architecture, data structures, system software, and machine learning fundamentals
Experience at the technical R&D lead, manager, or senior manager level with software for AI accelerator HW and models for code generation
Experience in designing and fine-tuning generative AI LLM models for code generation and/or coding assistance, with a record of open-source code and/or publications in this field
Proficiency in C/C++ and Python development in a Linux environment using standard development tools
Self-motivated team player with a strong sense of ownership and leadership

Preferred:
Prior startup, small team, or incubation experience
Experience designing and implementing algorithms for specialized hardware such as FPGAs, DSPs, GPUs, and AI accelerators using libraries such as CUDA
Experience with development for embedded SIMD vector processors such as Tensilica
Experience with ML frameworks such as TensorFlow and/or PyTorch
Experience working with ML compilers and algorithms, such as MLIR, LLVM, TVM, and Glow
Work experience at a cloud provider or AI compute/subsystem company

Equal Opportunity Employment Policy
d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We’re committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication and a willingness to embrace challenges and learn together every day.

d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.
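For readers unfamiliar with the kernel-composition workflow described under "What you will do," the following is a minimal illustrative sketch, not taken from the posting or from d-Matrix code: it assumes hypothetical primitive ops (elementwise vector operations and a matrix multiplication, modeled here with NumPy) and shows how such primitives might be composed into a higher-level kernel such as a LayerNorm followed by a linear projection. On real accelerator hardware these primitives would lower to the HW ISA rather than NumPy.

import numpy as np

# Hypothetical primitive "ops" standing in for the building blocks the posting
# mentions (SIMD elementwise operations, matrix multiplications). All names are
# illustrative assumptions, not part of any real SDK.
def vector_mean(x, axis=-1):
    return x.mean(axis=axis, keepdims=True)

def vector_sub(a, b):
    return a - b

def vector_mul(a, b):
    return a * b

def vector_rsqrt(x, eps=1e-5):
    return 1.0 / np.sqrt(x + eps)

def matmul(a, b):
    return a @ b

# A higher-level kernel composed from the primitives above: LayerNorm followed
# by a linear projection, similar in spirit to how the posting describes
# integrating operations into kernels such as LayerNorms or attention heads.
def layernorm_linear(x, gamma, beta, w):
    mu = vector_mean(x)                                   # per-row mean
    centered = vector_sub(x, mu)                          # x - mean
    var = vector_mean(vector_mul(centered, centered))     # per-row variance
    normed = vector_mul(centered, vector_rsqrt(var))      # normalize
    out = vector_mul(normed, gamma) + beta                # scale and shift
    return matmul(out, w)                                 # linear projection

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 8))
    y = layernorm_linear(x, np.ones(8), np.zeros(8), rng.standard_normal((8, 3)))
    print(y.shape)  # (4, 3)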

This job posting was last updated on 7/30/2025
