d-Matrix


AI Software Applications Engineer

Anywhere
Full-time
Posted 12/11/2025
Key Skills:
AI/ML infrastructure
generative AI inference
performance optimization
hardware accelerators (GPUs, TPUs)
deployment and troubleshooting of AI models

Compensation

Salary Range

Not specified

Responsibilities

Support and optimize generative AI inference solutions, troubleshoot hardware and software issues, and collaborate with engineering teams.

Requirements

10+ years in AI/ML infrastructure, expertise in AI hardware accelerators, experience with AI frameworks, and customer-facing support skills.

Full Description

At d-Matrix, we are focused on unleashing the potential of generative AI to power the transformation of technology. We are at the forefront of software and hardware innovation, pushing the boundaries of what is possible. Our culture is one of respect and collaboration. We value humility and believe in direct communication. Our team is inclusive, and our differing perspectives allow for better solutions. We are seeking individuals who are passionate about tackling challenges and driven by execution. Ready to come find your playground? Together, we can help shape the endless possibilities of AI.

Location

Hybrid, working onsite at our Santa Clara, CA headquarters 3 days per week.

AI Software Applications Engineering

d-Matrix is seeking an experienced AI Applications Engineer to drive the successful deployment and support of d-Matrix's cutting-edge AI products and solutions, specifically in the realm of generative AI inference and AI/ML software support. In this highly technical role, you will work closely with customers and internal teams to resolve complex software, hardware, and firmware challenges related to AI workloads. The ideal candidate will have expertise in AI/ML infrastructure, with a focus on inference solutions and performance optimization for data center environments. This position requires a strong blend of engineering acumen and customer-facing skills to ensure the seamless deployment and continued success of our products.

Responsibilities:
• Provide expert guidance and support to customers deploying generative AI inference models, including assisting with integration, troubleshooting, and optimizing AI/ML software stacks on d-Matrix hardware.
• Perform functional and performance validation testing, ensuring that AI models run efficiently and meet customer expectations.
• Evaluate throughput and latency performance for d-Matrix accelerators, profile workloads to identify bottlenecks, and optimize performance (including quantization, custom kernel development, and so on).
• Collaborate with internal engineering and product teams to produce developer guides, technical notes, and other supporting materials that ease the adoption of d-Matrix AI/ML solutions.

Required Qualifications
• Bachelor's or Master's degree in Electrical Engineering, Computer Engineering, Computer Science, or a related field with 10+ years of experience.
• In-depth knowledge and hands-on experience with generative AI inference at scale, including the integration and deployment of AI models in production environments.
• Experience with automation tools and scripting languages (Linux or Windows shell scripting, Python, Go) to streamline deployment, monitoring, and issue resolution processes.
• Hands-on experience with AI/ML infrastructure accelerators (e.g., GPUs, TPUs) and expertise in optimizing performance for generative AI inference workloads.
• Ability to communicate complex technical concepts to diverse audiences, from developers to business stakeholders.

Preferred Qualifications
• Prior experience in customer-facing roles for enterprise-level AI and data center products, with a focus on AI/ML software and generative AI inference with GPUs or accelerators.
• Understanding of domain-specific hardware architectures (for example, GPUs, ML accelerators, SIMD vector processors, and DSPs) and how to map ML algorithms to an accelerator architecture.
• Strong analytical skills with a proven track record of solving complex problems in AI/ML systems, including performance optimization and troubleshooting in AI/ML frameworks.
• Extensive experience with the deployment of AI/ML frameworks such as PyTorch, OpenAI Triton, and vLLM, and familiarity with container orchestration platforms like Kubernetes.
• Excellent communication and presentation skills, with a demonstrated ability to guide customers through complex AI/ML system integration and troubleshooting.

Equal Opportunity Employment Policy

d-Matrix is proud to be an equal opportunity workplace and affirmative action employer. We're committed to fostering an inclusive environment where everyone feels welcomed and empowered to do their best work. We hire the best talent for our teams, regardless of race, religion, color, age, disability, sex, gender identity, sexual orientation, ancestry, genetic information, marital status, national origin, political affiliation, or veteran status. Our focus is on hiring teammates with humble expertise, kindness, dedication, and a willingness to embrace challenges and learn together every day.

d-Matrix does not accept resumes or candidate submissions from external agencies. We appreciate the interest and effort of recruitment firms, but we kindly request that individuals interested in opportunities with d-Matrix apply directly through our official channels. This approach allows us to streamline our hiring processes and maintain a consistent and fair evaluation of all applicants. Thank you for your understanding and cooperation.

This job posting was last updated on 12/15/2025
