Find your dream job faster with JobLogr
AI-powered job search, resume help, and more.
Optomi

via LinkedIn


Senior AI Quality Engineer - (LLM / Agentic Systems) - 100% Remote

Anywhere
Full-time
Posted 3/9/2026
Verified Source
Key Skills:
test automation
software quality engineering
Python programming

Compensation

Salary Range

$85K - $130K a year

Responsibilities

Design and implement testing frameworks and validate AI-driven systems for reliability and performance.

Requirements

7+ years in software quality engineering with 2+ years in AI/ML systems, strong programming skills, and experience with distributed systems and CI/CD.

Full Description

Senior AI Quality Engineer - (LLM / Agentic Systems) - 100% Remote

Optomi, in partnership with a major leader in the airline and travel technology industry, is seeking an Agentic QA Engineer to help ensure the reliability, accuracy, and scalability of next-generation AI systems powering modern software delivery and operational workflows. This role will focus on testing and validating generative AI and agent-based systems, including complex multi-agent architectures responsible for automation, decision-making, and workflow orchestration.

The ideal candidate will design and implement end-to-end testing strategies, build reusable test frameworks, and validate the performance and resilience of AI-driven systems operating in production environments. You will collaborate closely with AI engineers, platform engineers, MLOps teams, and operations leaders to ensure agentic systems operate reliably at scale while meeting strict performance, safety, and compliance requirements.

What the Right Professional Will Enjoy!
• The opportunity to work on cutting-edge AI and multi-agent systems that automate and enhance complex enterprise workflows
• Partnering with AI engineers, data scientists, and platform teams to bring generative AI systems from development into production
• Designing testing frameworks for next-generation autonomous systems, including planner-executor models and multi-agent orchestration
• Building evaluation pipelines that measure accuracy, reliability, safety, and cost performance of AI-driven applications
• Working within a highly collaborative engineering environment focused on innovation, scalability, and operational excellence

Apply Today If Your Background Includes

• 7+ years of experience in software quality engineering or test automation, including experience designing testing frameworks
• 2+ years of experience working with AI/ML systems, generative AI applications, or LLM-based platforms
• Strong programming experience with Python, TypeScript, or JavaScript for building test harnesses and automation frameworks
• Experience evaluating LLM outputs using techniques such as semantic similarity, embeddings, or traditional NLP evaluation metrics
• Background testing distributed systems, including resiliency, latency profiling, and fault tolerance
• Familiarity with agent orchestration frameworks such as LangChain, LangGraph, LlamaIndex, DSPy, or similar tooling
• Experience working with CI/CD pipelines and modern observability platforms (Datadog, Prometheus, OpenTelemetry, Grafana)
• Understanding of security, safety, and compliance considerations for AI systems, including PII handling and model guardrails
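To illustrate the semantic-similarity evaluation technique mentioned in the requirements, here is a minimal Python sketch. The embedding vectors and the 0.8 threshold are illustrative assumptions (not part of the posting); in a real pipeline the vectors would come from an embedding model.

```python
# Sketch of semantic-similarity scoring for LLM output evaluation.
# Toy, hard-coded embedding vectors keep the example self-contained;
# the 0.8 pass threshold is an arbitrary assumption.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def passes_semantic_check(candidate: list[float],
                          reference: list[float],
                          threshold: float = 0.8) -> bool:
    """Flag an LLM answer whose embedding drifts too far from a reference."""
    return cosine_similarity(candidate, reference) >= threshold

# Toy embeddings: the first candidate is nearly parallel to the
# reference (an on-topic answer), the second points elsewhere.
ref = [0.9, 0.1, 0.3]
good = [0.85, 0.15, 0.28]
bad = [0.1, 0.9, -0.2]

print(passes_semantic_check(good, ref))  # True  (similar answer)
print(passes_semantic_check(bad, ref))   # False (off-topic answer)
```

The same pass/fail check slots naturally into a CI test suite, gating deployments on regression of answer quality.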

This job posting was last updated on 3/10/2026
