
Raynmaker Inc

via Breezy


Senior Data / ML / AI Engineer

Anywhere
Full-time
Posted 12/15/2025
Direct Apply
Key Skills:
ML Engineering
Large-scale Data Pipelines
LLMs and RAG systems
Python
Distributed Systems

Compensation

Salary Range

$120K - $200K a year

Responsibilities

Design and optimize AI and ML systems, including LLM deployment, data pipelines, and real-time decision-making.

Requirements

7+ years of ML engineering experience, with expertise in Python, vector databases, reinforcement learning, and large-scale ML system deployment.

Full Description

About Raynmaker

Raynmaker.ai is the AI-native sales engine purpose-built for small and mid-sized businesses. We empower local and franchise businesses to compete with enterprise-level capabilities through AI-driven lead targeting, next-best-action automation, and intuitive workflows that help them close more deals, faster. We're a venture-backed, fast-growing team committed to helping SMBs grow with confidence.

Role Overview

We're seeking a Senior Data / ML / AI Engineer to architect and build the intelligence layer of our autonomous sales platform. This role is responsible for designing, implementing, and optimizing the ML, LLM, scoring, retrieval, and agent-based systems that power live customer interactions and real business outcomes. You will work closely with technology leadership to convert AI concepts into scalable, production-grade systems, including RAG pipelines, reinforcement-learning-based decision systems, vectorized knowledge bases, custom LLM deployments, real-time streaming inference, and multi-tenant data pipelines. If you are a senior engineer who can bridge ML science, distributed systems, and pragmatic productionization, this role will put you at the core of a first-of-its-kind AI-native platform.

Key Responsibilities

LLM, RAG & Agent Systems
Design, develop, and optimize RAG pipelines with high-performance vector databases (Milvus, Zilliz, Pinecone, Weaviate).
Build scoring, ranking, and predictive models that drive real-time decision-making for sales and customer interactions.
Develop and refine agent-driven architectures, including tool calling, memory management, and multi-step reasoning flows.
Deploy, fine-tune, and optimize custom LLMs, ensuring cost efficiency and performance at scale.
Enrich internal knowledge bases and embeddings using advanced ML techniques.

Machine Learning Engineering & Data Infrastructure
Build large-scale data ingestion, transformation, and real-time streaming pipelines for model training and inference.
Implement reinforcement learning systems that improve agent behaviors over time.
Own the ML model lifecycle: development, evaluation, deployment, optimization, and monitoring.
Drive LLM cost optimization, including token efficiency, caching, and inference routing.

Production Systems & Platform Integration
Architect and maintain microservices exposing ML/LLM capabilities through secure APIs.
Work with real-time systems: voice, streaming, WebSockets, and other live interaction pipelines.
Ensure multi-tenant data isolation, configuration management, and performance scaling.
Collaborate cross-functionally to define data contracts, agent flows, and platform intelligence requirements.

Required Skills
7+ years of ML engineering experience in production environments.
Expert-level Python for ML workflows, backend services, and data pipelines.
Strong experience with vector databases (Milvus, Zilliz, Pinecone, Weaviate).
Experience building and deploying reinforcement learning systems.
Deep hands-on experience with LLMs, RAG, prompting, scoring models, and tool calling.
Experience with LangChain / LangGraph and modern LLM orchestration frameworks.
Proven ability to design and optimize large-scale ML data pipelines.
Production experience with real-time systems (voice, streaming, WebSockets).
Proficiency with SQL and NoSQL databases.
Strong understanding of microservices architecture, distributed systems, and event-driven workflows.
Proficiency with Docker & Kubernetes for deployment and orchestration.
Experience delivering custom LLM deployments in production.
Ability to collaborate with engineering leadership and turn concepts into shipped capabilities.

Nice to Have
Experience with streaming data systems (Kafka, Kinesis, Pulsar).
Experience with model monitoring, drift detection, and automated evaluation.
Background with the AWS ML stack (SageMaker, Bedrock, EKS, Lambda).
Experience with model compression, quantization, or accelerated inference.
Familiarity with CRM data patterns or real-time ingestion (Salesforce, HubSpot, Zoho).

We are committed to fostering a diverse, inclusive, and equitable workplace where all individuals are valued, respected, and empowered, regardless of their background, identity, or beliefs. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic.

This job posting was last updated on 12/16/2025
