via Remote Rocketship
$90K - $130K a year
Design, implement, and maintain AI prompt engineering strategies and orchestration layers integrated into business workflows.
Requires 5+ years in AI/ML engineering with strong Python, SQL, and data science experience, plus familiarity with cloud and MLOps tooling.
Job Description:
• Own and maintain prompt engineering strategies, including prompt versioning, testing, and optimization
• Design AI workflows that combine models, prompts, tools, enterprise data, and business logic
• Implement AI orchestration layers to manage multi-step reasoning, decisioning, and actions
• Ensure AI systems integrate cleanly into business workflows, APIs, and user interfaces
• Apply guardrails to ensure safe, explainable, and compliant AI behavior
• Support building and maintaining production-grade deployment pipelines for AI solutions
• Ensure reliability, scalability, cost control, and latency optimization
• Implement monitoring and observability for AI systems (usage, performance, drift, failures)
• Define and enforce change control, versioning, rollback, and release management processes
• Collaborate closely with data scientists, actuaries, and other business functions
• Validate model behavior, outputs, and assumptions from a production and business-use perspective

Requirements:
• 5+ years of experience in AI/ML engineering, advanced analytics, or advanced software engineering roles
• Strong algorithmic and problem-solving skills
• Strong programming skills in Python/PySpark and strong SQL expertise
• Exposure to data science methods for validating AI models
• Palantir Foundry and AIP experience
• Hands-on experience with prompt engineering, prompt testing, and prompt lifecycle management
• Experience implementing RAG architectures and similar approaches
• Experience with AI orchestration frameworks, agentic patterns, and tool/function calling
• Strong understanding of model evaluation, calibration techniques, and monitoring
• Familiarity with model explainability, fairness, and robustness
• Experience with MLOps tooling and practices
• Experience working in cloud environments (AWS, Azure, or GCP)
• Experience integrating AI models into production systems with monitoring, logging, and alerting
• Experience working with large data sets on enterprise data platforms and distributed computing (Spark/Hive/Hadoop preferred)

Benefits:
• N/A
This job posting was last updated on 3/2/2026