You will run and manage open-source models efficiently while ensuring high performance and stability across resources. Collaborating with engineers, you will implement scalable and reliable model serving solutions.
Experience with model serving platforms and proficiency in GPU orchestration are essential for this role. You should also have the ability to monitor latency and scale systems efficiently.
Transform Language Models into Real-World Applications

We're building AI systems for a global audience. In this era of AI transition, this new project team focuses on building applications that create real-world impact and reach the widest possible audience. This is a global role with a hybrid work arrangement, combining flexible remote work with in-office collaboration at our HQ. You'll work closely with regional teams across product, engineering, operations, infrastructure, and data to build and scale impactful AI solutions.

Why This Role Matters

You'll fine-tune state-of-the-art models, design evaluation frameworks, and bring AI features into production. Your work ensures our models are not only intelligent, but also safe, trustworthy, and impactful at scale.

What You'll Do

- Run and manage open-source models efficiently, optimizing for cost and reliability
- Ensure high performance and stability across GPU, CPU, and memory resources
- Monitor and troubleshoot model inference to maintain low latency and high throughput
- Collaborate with engineers to implement scalable and reliable model serving solutions

What It's Like

- You like ownership and independence
- You believe clarity comes from action: prototype, test, and iterate without waiting for perfect plans
- You stay calm and effective in startup chaos; shifting priorities and building from zero don't faze you
- You have a bias for speed: it's better to deliver something valuable now than a perfect version much later
- You see feedback and failure as part of growth; you're here to level up
- You possess humility, hunger, and hustle, and lift others up as you go
Requirements

- Experience with model serving platforms such as vLLM or HuggingFace TGI
- Proficiency in GPU orchestration using tools like Kubernetes, Ray, Modal, RunPod, or Lambda Labs
- Ability to monitor latency and costs, and to scale systems efficiently with traffic demands
- Experience setting up inference endpoints for backend engineers

What You'll Get

- Flat structure and real ownership
- Full involvement in direction-setting and consensus decision making
- Flexibility in work arrangement
- High-impact role with visibility across product, data, and engineering
- Top-of-market compensation and performance-based bonuses
- Global exposure to product development
- Lots of perks: housing rental subsidies, a quality company cafeteria, and overtime meals
- Health, dental, and vision insurance
- Global travel insurance (for you and your dependents)
- Unlimited, flexible time off

Our Team & Culture

We're a dense, high-performance team focused on high-quality work and global impact. We behave like owners and value speed, clarity, and relentless ownership. If you're hungry to grow and care deeply about excellence, join us.

About BJAK

BJAK is Southeast Asia's #1 insurance aggregator with 8M+ users, fully owned by its employees. Headquartered in Malaysia and operating in Thailand, Taiwan, and Japan, we help millions of users access transparent and affordable financial protection through Bjak.com. We simplify complex financial products through cutting-edge technologies, including APIs, automation, and AI, to build the next generation of intelligent financial systems.

If you're excited to build real-world AI systems and grow fast in a high-impact environment, we'd love to hear from you.
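To give candidates a feel for the latency-monitoring work described above, here is a minimal, hypothetical sketch. The `LatencyTracker` class, its nearest-rank percentile method, and the synthetic latencies are illustrative assumptions, not part of BJAK's actual stack; in practice `timed()` would wrap calls to an inference endpoint such as one served by vLLM.

```python
import math
import time

class LatencyTracker:
    """Records per-request latencies (seconds) and reports percentiles,
    the kind of signal used to decide when to scale inference replicas."""

    def __init__(self):
        self.samples = []

    def record(self, seconds):
        self.samples.append(seconds)

    def percentile(self, p):
        # Nearest-rank percentile over the recorded samples.
        ordered = sorted(self.samples)
        k = max(0, math.ceil(p / 100 * len(ordered)) - 1)
        return ordered[k]

    def timed(self, fn, *args, **kwargs):
        # Time a single call (e.g. a request to an inference endpoint)
        # and record its latency.
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        self.record(time.perf_counter() - start)
        return result

# Example with synthetic latencies (in milliseconds):
tracker = LatencyTracker()
for ms in [120, 95, 110, 450, 105]:
    tracker.record(ms / 1000)
p95 = tracker.percentile(95)  # the slow outlier dominates the tail
```

A real deployment would export such percentiles to a metrics system rather than compute them in-process, but the tail-latency signal being tracked is the same.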
This job posting was last updated on 9/29/2025