Meshy LLC

via DailyRemote


Junior Research Infrastructure Engineer

Anywhere
Full-time
Posted 2/8/2026
Verified Source
Key Skills:
Python
Distributed Systems
Data Pipelines

Compensation

Salary Range

$40K - $70K a year

Responsibilities

Design and implement distributed data pipelines and infrastructure, and build internal tools for research teams.

Requirements

Requires 2+ years of software engineering experience, familiarity with distributed frameworks and cloud platforms, and frontend development experience with React or Next.js.

Full Description

About Meshy

Headquartered in Silicon Valley, Meshy is the leading 3D generative AI company on a mission to Unleash 3D Creativity by transforming the content creation pipeline. Meshy makes it effortless for both professional artists and hobbyists to create unique 3D assets, turning text and images into stunning 3D models in just minutes. What once took weeks and cost $1,000 now takes just 2 minutes and $1.

Our world-class team of top experts in computer graphics, AI, and art includes alumni from MIT, Stanford, and Berkeley, as well as veterans from Nvidia and Microsoft. Our talent spans the globe, with team members distributed across North America, Asia, and Oceania, fostering a diverse and innovative multi-regional culture focused on solving global 3D challenges.

Meshy is trusted by top developers, backed by premier venture capital firms like Sequoia and GGV, and has raised $52 million in funding. Meshy is the market leader, ranked No. 1 in popularity among 3D AI tools (according to 2024 a16z Games) and No. 1 in website traffic (according to SimilarWeb, with 3 million monthly visits). The platform has over 5 million users and has generated 40 million models.

Founder and CEO Yuanming (Ethan) Hu earned his Ph.D. in graphics and AI from MIT, where he developed the acclaimed Taichi GPU programming language (27K stars on GitHub, used by 300+ institutes). His work is highly influential, earning an honorable mention for the SIGGRAPH 2022 Outstanding Doctoral Dissertation Award and over 2,700 research citations.

About this role

We are seeking a product-minded Junior Research Infrastructure Engineer to join our growing team. This is a "70/30" role: you will spend 70% of your time on hardcore backend and infrastructure, tackling complex distributed systems, and 30% of your time building intuitive internal tools that turn our platform capabilities into a seamless product experience for researchers.

You will design, build, and operate distributed data systems that power large-scale ingestion, processing, and transformation of datasets used for AI model training. This is a versatile role: you'll own end-to-end pipelines, ensure data quality and scalability, and collaborate closely with ML researchers to prepare diverse datasets for cutting-edge model training. You'll thrive in our fast-paced startup environment, where problem-solving, adaptability, and wearing multiple hats are the norm.

Key Responsibilities

Distributed Systems & Orchestration (70% - Core Engineering)
• Participate in the design and implementation of distributed task orchestration systems using Temporal or Celery (a minimal sketch follows this list).
• Architect pipelines across cloud object storage (S3, GCS), data lakes, and metadata catalogs.
• Implement partitioning, sharding, and caching strategies to ensure data processing pipelines are resilient, highly available, and consistent.

Core Data Pipelines
• Design, implement, and maintain distributed ingestion pipelines for structured and unstructured data (images, 3D/2D assets, binaries).
• Build scalable ETL/ELT workflows to transform, validate, and enrich datasets for AI/ML model training and analytics.

Pretrain Data Processing
• Support preprocessing of unstructured assets (e.g., images, 3D/2D models, video) for training pipelines, including format conversion, normalization, augmentation, and metadata extraction.
• Implement validation and quality checks to ensure datasets meet ML training requirements.
• Collaborate with ML researchers to quickly adapt pipelines to evolving pretraining and evaluation needs.
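To make the responsibilities above concrete, here is a minimal, hypothetical sketch of a sharded Celery ingestion task with a simple validation gate. It assumes a Redis broker, and the helper names (`fetch_bytes`, `normalize`) and shard count are illustrative stand-ins, not Meshy's actual stack:

```python
import hashlib

from celery import Celery

# Assumed broker URL; any Celery-supported broker would do.
app = Celery("pipeline", broker="redis://localhost:6379/0")

NUM_SHARDS = 64  # illustrative shard count


def shard_for(key: str) -> int:
    # Stable hash partitioning: the same asset key always maps to the same
    # shard, which keeps re-runs and retries deterministic.
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % NUM_SHARDS


def fetch_bytes(uri: str) -> bytes:
    # Hypothetical object-store read; in practice an S3/GCS client call.
    raise NotImplementedError("wire up boto3 or google-cloud-storage here")


def normalize(raw: bytes) -> dict:
    # Hypothetical format conversion / metadata extraction step.
    return {"size_bytes": len(raw)}


@app.task(bind=True, max_retries=3, retry_backoff=True)
def ingest_asset(self, uri: str) -> dict:
    # One unit of distributed work: fetch, normalize, validate, emit metadata.
    try:
        record = normalize(fetch_bytes(uri))
    except IOError as exc:
        # Transient I/O failure: let Celery retry with exponential backoff.
        raise self.retry(exc=exc)
    if record["size_bytes"] == 0:
        raise ValueError(f"quality check failed, empty asset: {uri}")
    return {"uri": uri, "shard": shard_for(uri), **record}
```

Hash partitioning is one common way to satisfy the "resilient and consistent" bullet: shard assignment never changes between runs, so a failed shard can be retried in isolation.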
Infrastructure & DevOps
• Use infrastructure-as-code (Terraform, Kubernetes, etc.) to manage scalable and reproducible environments.
• Manage data assets using Databricks Asset Bundles (DABs) and build rigorous CI/CD pipelines (GitHub Actions).
• Focus on maximizing cluster utilization (CPU/memory) and optimizing EC2 instance allocation to aggressively reduce compute costs.

Product & Internal Tooling (30% - Interface & DevEx)
• Take ownership of the platform's "Interface" by building data explorers and management consoles using React or Next.js.
• Actively listen to researchers and data scientists to iterate on UI/UX based on their feedback.
• Simplify complex CLI operations into intuitive GUI interactions to boost overall developer experience (DevEx); a sketch of this pattern follows the list.
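The posting names React/Next.js for the frontend but no backend framework, so purely as an assumed illustration, here is how a CLI lookup might be wrapped in a small FastAPI endpoint that a Next.js console could call. The route, catalog dict, and dataset fields are all hypothetical:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Dataset Explorer API")  # hypothetical internal tool

# Hypothetical in-memory stand-in for a real metadata catalog.
CATALOG = {"demo-assets": {"assets": 120_000, "status": "ready"}}


class DatasetInfo(BaseModel):
    name: str
    assets: int
    status: str


@app.get("/datasets/{name}", response_model=DatasetInfo)
def get_dataset(name: str) -> DatasetInfo:
    # Replaces a CLI invocation such as `pipeline datasets describe <name>`
    # with a JSON endpoint a GUI table or detail view can render directly.
    if name not in CATALOG:
        raise HTTPException(status_code=404, detail=f"unknown dataset: {name}")
    return DatasetInfo(name=name, **CATALOG[name])
```

A data explorer page would then fetch `/datasets/<name>` and render the response, hiding the pipeline internals behind one simple interface.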
Required Qualifications

Technical Background
• 2+ years of experience in software engineering, backend development, or distributed systems.
• Strong programming skills in Python (Scala/Java/C++ a plus).
• Familiarity with distributed frameworks (Spark, Dask, Ray) and cloud platforms (AWS/GCP/Azure).
• Experience with workflow orchestration tools (Temporal, Celery, or Airflow).
• Proficiency with Infrastructure as Code (Terraform) and CI/CD tools (GitHub Actions).

Frontend & User Experience
• Experience building web applications or internal tools using React or Next.js.
• A "product-first" mindset: an interest in how users interact with infrastructure and a desire to build clean, functional interfaces.

Domain Skills (Preferred)
• Experience handling large-scale unstructured datasets (images, video, binaries, or 3D/2D assets).
• Familiarity with AI/ML training data pipelines, including dataset versioning, augmentation, and sharding.
• Exposure to computer graphics or 3D/2D data processing.

Mindset
• The 70/30 specialist: you enjoy deep systems engineering but are equally excited to build the UI that makes those systems accessible.
• Comfortable in a startup environment: versatile, self-directed, pragmatic, and adaptive.
• Strong problem solver who enjoys tackling ambiguous challenges and "0 to 1" building.

Preferred Qualifications
• Kubernetes (K8s) for distributed workloads and cluster orchestration.
• Data lakehouse platforms (specifically Databricks and DABs).
• Familiarity with GPU-accelerated computing and HPC clusters.
• Experience with 3D/2D asset processing (geometry transformations, rendering pipelines).
• Located in or near one of our employee hubs: Bay Area, CA or Seattle, WA.

Our Values
• Brain: We value intelligence and the pursuit of knowledge. Our team is composed of some of the brightest minds in the industry.
• Heart: We care deeply about our work, our users, and each other. Empathy and passion drive us forward.
• Gut: We trust our instincts and are not afraid to take bold risks. Innovation requires courage.
• Taste: We have a keen eye for quality and aesthetics. Our products are not just functional but also beautiful.

Why Join Meshy?
• Competitive salary, equity, and benefits package.
• Opportunity to work with a talented and passionate team at the forefront of AI and 3D technology.
• Flexible work environment, with options for remote and on-site work.
• Opportunities for fast professional growth and development.
• An inclusive culture that values creativity, innovation, and collaboration.
• Unlimited, flexible time off.

Benefits
• Stock options available for core team members.
• 401(k) plan for employees.
• Comprehensive health, dental, and vision insurance.
• The latest and best office equipment.

This job posting was last updated on 2/10/2026
