13 open positions available
Transform complex logistics workflows into performant product experiences with real-time data dashboards and AI-driven features. | 3-5+ years of experience with TypeScript, React, Tailwind CSS, Python backends, SQL, and curiosity about AI integration.

About Us
Global trade still runs on outdated, manual workflows. We are fixing that by building AI agents for the logistics industry. Our AI works alongside humans, automating document-heavy tasks so companies can process shipments faster and with fewer errors. We have moved past the "zero-to-one" phase and have achieved clear product-market fit. We are seeing rapid traction with >100% MoM revenue growth and are already deployed with customers processing meaningful operational volume. We've raised $5M from First Round Capital and Pear VC and are now scaling our platform's breadth and depth. Our deeply technical team comes from Google, LinkedIn, Salesforce, and top schools and AI research labs.

The Role
We are looking for a Frontend Engineer to help us scale our AI-driven platform. In this role, you will be the bridge between sophisticated backend data and the human operators who rely on it. You will be responsible for turning complex logistics workflows into crisp, high-performance product experiences. This is a high-leverage role for a builder who thrives in a high-growth environment and wants to own the evolution of a product that is already delivering real value.

What You'll Do
Ship at Startup Speed: Move from conceptual wireframes to production-ready features within tight cycles, maintaining high velocity without sacrificing code quality.
Build Data-Heavy Interfaces: Design and maintain responsive dashboards using TypeScript, React, and Tailwind CSS that handle real-time logistics data at scale.
Human-in-the-Loop Design: Build intuitive interfaces for operations teams that make AI-driven workflows transparent, explainable, and easy to manage.
End-to-End Ownership: Take full ownership of the product lifecycle, from technical discovery and scoping to production monitoring and optimization.
Collaborate on AI Features: Partner with the engineering team to ship AI features that operators trust, defining quality bars for accuracy and confidence.

Who We're Looking For
Experience: 3-5+ years of professional software engineering experience, ideally in a fast-paced, high-growth environment.
Frontend Proficiency: Deep skills in TypeScript, React, and Tailwind CSS, with the ability to build complex interfaces that make sophisticated data intuitive.
Full-Stack Competency: Strong command of Python for backend development and solid knowledge of SQL (PostgreSQL) for database design.
Execution Mindset: You write clean code, drive delivery, and sweat the edge cases to ensure a seamless user experience.
AI Curiosity: Familiarity with (or a strong desire to learn) integrating AI APIs and handling messy document inputs.
In-Person Collaboration: You are energized by working shoulder-to-shoulder with founders and engineers in our San Francisco office.

Success Metrics
Architectural Velocity: Ability to move from concept to production-ready systems within aggressive startup cycles.
System Resilience: Maintaining high uptime and low error rates for core logistics services during periods of massive scale.
Product Impact: Measurable improvement in logistics efficiency and user adoption of AI-driven features.

Perks
Founder-level impact and meaningful equity during a period of massive scale.
Free office lunches and dinner.
Comprehensive health insurance and 401k plan.
Ensure production environment reliability, own CI/CD pipelines, manage AI infrastructure, and build internal tools to accelerate deployments. | Strong backend skills in Python, Go, or Java; experience with Terraform, Docker, Kubernetes, and cloud platforms like GCP.

About Us
Global trade still runs on outdated, manual workflows. We are fixing that by building AI agents for the logistics industry. Our AI works alongside humans, automating document-heavy tasks so companies can process shipments faster and with fewer errors. We have moved past the "zero-to-one" phase and have achieved clear product-market fit. We are seeing rapid traction with >100% MoM revenue growth and are already deployed with customers processing meaningful operational volume. We've raised $5M from First Round Capital and Pear VC and are now scaling our platform's breadth and depth. Our deeply technical team comes from Google, LinkedIn, Salesforce, and top schools and AI research labs.

The Role
We are looking for a Production Engineer who lives at the intersection of software development and systems engineering. Your mission is to ensure our production environment is rock-solid, automated, and observable. You will own our CI/CD pipelines, manage our AI infrastructure, and build the internal tools that empower our development team to ship code faster and more reliably.

Key Responsibilities
1. Reliability & Infrastructure (The Core)
Availability: Own the "uptime" of our services. Design and implement self-healing systems to minimize downtime and manual intervention.
CI/CD & Deployments: Architect and manage robust deployment pipelines to ensure feature releases are seamless and reversible.
AI Infrastructure: Manage specialized pipelines for AI and human-in-the-loop systems.
Databases & Compliance: Manage database operations, performance tuning, backups, and compliance.
Scalability: Monitor system performance and proactively scale infrastructure to handle traffic spikes.
2. Observability & Metrics
Monitoring: Build and maintain comprehensive dashboards using tools like Prometheus, Grafana, or Datadog.
Alerting: Define and implement "Golden Signals" (latency, traffic, errors, and saturation) to ensure we know about issues before our customers do.
Incident Response: Lead the post-mortem process: analyze why things broke and write code to ensure they never break the same way twice.
3. Internal Tooling & Backend Development
Custom Tooling: Use your backend skills (preferably Python) to build internal CLI tools, automated scripts, and status dashboards.
Developer Experience: Act as a bridge for the dev team, making "the right way to deploy" the "easiest way to deploy."

Technical Requirements
Backend Proficiency: Strong experience in at least one backend language (e.g., Python, Go, Java) to contribute to internal tools and understand application logic.
Infrastructure as Code (IaC): Hands-on experience with Terraform, CloudFormation, or Ansible.
Containerization: Deep knowledge of Docker and orchestration (Kubernetes/ECS).
Cloud Platforms: Solid working knowledge of GCP.
CI/CD Tools: Experience with GitHub Actions, GitLab CI, or Jenkins.

Success Metrics (How We'll Measure You)
To be successful in this role, you will be responsible for improving and maintaining:
MTTD/MTTR: Mean Time to Detect and Mean Time to Recover from incidents.
Deployment Frequency: How often we can safely ship code to production.
Change Failure Rate: The percentage of deployments that result in a rollback or failure.
SLA/SLO Compliance: Meeting our uptime and performance targets for customers.

Is This the Right Fit?
You are a great fit if:
You find yourself "automating away" repetitive tasks and get genuinely excited when you see a perfectly tuned Grafana dashboard.
You don't just want to write code; you want to see that code survive and thrive in the wild.
Build and deploy production-ready backend services handling AI integrations and API design across the logistics stack. | 3-5 years of experience with production systems; proficiency in Python, distributed systems, message queues, and cloud infrastructure.

About Us
Global trade still runs on outdated, manual workflows. We are fixing that by building AI agents for the logistics industry. Our AI works alongside humans, automating document-heavy tasks so companies can process shipments faster and with fewer errors. We have moved past the "zero-to-one" phase and have achieved clear product-market fit. We are seeing rapid traction with >100% MoM revenue growth and are already deployed with customers processing meaningful operational volume. We've raised $5M from First Round Capital and Pear VC and are now scaling our platform's breadth and depth. Our deeply technical team comes from Google, LinkedIn, Salesforce, and top schools and AI research labs.

The Role
As a Backend Engineer at Amari AI, you aren't just writing code; you are building the engine of a high-growth AI startup. We are looking for a "builder" in the truest sense: someone with the professional experience to write clean, scalable code. You will play a key role in bringing our most important features to life. We're looking for someone who enjoys the flow of building high-quality backend systems and finds it rewarding to ship production-ready code that solves real-world logistics challenges at scale.

What You'll Do
Ship Fast & Often: Build and deploy production-ready services across our logistics stack, handling everything from AI integrations to API design.
Own the Feature Lifecycle: Take a product requirement from a whiteboard sketch to a fully monitored, scalable backend service.
Optimize Data Flow: Implement and refine database schemas (SQL/NoSQL) and indexing strategies to ensure our real-time logistics data remains lightning-fast.
Iterate on AI Infrastructure: Work closely with the AI team to build the "plumbing" that allows our models to process massive logistics datasets in real time.
Maintain Excellence: Write tests and documentation that allow us to move fast without breaking things.

Who We're Looking For
3-5 years of professional experience: You've moved past the "junior" phase and have a solid grasp of how to build and maintain production systems.
Bias for Action: You prefer "shipping and iterating" over "over-architecting and debating."
Technical Toolkit: Proficiency in Python; experience with distributed systems, message queues (e.g., Redis, RabbitMQ), and cloud infrastructure.
Pragmatic Problem Solver: You know when to use a simple solution to get a feature out the door and when a problem requires a deep architectural dive.
In-Person Collaboration: You want to be in the room where it happens, working side-by-side with the founding team in San Francisco.

Success Metrics
Velocity: You consistently ship high-quality features within sprint cycles.
Execution Quality: Your code is modular, well-tested, and doesn't require constant "babysitting" once in production.
Problem Ownership: You identify bottlenecks in the backend and fix them without being asked.
System Performance: The services you build meet latency and throughput targets as our data volume scales.

Is This the Right Fit?
You will love this role if:
You get a dopamine hit from seeing your code hit production and solve real user problems.
You are a "generalist" who is happy to jump into any part of the backend to unblock the team.
You want to be a foundational member of a team where your individual output directly moves the company's valuation.
This might not be for you if:
You prefer a slow, methodical pace with multiple layers of approval before shipping.
You want to focus solely on high-level architecture rather than daily implementation.
You are uncomfortable with the "pivot and polish" nature of early-stage startups.

Perks
Meaningful equity in a fast-growing, seed-stage startup.
Daily catered lunches and dinners in our SF office.
Comprehensive health, dental, and vision insurance.
401k plan and all the tools you need to build at your best.
Own and scale the core data infrastructure powering an AI-driven logistics platform, spanning AI pipelines, queues, databases, and search layers. | 5-8 years of backend software engineering experience, ideally in high-growth or high-scale production environments, with a track record of owning systems end-to-end.

About Us
Global trade still runs on outdated, manual workflows. We are fixing that by building AI agents for the logistics industry. Our AI works alongside humans, automating document-heavy tasks so companies can process shipments faster and with fewer errors. We have moved past the "zero-to-one" phase and have achieved clear product-market fit. We are seeing rapid traction with >100% MoM revenue growth and are already deployed with customers processing meaningful operational volume. We've raised $5M from First Round Capital and Pear VC and are now scaling our platform's breadth and depth. Our deeply technical team comes from Google, LinkedIn, Salesforce, and top schools and AI research labs.

The Role
As a Senior Backend Engineer at Amari AI, you will own the core data infrastructure powering our AI-driven logistics platform. This is a high-ownership, zero-to-one role: you will design and scale the systems that ingest, process, and serve real-time logistics data across AI pipelines, queues, databases, and search layers. You will set technical direction for the backend, operate with significant autonomy, and have an outsized impact on product reliability and performance from day one.

What You'll Do
Architect and scale distributed backend systems: design queue pipelines, define service boundaries, and build for failure with retry strategies, circuit breakers, and graceful degradation.
Own database design and reliability: schema design, indexing, and query optimization for the right use cases, plus data pipelines that keep systems in sync at scale.
Build AI search infrastructure for real-time logistics lookup and AI-driven features.
Lead technically in a fast-moving startup: set engineering standards, collaborate directly with founders on the roadmap, and help hire and mentor future backend engineers.

Who We're Looking For
5-8 years of professional backend software engineering experience, with meaningful time in a high-growth startup or high-scale production environment.
Demonstrated ability to own and ship backend systems end-to-end in fast-moving, ambiguous environments.
Track record of making sound architectural trade-offs under time pressure and revisiting decisions as requirements evolve.
In-Person Collaboration: You are energized by working shoulder-to-shoulder with founders and engineers in our San Francisco office.

Success Metrics
System Reliability: Core logistics services and queue-driven workflows maintain high uptime and low error rates under production load.
Throughput & Latency: Message processing pipelines and search queries meet defined SLAs as data volumes grow.
Architectural Quality: Backend systems are extensible, well-documented, and enable other engineers to build on top of your foundations with confidence.
Velocity: Ability to scope, build, and ship new backend capabilities within sprint cycles without sacrificing correctness or resilience.
Autonomy: Resolve complex technical blockers independently and surface the right issues at the right time.

Is This the Right Fit?
You will love this role if:
• You thrive in the zero-to-one phase and are energized by the absence of legacy constraints and bureaucracy.
• You make principled architectural decisions quickly and are comfortable revisiting them as the product evolves.
• You prefer being handed a problem over a set of tickets, and you own the outcome, not just the code.
• You have strong opinions about backend systems, hold them loosely, and defend your reasoning with clarity.
This might not be for you if:
• You need large team structures, extensive documentation, or clearly defined quarterly roadmaps to be productive.
• You prefer isolated, single-component work with a narrowly bounded scope.
• The ambiguity and rapid pivots common in early-stage startups are uncomfortable for you.

Perks
Founder-level impact and meaningful equity during a period of massive scale.
Free office lunches and dinner.
Comprehensive health insurance and 401k plan.
Build and scale high-performance, data-heavy dashboards and AI-integrated workflows, ensuring reliability and user trust. | 3-5+ years of software engineering experience; proficiency in TypeScript, React, Tailwind CSS, and SQL, plus familiarity with AI API integration.

About Us
Global trade still runs on outdated, manual workflows. We are fixing that by building AI agents for the logistics industry. Our AI works alongside humans, automating document-heavy tasks so companies can process shipments faster and with fewer errors. We have moved past the "zero-to-one" phase and have achieved clear product-market fit. We are seeing rapid traction with >100% MoM revenue growth and are already deployed with customers processing meaningful operational volume. We've raised $4.5M from First Round Capital and Pear VC, and we are now scaling our platform's breadth and depth across larger enterprise accounts. Our deeply technical team comes from Google, LinkedIn, and top schools.

The Role
We are looking for a Full Stack Engineer (Frontend Leaning) to help us scale our AI-driven platform. As we grow, you will be responsible for turning complex, messy logistics workflows into crisp, high-performance product experiences. You will bridge the gap between sophisticated backend data and the human operators who rely on it, ensuring our AI features are trusted, explainable, and built for real-world exceptions. This is a high-leverage role for a builder who thrives in a high-growth environment and wants to own the evolution of a product that is already delivering real value.

What You'll Do
Scale Frontend Architecture: Design and maintain responsive, data-heavy dashboards using TypeScript, React, and Tailwind CSS that handle real-time logistics data at scale.
Human-in-the-Loop Design: Build intuitive interfaces for operations teams that make AI-driven workflows transparent and easy to manage.
Lead Technical Execution: Take full ownership of the product lifecycle, from technical discovery and scoping to production monitoring and optimization.
Maintain System Reliability: Ensure high uptime and low error rates for core services as we scale our customer base.
Collaborate on AI Features: Partner with the engineering team to ship AI features that operators trust, defining quality bars for accuracy and confidence.
Mentor and Scale: Contribute to the engineering culture, mentor new hires, and help define the technical vision as the company expands.

Who We're Looking For
Required Experience: 3-5+ years of professional software engineering experience, ideally in a fast-paced, high-growth environment.
Team Experience: You have worked on small teams.
Frontend Expertise: Deep proficiency in TypeScript, React, and Tailwind CSS, with the ability to build complex interfaces that make sophisticated data intuitive.
Full-Stack Competency: Strong command of Python for backend development and advanced knowledge of SQL (PostgreSQL) for database design and query optimization.
Execution Mindset: You write crisp requirements, drive cross-functional delivery, and sweat the edge cases.
AI Curiosity: Familiarity with (or a strong desire to learn) integrating AI APIs and handling messy document inputs.
In-Person Collaboration: You are energized by working shoulder-to-shoulder with founders and engineers in our San Francisco office.

Success Metrics
Feature Velocity: Ability to move from conceptual wireframes to production-ready features within defined cycles.
System Reliability: Maintaining high uptime for core logistics services and AI-driven workflows.
Product Impact: Success of implemented features based on user adoption and internal logistics efficiency metrics.
Autonomy: Demonstrated ability to resolve complex technical roadblocks with minimal supervision.
Lead technical product demos, answer detailed technical questions, understand client workflows, and collaborate on product feedback. | 3-5+ years in technical sales or solutions engineering, the ability to explain complex concepts to non-technical audiences, familiarity with APIs and integrations, and experience with B2B SaaS sales.

Sales Engineer

About Endless Commerce
Endless Commerce is an AI-powered CommerceOS that helps consumer brands manage the chaos of omnichannel operations. We serve brands scaling from $1M to $500M+ in revenue, providing real-time inventory visibility, EDI and purchase order automation, and supply chain intelligence across DTC, wholesale, marketplaces, and retail channels. Our clients include brands like Food52, Great Jones, Heyday, Tin Can, Pattern Brands, and more. We're growing fast, and our nimble team needs your help keeping up!

The Role
You're the translator between product and sales: the person who can talk technical details with our engineering team and then explain those capabilities to a brand operator in a way that makes them say "yes, THIS is what I need." You'll conduct product demos, answer technical questions during the sales process, help scope custom implementations, and provide feedback to the product team based on what you're hearing from prospects.

What You'll Own
Lead technical product demonstrations for qualified prospects, tailoring the demo to their specific use case and pain points.
Answer detailed technical questions during the sales process about integrations, data handling, security, and customization capabilities.
Work with prospects to understand their current tech stack and operational workflows, then articulate how Endless Commerce fits in.
Collaborate with the product team to communicate prospect needs, common objections, and feature requests that come up repeatedly in sales conversations.
Create technical sales collateral, including integration guides, comparison docs, and implementation timelines.
Support the sales team with technical scoping for custom deals and enterprise clients.
Occasionally join customer calls post-sale to ensure a smooth handoff to the product/implementation team.

What Success Looks Like
Month 1: Shadow all demos and sales calls, learn the product inside and out, and understand common objections and pain points.
Months 2-3: Begin leading demos independently with high conversion rates; provide valuable product feedback that influences the roadmap.
Month 4+: Recognized as the technical expert who can handle any prospect question; demos consistently convert at or above company benchmarks.

You're a Great Fit If
You have 3-5+ years in a sales engineering, solutions engineering, or similar technical sales role.
You can explain complex technical concepts to non-technical audiences without being condescending.
You understand ecommerce operations, inventory management, and the challenges of omnichannel commerce.
You're comfortable with APIs, integrations, data flows, and how different systems connect.
You're an excellent communicator on both sides: you can talk shop with engineers and explain benefits to operators.
You're consultative in your approach: you're solving problems, not just pitching features.
You have experience with B2B SaaS sales cycles and understand how to navigate multiple stakeholders.

Logistics
Full-time, remote position.
Compensation: $90,000-$120,000 base salary + commission, depending on experience.
Generous vacation policy and paid time off.
Health, dental, and vision insurance.
401(k) with company match.
A collaborative culture that values growth and work-life balance.
You will lead key pieces of product design, including creating wireframes, prototypes, and polished UI. Additionally, you will work on landing pages, user dashboards, and technical documentation. | We are looking for a designer with good taste, clear thinking, and a bias to ship. Comfort with Figma and the ability to explain design choices in plain language are essential.

We're looking for a hands-on designer to lead key pieces of our product. This role is in person, 5 days/week in San Francisco, with the potential for contract-to-full-time hire. Rate: $50-$100/hr. All levels of experience are encouraged to apply.

What you'll work on
Product UI/UX: wireframes → prototypes → polished UI
Landing page: clear story, strong visuals
User dashboard: simple, data-friendly layout
Technical docs: clean architecture and readable components

What we're looking for
Good taste, clear thinking, and a bias to ship.
Comfortable in Figma (auto-layout, components, basic prototyping).
Can explain design choices in plain language.
Portfolio or samples required; class and internship projects are OK.

About us
Growl is a contextual ad engine that personalizes chat content to serve the right creative at the right moment.
Janak: MIT B.S./M.S.; built grid-scale AI at AutoGrid (acquired by Schneider Electric).
Sahil: published researcher; led AI infrastructure at Google and Microsoft.
We have raised $3.5+ million from top-tier VCs (Pear and Audacious Ventures).

How to apply
Email your portfolio link to hello@withgrowl.com with the subject "Product Designer — SF." Include your availability and a sentence on the project you're proudest of.
Design, build, and scale AWS-based infrastructure and data pipelines supporting AI-driven features including data ingestion, orchestration, and observability. | 4+ years in platform or data engineering with proficiency in Python, Bash, YAML, containerization, cloud architecture, infra-as-code, and exposure to ML/AI workflows.

About the role
You'll own the foundation of Known's product infrastructure across mobile, web, and agentic AI systems. From data pipelines to cloud infra, you'll design, build, and scale the platform that powers matching, voice, and scheduling features.

Responsibilities
Design and manage AWS-based infrastructure, codified in Terraform.
Build and maintain data ingestion/orchestration pipelines (Airflow, Dagster, or equivalent).
Administer and optimize PostgreSQL (with pgvector) and data warehouse environments.
Support data modeling and schema design for user profiles, matching, and conversation logs.
Collaborate with AI/ML engineers to productionize models (training, inference, monitoring).
Implement observability (logging, metrics, alerts) across the stack.

Requirements
4+ years in platform, infra, or data engineering.
Proficiency in Python, Bash, YAML; experience with containerization (Docker, Kubernetes).
Strong knowledge of cloud architecture, data pipelines, and infra-as-code.
Exposure to ML/AI workflows and feature stores a plus.

Example Projects
Stand up a data lake + warehouse for storing structured behavioral signals.
Build ingestion from app + third-party APIs (e.g. Stripe, OpenAI, Twilio).
Scale infra for real-time voice agent calls and user profile matching.
Design, build, and scale AWS-based infrastructure and data pipelines, support data modeling, collaborate with AI/ML engineers, and implement observability across the stack. | 4+ years in platform, infrastructure, or data engineering with proficiency in Python, Bash, YAML, containerization, cloud architecture, and infra-as-code; ML/AI workflow exposure is a plus.

About the role
You'll own the foundation of Known's product infrastructure across mobile, web, and agentic AI systems. From data pipelines to cloud infra, you'll design, build, and scale the platform that powers matching, voice, and scheduling features.

Responsibilities
• Design and manage AWS-based infrastructure, codified in Terraform.
• Build and maintain data ingestion/orchestration pipelines (Airflow, Dagster, or equivalent).
• Administer and optimize PostgreSQL (with pgvector) and data warehouse environments.
• Support data modeling and schema design for user profiles, matching, and conversation logs.
• Collaborate with AI/ML engineers to productionize models (training, inference, monitoring).
• Implement observability (logging, metrics, alerts) across the stack.

Requirements
• 4+ years in platform, infra, or data engineering.
• Proficiency in Python, Bash, YAML; experience with containerization (Docker, Kubernetes).
• Strong knowledge of cloud architecture, data pipelines, and infra-as-code.
• Exposure to ML/AI workflows and feature stores a plus.

Example Projects
• Stand up a data lake + warehouse for storing structured behavioral signals.
• Build ingestion from app + third-party APIs (e.g. Stripe, OpenAI, Twilio).
• Scale infra for real-time voice agent calls and user profile matching.
Design, build, and scale AWS infrastructure and data pipelines, administer PostgreSQL and data warehouses, collaborate with AI/ML teams, and implement observability. | 4+ years in platform or infra engineering, proficiency in Python, Bash, YAML, containerization, cloud architecture, infra-as-code, and exposure to ML/AI workflows.

🚀 Infra / Platform Engineer

About the role
You'll own the foundation of Known's product infrastructure across mobile, web, and agentic AI systems. From data pipelines to cloud infra, you'll design, build, and scale the platform that powers matching, voice, and scheduling features.

Responsibilities
Design and manage AWS-based infrastructure, codified in Terraform.
Build and maintain data ingestion/orchestration pipelines (Airflow, Dagster, or equivalent).
Administer and optimize PostgreSQL (with pgvector) and data warehouse environments.
Support data modeling and schema design for user profiles, matching, and conversation logs.
Collaborate with AI/ML engineers to productionize models (training, inference, monitoring).
Implement observability (logging, metrics, alerts) across the stack.

Requirements
4+ years in platform, infra, or data engineering.
Proficiency in Python, Bash, YAML; experience with containerization (Docker, Kubernetes).
Strong knowledge of cloud architecture, data pipelines, and infra-as-code.
Exposure to ML/AI workflows and feature stores a plus.

Example Projects
Stand up a data lake + warehouse for storing structured behavioral signals.
Build ingestion from app + third-party APIs (e.g. Stripe, OpenAI, Twilio).
Scale infra for real-time voice agent calls and user profile matching.
Build and iterate on mobile features with React Native, maintain backend APIs in Node.js/TypeScript, integrate third-party APIs, contribute to React frontend components, and collaborate with AI/ML engineers. | 4+ years as a fullstack or frontend/backend engineer with strong TypeScript, React, React Native, and Node.js skills, API design and integration experience, and a startup mindset.

About the role
You'll work across the Known product surface: mobile features, web backend, and API integrations. You'll ship quickly, leverage AI coding tools (e.g. Cursor, Copilot), and help us deliver a polished, consumer-grade experience.

Responsibilities
Build and iterate on mobile features with React Native.
Extend and maintain backend APIs in Node.js/TypeScript.
Integrate with third-party APIs (OpenAI/Anthropic, reservations, payments, messaging).
Contribute to frontend components in React (web app).
Collaborate with AI/ML engineers to expose model-powered features via APIs/UI.

Requirements
4+ years as a fullstack or frontend/backend engineer.
Strong with TypeScript, React, React Native, Node.js.
Experience with API design, integration, and scaling.
Startup mindset: comfortable working across the stack and learning fast.

Example Projects
Build mobile onboarding flow with AI-assisted profile setup.
Integrate payments and reservations APIs for seamless date planning.
Create personalized UI components powered by ML model outputs.
As a Software Engineer at Advex, you will build the core product and scale the platform for large companies. You will also contribute to the architectural roadmap, laying the foundation for the Advex platform. | Candidates should have 2 to 5 years of industry experience in full stack and deep learning, with a proven track record of delivering complex projects. Experience in end-to-end ML application development is essential, and prior startup experience is a bonus.

Job Overview
At Advex, we're working on solving the hardest problem in all of deep learning: data collection. In order to train a reliable AI model, you need access to the right training data. The inability to gather the right data has been the primary obstacle to global adoption of computer vision in mission-critical industries like manufacturing and industrial automation. At Advex, we've reimagined how to develop AI models by transforming generative models into data agents. Our technology improves model performance by 300%+ for our Fortune 500 customers in just a few hours!

About Us
Advex is a seed-stage tech startup backed by Construct Capital, Pear VC, and the founders of Dropbox and Gradient Ventures. Our team comes with over a decade of industry and research experience from top organizations and labs like Caltech, Berkeley AI Research Lab, Google Brain, Waymo, and Qualcomm.

Role Description
As a SWE at Advex, you will play a pivotal role in shaping the company's technical direction. Your hands-on involvement will include working with our team to (1) build our core product and scale our platform into the hands of some of the largest companies in the world, and (2) contribute heavily to the architectural roadmap, laying out the foundation for the Advex platform.
Tech Stack: React, TypeScript, Python, Rust.
We are looking for individuals who can lead, are autonomous, and can make decisions with limited information.

About You
We are seeking a candidate who embodies the following qualifications and characteristics:
2 to 5 years of industry experience in full stack and deep learning.
Experience in end-to-end ML application development, including data engineering, model tuning, and model serving.
Prior experience working at a startup (bonus).
Deep learning expertise in computer vision and diffusion models (bonus).
Proven track record of rapidly delivering complex projects within tight deadlines.

Our Values
Extreme Ownership: Own your piece to an unreasonable level; if that piece succeeds or fails, for whatever reason, the buck stops with you.
Fierce Velocity: Rome was not built in a day, but they were laying bricks every hour.
The Best Part is No Part: Simplicity over complexity.
Be Epsilon Greedy: Greatness cannot always be planned; be willing to explore novel ideas even if things don't work out.
Customer Obsessed: Our customers are the backbone of humanity. It is our duty to improve and make them better in order to exponentially improve the lives of billions of people.
Feedback is the Breakfast of Champions: Embrace feedback. A healthy team is one where constructive feedback is welcomed and expected.
Argmax (Team): The only way to win is together! One team. One dream.

Note: We are a company founded by immigrants, and we are committed to providing support to immigrant workers throughout their journey. This includes offering assistance to international students and workers with various types of visas, such as OPT, H1, EB, and more.
Collaborate with structural engineers to build a system for building data collection and reasoning, develop a large geospatial database, write and review code, and support technical integrations and customers. | 5+ years of catastrophe modeling experience or a PhD, a structural engineering or risk analysis background, Python programming, database knowledge, strong communication skills, and the ability to work onsite in the San Francisco Bay Area.

About Us
Rising disasters, from earthquakes to wildfires, are destabilizing the property insurance market, yet carriers often rely on outdated, incomplete data. ResiQuant is changing that. Founded by Stanford PhDs and backed by a $4M seed round led by LDV Capital, we fuse structural engineering with advanced, agentic AI to expose critical vulnerabilities that standard sources miss. Our multi-hazard platform delivers building-level insights so insurers across the U.S. can underwrite disaster-exposed properties with confidence, maintain coverage in high-risk regions, and reward resilience where it matters most, paving the way for a safer, more sustainable future.

About You
We're seeking an individual who is passionate about the mission of the company to join us as a Catastrophe Engineer with a focus on disaster exposure. We prize candidates who share our company's vision and are ready to help foster an inclusive and collaborative culture. As a lean seed startup, we need someone with a scrappy, hands-on approach, eager to evolve alongside our team and support the company in all stages of growth. The ideal candidate is excited to apply their knowledge in catastrophe modeling, data science, and software development to shape the trajectory of a groundbreaking company.

Qualifications
• 5+ years of experience with the major catastrophe models used by insurance companies (RMS and Verisk) for hurricane, earthquake, severe convective storm, and wildfire modeling, OR a PhD in a relevant field.
• Technical understanding of why buildings survive or fail during hurricanes and wildfires.
• Understanding of statistical concepts and practical experience applying them (in A/B testing, causal inference, ML, etc.).
• Experience in programming/modeling in Python.
• Knowledge of database systems.
• Background in structural engineering and/or risk analysis.

What will make you stand out
• Proficiency/experience in software development.
• Proficiency/experience working with multimodal data sources (e.g., voice, imagery, text) for AI model training.
• Proficiency/experience collecting and interpreting data from interviews.
• Experience using multimodal LLMs.

What drives us
• Impact: we are driven by a shared mission to address a paramount challenge of our time.
• Resolve: we believe that hard work and resilience yield extraordinary outcomes.
• Urgency: we are motivated to outpace rapid urbanization and escalating disaster impacts.

Why join RQ
• The opportunity to be involved in an early-stage startup and build the culture you want to see, and the chance to pioneer and disrupt the $200B property insurance industry.
• Experience firsthand the tangible impact of what you build.

Day to day
• Collaborate with experienced structural engineers in building a system that collects data and reasons about buildings the way a structural engineer does.
• Participate in product ideation and development.
• Architect and develop a large geospatial database to host multimodal building data and context that will grow over time.
• Write, test, document, and review code according to RQ's development standards, which you will help define.
• Support the founders in technical integrations with customer systems.
• Support customers using the ResiQuant platform and handle domain-specific questions.

What we offer
• Competitive salary commensurate with experience.
• Equity in the company as a founding member.
• Vibrant tech startup environment.
• Competitive company 401(k) program with company matching.
• Health insurance.
• Working on the challenge of our generation with other passionate people.

Ideal Engineer Profile
• Location: San Francisco Bay Area (in person).
• Bachelor's: Civil or Mechanical Engineering, or Math with relevant experience.
• Master's: Structural Engineering, Risk Analysis, or Catastrophe Modeling.
• Experience in catastrophe modeling using RMS and/or Verisk models for most perils.
• Experience managing large datasets of buildings and associated data.
• Experience in programming and data analysis with Python.
• Persistence and adaptability working through setbacks and changes of direction.
• Skilled at delegating tasks while staying hands-on with critical development.
• Excellent written and verbal communication skills, with the ability to convey complex ideas clearly to diverse audiences.