via Remote Rocketship
$124,000 - $196,000 a year
Develop core autonomous driving functionality and 3D world models using sensor fusion and deep learning for AV platforms.
3+ years in AV or robotics, strong software engineering experience on embedded/automotive platforms, and a passion for robotics.
Job Description:
• Develop core functionality for autonomous driving in all geographies based on the fusion of SOTA perception DNNs and map signals.
• Generate a real-time 3D world model used by planning, incorporating a variety of inputs from sensors and external sources.
• Enable HD-mapless driving in complex urban scenarios by creating enriched BEV models of the world.
• Build fused static obstacles and occupancy grids, and build occlusion masks to enrich AV scenarios.
• Be a technology leader, guiding the team on approaches to solving the hardest AV problems.
• Be a hands-on collaborator with the team.
• Hire and mentor strong engineers within the team.
• Ensure our algorithms work well on large amounts of real and synthetic data, across a diverse range of environments and conditions.
• Produce code and designs following automotive quality and safety standards.

Requirements:
• BS, MS, or PhD in Computer Science or a related field, or equivalent experience.
• 3+ years of experience, with at least 2 years in the AV or robotics industry.
• Passion for robotics and autonomous vehicles.
• Drive to learn new things and tackle meaningful problems.
• Outstanding communication and cross-team collaboration, especially with multinational teams across the globe.
• Independent and analytical software engineering skills.
• Software development experience on embedded or automotive platforms.

Benefits:
• Equity
• Benefits
This job posting was last updated on 2/27/2026