Role 1 — Edge AI & Perception Engineering
You’re the engineer who takes a model that a scientist trained and makes it run efficiently and reliably on embedded hardware. You’re not a researcher, but you understand the stack from PyTorch model to TensorRT-optimized inference on the edge. You’ll own the perception pipeline: getting sensor data in, running models, and getting usable output to the rest of the system.
Role 2 — SLAM, CV & Spatial Perception
You’re a perception and localization engineer at home with 3D sensing, point clouds, and understanding where a robot is in the world. You’ll own the spatial perception side of the stack: SLAM, mapping, and the geometric understanding the robot needs to navigate and interact with its environment.
Role 3 — Platform Architecture & Software Quality
The robotics software framework exists and works, but it was built by scientists moving fast — it’s not as maintainable, extensible, or performant as it needs to be. You’re the engineer who looks at that codebase and knows how to make it better: cleaner architecture, better abstractions, performance improvements, Python-to-C++ migration where it counts. You care about code quality and know how to improve a system without breaking it.