Locating Landmarks on the Fly

AI model identifies stationary objects from radar scans.


Directions such as “turn left at the big tree, go three blocks, and stop at the big red house on your left” can get you to your destination because they refer to stationary landmarks. New research enables self-driving cars to identify such stable indicators on their own.

What’s new: Dan Barnes and Ingmar Posner of Oxford University developed a model that extracts landmarks on the fly from radar scans and uses them to build maps for autonomous vehicles. Radar is challenging in this application because its scans contain noise and ghost images, but it offers long range, a high refresh rate, and robustness to adverse environmental conditions.

Key insight: Self-driving cars often navigate by recognizing landmarks. The researchers realized that a neural network can discover landmarks without explicit labels by reversing the task: Train it to predict the car’s motion from radar scans, and the signals it finds most valuable for that prediction are likely to be stable features of the landscape.

How it works: The system learns to identify keypoints that best predict a car’s motion. The training data specifies the vehicle’s motion from one radar frame to the next.

  • A U-Net architecture transforms each radar frame into a set of candidate keypoints. For each one, it predicts a position, a descriptor vector, and a score for its usefulness to navigation (see the first sketch after this list).
  • A separate algorithm compares the positions of keypoints with similar descriptors in successive frames and uses the differences to estimate the car’s motion (see the second sketch after this list). Keypoints that prove most useful for this task are likely to be stable.
  • Because keypoints carry descriptors, the system can match them across different perspectives. This enables it to map loops in a route, a challenging problem for earlier methods that process entire radar frames rather than keypoints.
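
As a rough illustration of the first step, the three per-keypoint outputs can be implemented as small prediction heads on top of the U-Net’s feature map. The PyTorch sketch below assumes that framing; the class name KeypointHead, the channel sizes, and the activation choices are hypothetical, not the authors’ implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointHead(nn.Module):
    """Hypothetical keypoint head on a U-Net feature map.

    For every spatial cell it predicts a sub-cell keypoint position,
    a unit-length descriptor, and a usefulness score in [0, 1].
    """

    def __init__(self, feat_ch: int = 64, desc_dim: int = 128):
        super().__init__()
        self.loc = nn.Conv2d(feat_ch, 2, kernel_size=1)     # (dx, dy) offsets
        self.desc = nn.Conv2d(feat_ch, desc_dim, kernel_size=1)
        self.score = nn.Conv2d(feat_ch, 1, kernel_size=1)   # usefulness logit

    def forward(self, feats: torch.Tensor):
        _, _, h, w = feats.shape
        offsets = torch.tanh(self.loc(feats))               # stay within one cell
        desc = F.normalize(self.desc(feats), dim=1)         # unit descriptors
        score = torch.sigmoid(self.score(feats))            # 0 means "ignore me"
        # Absolute keypoint coordinates: cell centers plus predicted offsets.
        ys, xs = torch.meshgrid(
            torch.arange(h, device=feats.device),
            torch.arange(w, device=feats.device),
            indexing="ij",
        )
        grid = torch.stack((xs, ys)).float()                # (2, h, w)
        coords = grid + 0.5 + 0.5 * offsets                 # (batch, 2, h, w)
        return coords, desc, score

# Example: features for one fake radar frame from the backbone.
feats = torch.randn(1, 64, 16, 16)
coords, desc, score = KeypointHead()(feats)
```

In a setup like this, the usefulness score would weight each keypoint’s contribution to the motion estimate during training, so points on moving or ambiguous objects would learn low scores.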
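
For the second step, once keypoints in successive frames are matched by descriptor similarity, the car’s planar motion can be recovered in closed form by weighted rigid registration. The sketch below uses the standard weighted Kabsch algorithm with the learned scores as weights; estimate_motion is a hypothetical helper, and the paper’s actual estimator may differ.

```python
import numpy as np

def estimate_motion(src, dst, weights):
    """Weighted 2D rigid registration (Kabsch algorithm).

    src, dst: (N, 2) matched keypoint positions in consecutive frames.
    weights:  (N,) per-match confidence, e.g. learned usefulness scores.
    Returns rotation R (2x2) and translation t (2,) with dst ~ src @ R.T + t.
    """
    w = weights / weights.sum()
    mu_s = (w[:, None] * src).sum(axis=0)      # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Quick check: recover a known rotation and translation from noiseless matches.
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.random.rand(50, 2)
dst = src @ R_true.T + np.array([1.0, -2.0])
R, t = estimate_motion(src, dst, np.ones(50))  # uniform weights here
assert np.allclose(R, R_true) and np.allclose(t, [1.0, -2.0])
```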

Results: The system predicted the car’s position after a fixed driving distance with 2.06 percent error, compared with 3.72 percent for the previous state of the art. Similarly, the error in the car’s predicted orientation fell from 0.0141 to 0.0067 degrees per meter driven, and the new system ran an order of magnitude faster. On routes without a loop, however, an earlier approach that processes whole radar frames achieved lower errors: 1.59 percent in position and 0.0044 degrees per meter in orientation.

Why it matters: The ability to generate keypoints automatically is making waves in other computer vision tasks. Pairing keypoints with descriptor vectors makes it possible to learn valuable things about them, from whether they indicate a loop in the route to whether they mark a habitual parking space.

We’re thinking: Our surroundings are always changing: Outdoors, trees fall down and buildings go up, while indoors, objects are moved all the time. Algorithms that detect landmarks on the fly will be useful for mapping and navigating such dynamic environments.
