By Ben Rathaus, PhD
Simultaneous processing of the complete scene around a moving vehicle – in all directions, without gaps, without guesses – is a basic prerequisite for achieving the ultimate goal of truly safe transportation. This stands in stark opposition to a fundamental human shortcoming: no matter how skilled the driver, no matter how many mirrors the car has (and no matter how many times your mother claimed she had eyes in the back of her head), human beings simply cannot see ahead and behind at the same time. Even if we could, our brains are not adapted to process that much information while traveling at high speed. This is one area where autonomous driving sensors and their complementary perception algorithms offer incredible progress: together, they make it possible to gain a complete, continuous, and unified understanding of the surrounding driving environment. Among these sensors, Imaging Radar stands out, delivering long-range data in all weather and lighting conditions to enable perception algorithms and provide advanced sensing no matter the use case.
The addition of a rear-facing radar means more than simply installing a sensor to face backward. It means adapting “standard” algorithms to recognize the difference between forward and backward motion. It also means that everything – tracking, ego motion, free space mapping, classification – has to happen twice. The real challenge lies in making the whole greater than the sum of its parts: fusing the data from each sensor into one continuous, unified representation of the driving scenario that perception algorithms can build on.
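The fusion step above starts with something mundane but essential: expressing every sensor's detections in a single vehicle-centered frame. The sketch below illustrates the idea with a front and a rear radar; all mounting positions, ranges, and the function name are illustrative assumptions, not Arbe specifications.

```python
import math

def to_ego_frame(detections, mount_x, mount_y, mount_yaw):
    """Transform (range, azimuth) detections from a sensor's local frame
    into the ego-vehicle frame, given the sensor's mounting pose.
    Mounting parameters here are hypothetical examples."""
    points = []
    for rng, az in detections:
        # Detection in the sensor's local Cartesian frame
        sx, sy = rng * math.cos(az), rng * math.sin(az)
        # Rotate by the mounting yaw, then translate by the mounting offset
        ex = mount_x + sx * math.cos(mount_yaw) - sy * math.sin(mount_yaw)
        ey = mount_y + sx * math.sin(mount_yaw) + sy * math.cos(mount_yaw)
        points.append((ex, ey))
    return points

# Front radar on the bumper; rear radar mounted facing backward (yaw = pi)
front = to_ego_frame([(50.0, 0.0)], mount_x=3.5, mount_y=0.0, mount_yaw=0.0)
rear = to_ego_frame([(20.0, 0.0)], mount_x=-1.0, mount_y=0.0, mount_yaw=math.pi)

# One continuous point cloud surrounding the vehicle: a target 53.5 m
# ahead of the vehicle origin and one 21 m behind it
unified = front + rear
```

Once all detections live in one frame, downstream steps such as tracking and free space mapping can operate on the unified cloud rather than running in isolation per sensor.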
Hazards can come from any direction and can interfere with safe driving even when they come from behind. While the most obvious benefit of a rear-facing radar is enabling safe autonomous driving in reverse, its value is really highlighted in more complex scenarios. If you are standing still, or slowing down as you approach a traffic jam, the ability to sense that a vehicle is trailing too close or gaining too quickly (and at what speed and from what direction) can meaningfully inform your next moves and help avert disaster.
Further, rear-facing radar offers some unique advantages for perception specifically. When merging onto or exiting from a highway, or changing lanes to overtake, being able to differentiate between drivable and non-drivable space (also known as Free Space Mapping) is critical. If a vehicle is passing and plans to cross back into your lane, the space is free – but will not remain free. Detections gathered by the rear-facing sensor therefore play a critical part in analyzing the evolving scene, giving the vehicle a “super power” nearing clairvoyance: the ability to accurately anticipate how the driving environment is likely to shift.
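At its simplest, a free space map answers one question per direction: how far out is the space drivable? The toy sketch below builds a coarse polar map where each azimuth sector is considered free out to the nearest detection in that sector. This is a deliberately simplified illustration under assumed inputs; production perception stacks use far finer grids and account for occlusion, object motion, and classification.

```python
import math

def free_space_map(detections, num_bins=8, max_range=100.0):
    """Coarse polar free-space map: for each azimuth sector around the
    vehicle, space is treated as drivable out to the nearest detection.
    Detections are (x, y) points in the ego frame; a simplified sketch."""
    limits = [max_range] * num_bins
    sector = 2 * math.pi / num_bins
    for x, y in detections:
        az = math.atan2(y, x) % (2 * math.pi)   # azimuth in [0, 2*pi)
        b = int(az / sector)                    # which sector it falls in
        limits[b] = min(limits[b], math.hypot(x, y))
    return limits

# One car 40 m ahead, another 15 m behind; other sectors stay fully open
fsmap = free_space_map([(40.0, 0.0), (-15.0, 0.0)])
# fsmap -> [40.0, 100.0, 100.0, 100.0, 15.0, 100.0, 100.0, 100.0]
```

Without the rear detection, the sector behind the vehicle would wrongly read as fully open – exactly the gap a rear-facing radar closes.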
Rear-facing radar also improves ego velocity estimation by providing additional contextual data in scenarios such as very heavy traffic, or pulling away from a red light as it switches to green. In both cases, a clear view of the stationary environment is needed to calculate ego motion from radar detections. Crowded, slow-moving traffic can make this difficult to achieve with a front radar alone, causing, for example, a slow divergence of the estimated ego velocity from its true value. The more directions the sensors cover, the more diverse the data they take in, increasing the number of detections stemming from truly stationary objects and thus improving the robustness of the calculation.
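The underlying idea is a standard one in radar odometry: each stationary detection at azimuth a (in the ego frame) should show a radial Doppler speed of -(vx·cos a + vy·sin a), so the ego velocity (vx, vy) can be recovered by least squares over many detections. The sketch below solves the 2x2 normal equations directly; it is a minimal illustration, not Arbe's implementation, and it assumes stationary detections have already been separated from moving ones.

```python
import math

def estimate_ego_velocity(detections):
    """Least-squares ego velocity (vx, vy) from stationary detections,
    each given as (azimuth [rad], radial speed [m/s]).
    Model: v_r = -(vx*cos(a) + vy*sin(a)) for a stationary target.
    Azimuths spread over many directions (front AND rear) keep the
    2x2 normal equations below well conditioned."""
    sxx = sxy = syy = bx = by = 0.0
    for az, vr in detections:
        c, s = math.cos(az), math.sin(az)
        sxx += c * c; sxy += c * s; syy += s * s
        bx -= vr * c; by -= vr * s
    det = sxx * syy - sxy * sxy
    vx = (syy * bx - sxy * by) / det
    vy = (sxx * by - sxy * bx) / det
    return vx, vy

# Ego moving forward at 10 m/s: a stationary point ahead closes at -10 m/s,
# a point behind (visible only to the rear radar) recedes at +10 m/s
vx, vy = estimate_ego_velocity(
    [(0.0, -10.0), (math.pi, 10.0), (math.pi / 2, 0.0)]
)
# vx -> 10.0, vy -> 0.0
```

With a front radar alone in dense traffic, most returns come from moving vehicles and the few stationary detections cluster in a narrow azimuth range, degrading this fit; rear-facing detections widen the azimuth spread and add genuinely stationary points, which is what stabilizes the estimate.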
Likewise, rear-facing radar provides important complementary information about obstructions and occlusions that would otherwise be missed, now identified thanks to Imaging Radar’s ultra-high resolution in all dimensions. When this data is processed together with data from the other radars in a single, more elaborate calculation, accuracy improves significantly. In short, rear-facing Imaging Radar offers a more complete understanding of the driving environment and how it is likely to develop over time, improving decision making and safety.
This kind of full coverage perception will eventually also enable some of the more futuristic – but entirely achievable – autonomous and connected vehicle features. Vehicle to vehicle (V2V) communication will make it possible for vehicles to share information about hazards down the road. For example, Arbe’s Imaging Radar has a range of over 300m. By including a rear-facing radar in every sensor suite, two vehicles (or more!) will be able to share information, effectively extending the range of the one behind to include the range of the vehicle ahead (for a total front-facing range of nearly a kilometer). Similarly to the way smart, shared navigation systems like Waze enable users to warn others behind them of potential obstacles, perception in more than one direction will enable connected, autonomous vehicles to extend their range and make better, more informed, and safer decisions.
Likewise, accurate perception of the complete driving landscape will be required to support high-definition mapping of radar reflections in the overall environment. By building an accumulating map of all stationary detections from the environment at the highest possible resolution and sharing localization information, it will be possible to create an extended free space map that increases path-planning efficiency.
Rear-facing Imaging Radar provides an entirely new level of understanding of the driving environment. When applied to both the front and back of the vehicle, the compounded effect of both the overlapping and the complementary detections makes the Perception Imaging Radar simply indispensable to achieving safety and autonomy.
Connect to learn more
© Arbe, All rights reserved