The Power of Imaging Radar for Object Orientation

December 3, 2020

By Ben Rathaus

Providing Full Scene Orientation Data for Hundreds of Objects at Zero Latency

The direction of objects in the vehicle’s environment, also referred to as object orientation, is a crucial piece of information required for safe navigation, scene comprehension, route planning, and consistent object tracking. The goal is not only to see a truck in your lane, but also to know exactly where it is headed, with minimal latency, within the dynamically changing road scene. With autonomous driving, object orientation is critical for countless actions, including safe and successful lane changes, navigation of intersections and roundabouts, and maneuvering through parking lots.

While sensors today do a good job of detecting objects, it is considered nearly impossible to rely on a local sensed image patch alone to estimate an object’s orientation. However, 4D Imaging Radar resolves this problem.

Prevalent solutions rely on cameras for orientation estimation and require either multi-frame information, which inevitably suffers from latency, or stereo analysis (i.e., more than one camera), which raises both cost and computational requirements. In contrast, inference of object orientation using high-resolution radar imagery, which utilizes both spatial and Doppler resolutions, is straightforward, model-based, and computationally economical. With Arbe’s Imaging Radar, it is possible to analyze the full scene, providing orientation data for hundreds of objects at zero latency.

The Role of High Resolution

High resolution radars provide new information and insights that are not available with other sensors. Simultaneous high Doppler and spatial resolutions allow us to extract the Doppler distribution over an object, as well as its micro Doppler, thus indicating its heading direction and orientation. This functionality does not require a multi-frame analysis of each object in order to deduce its orientation and heading.

While point cloud by traditional radars is too sparse for this kind of analysis, with a high enough resolution (spatial and Doppler) it can be achieved at zero latency and with much greater accuracy. Arbe’s imaging radar’s resolution, for example, results in the detection of hundreds of different points along a single vehicle.
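The underlying rigid-body relationship is simple: the Doppler measured at each detection point is the projection of the object’s velocity onto that point’s line of sight. The sketch below illustrates this; the function name, geometry, and numbers are our own illustrative assumptions, not Arbe’s implementation.

```python
import numpy as np

def radial_doppler(points, velocity):
    """Doppler speed seen at each scatter point of a rigid body.

    points   : (N, 2) x/y positions of detections in the radar frame [m]
    velocity : (2,)   rigid-body velocity of the object [m/s]
    Returns the velocity projected onto each line of sight, with the
    sign convention used in the examples below: positive when the
    point is approaching the radar.
    """
    los = points / np.linalg.norm(points, axis=1, keepdims=True)
    return -(los @ velocity)

# A car directly ahead, receding at 10 m/s: its left and right edges
# return the same Doppler.
car = np.array([[-0.9, 50.0], [0.9, 50.0]])
d_ahead = radial_doppler(car, np.array([0.0, 10.0]))
print(d_ahead)

# The same car shifted 5 m to the right: the two edges now differ,
# and the difference encodes the car's position and heading.
d_offset = radial_doppler(car + np.array([5.0, 0.0]), np.array([0.0, 10.0]))
print(d_offset)
```

With hundreds of such detections per vehicle, the shape of this Doppler distribution becomes a statistically robust signal rather than a two-point comparison.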

Doppler Distribution Patterns

The high resolution in azimuth and Doppler provides us with important Doppler distribution information. When there is a vehicle ahead of us, the radar detects Doppler measurements along its entire width, and the relationship between the right-side and left-side Dopplers reveals its position and heading.

Here are simple examples of simulated Doppler distributions at high angular resolution:

Example 1:

This car is directly ahead of us, moving in our direction of travel. We see the same Doppler on its right and on its left.

Example 2:

This is the same car as in Example 1, still moving in the same direction, but situated 5 meters to the right of us. We can see that the right point and left point have different Dopplers. The distribution in Doppler shows the orientation; it is highly accurate since the radar also provides us with the exact position of the car. 

Example 3:

This car is ahead of us, driving to the left. We see a positive Doppler on its right side because it is getting closer to the radar on our car, while its left side has a negative Doppler because it is traveling away from our radar. In between, there is a Doppler gradient, and at the middle of the vehicle ahead we have zero Doppler. The change in Doppler distribution provides the vehicle’s heading direction.
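The inference behind these examples can be made concrete: each detection constrains the object’s velocity vector along one line of sight, so with many detections per object a single least-squares solve over one frame recovers the velocity, and hence the heading. The sketch below is our own minimal model with a synthetic scene, not Arbe’s algorithm.

```python
import numpy as np

def estimate_heading(points, dopplers):
    """Single-frame rigid-body velocity from a radar point cloud.

    Each detection i constrains the object velocity v by  u_i . v = d_i,
    where u_i is the unit line of sight to point i and d_i is its
    measured range rate (positive when receding).  With many detections
    per object, v follows from one least-squares solve.
    Returns (v, heading in degrees; 0 = straight ahead, negative = left).
    """
    u = points / np.linalg.norm(points, axis=1, keepdims=True)
    v, *_ = np.linalg.lstsq(u, dopplers, rcond=None)
    return v, np.degrees(np.arctan2(v[0], v[1]))

# Synthetic car ~50 m ahead, receding at 10 m/s while drifting left at 2 m/s.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200), rng.uniform(49, 54, 200)])
true_v = np.array([-2.0, 10.0])
d = (pts / np.linalg.norm(pts, axis=1, keepdims=True)) @ true_v
v_est, heading = estimate_heading(pts, d)
print(v_est, heading)  # recovers the velocity and a heading left of straight ahead
```

Because the solve uses a single frame, no multi-frame tracking is needed before an orientation estimate is available, which is the zero-latency property described above.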

Example 4:

This car is driving sideways. We can mathematically show how this Doppler distribution translates into an estimate of heading direction and orientation, allowing us to plan for the dynamic environment ahead of the vehicle.

Example 5:

In this example, a car is about to change lanes. It is heading very slightly to the left, a change we can barely see with our eyes. The Doppler distribution, however, is distinctly different, and allows us to see immediately that this car is changing lanes; that is how sensitive the Doppler measurement is.

Micro Doppler – Towards Intent

Up until now we have regarded only the Doppler distribution over objects due to their “rigid-body” motion, implicitly treating each object as a single chunk of material in uniform motion as seen by the radar. Micro-Doppler is a second-order effect that is challenging to observe but nonetheless bears invaluable information for any tracking system. The minute differences in the Doppler signature, emanating from the differential motion of sub-regions of a target (such as the wheels of a car or the limbs of a pedestrian), may provide additional information about the observed motion (for example, a gradual and slow lane drift) and may even be the very first giveaway of a maneuver that is about to take place but is not yet observable from the current bulk motion of the body. This “giveaway” is equivalent to being able to detect intent — a challenging concept that, to date, no automotive solution can truly deliver. As the technology evolves and Doppler resolution improves, predicting an object’s intent based on its orientation is the next move.
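As a toy illustration of the effect (entirely our own simplification, not a description of Arbe’s processing), a wheel-rim scatterer can be modeled as a small sinusoidal frequency modulation on top of the car’s bulk Doppler. A spectrum of the returned phase history then shows weak sidebands around the bulk Doppler line; those sidebands are the micro-Doppler signature.

```python
import numpy as np

# Toy micro-Doppler model: a car body at a constant bulk Doppler of 300 Hz,
# plus a wheel-rim scatterer whose rotation (20 Hz) adds a sinusoidal
# frequency modulation with a 10 Hz peak deviation.  All numbers are
# illustrative, chosen so that every spectral line falls on an FFT bin.
fs, T = 2000.0, 0.5                      # sample rate [Hz], dwell time [s]
t = np.arange(0.0, T, 1.0 / fs)
f_bulk, f_rot, dev = 300.0, 20.0, 10.0
phase = 2 * np.pi * f_bulk * t + (dev / f_rot) * np.sin(2 * np.pi * f_rot * t)
sig = np.exp(1j * phase)

spec = np.abs(np.fft.fft(sig))
freqs = np.fft.fftfreq(t.size, 1.0 / fs)

# The strongest line sits at the bulk Doppler (300 Hz); the wheel shows
# up as weaker sidebands offset by the rotation rate (300 +/- 20 Hz).
print(freqs[np.argmax(spec)])
```

The sidebands are far weaker than the bulk line, which is why micro-Doppler demands fine Doppler resolution and a long enough dwell time to resolve them.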

The L-Shape Effect

Beyond enabling the inference of object orientation from the Doppler distribution over spatial detections from the target, being able to “see” the minimal bounding box that surrounds the object (its L-shape) serves as a complementary cue, allowing us to achieve higher confidence in the orientation and heading direction of the object.
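A common baseline for recovering such a bounding box from a dense point cloud is a search over candidate headings, keeping the rotation whose axis-aligned box is smallest. The minimal-area criterion, step size, and synthetic car below are our illustrative choices, not Arbe’s algorithm.

```python
import numpy as np

def l_shape_fit(points, step_deg=1.0):
    """Search-based minimal bounding rectangle (L-shape) fit.

    Rotates the point cloud through candidate headings and keeps the
    angle whose axis-aligned bounding box has minimal area.
    Returns (best_angle_deg, (length, width)).
    """
    best = (None, np.inf, None)
    for theta in np.arange(0.0, 90.0, step_deg):
        c, s = np.cos(np.radians(theta)), np.sin(np.radians(theta))
        rot = points @ np.array([[c, -s], [s, c]])  # undo a heading of theta
        ext = rot.max(axis=0) - rot.min(axis=0)
        area = ext[0] * ext[1]
        if area < best[1]:
            best = (theta, area, ext)
    return best[0], tuple(best[2])

# A 4.5 x 1.8 m car rotated by 30 degrees, sampled with 300 detections:
rng = np.random.default_rng(1)
box = np.column_stack([rng.uniform(0, 4.5, 300), rng.uniform(0, 1.8, 300)])
a = np.radians(30.0)
R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
angle, (length, width) = l_shape_fit(box @ R.T)
print(angle, length, width)  # recovers roughly 30 degrees and the car's dimensions
```

An L-shape angle obtained this way is ambiguous by 90 degrees on its own; combining it with the Doppler-derived heading resolves the ambiguity and raises the overall confidence, which is the complementary role described above.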

Conclusion

All radars provide depth and Doppler measurements. However, without sufficient spatial resolution, this information cannot support a reliable estimate of object orientation. Achieving on the order of 1 degree of spatial resolution and on the order of hundreds of separate detections per object allows credible statistical inference of the object’s dynamic quantities, particularly its heading direction and orientation. Only Arbe’s Imaging Radar is able to achieve this, analyzing the full scene and providing orientation data for hundreds of objects at zero latency.

Connect to learn more