By Kobi Marenko
At Arbe, we’re developing the world’s first ultra high-resolution 4D imaging radar, a “missing link” technology in the evolution to L3 and higher autonomous vehicles. We recently sat down with our CEO, Kobi Marenko, to review the significant implications of this new technology.
Currently, most autonomous vehicle sensing suites include two or three types of sensors: camera, radar and, in some cases, Lidar. Several technologies are used because each has strengths and weaknesses; you can’t rely on any of them independently. For example, cameras deliver 2D resolution and Lidar 3D resolution, but both lose functionality in common environmental conditions such as darkness, pollution, snow, rain, or fog. Radar, which is based on radio waves, maintains functionality across all weather and lighting conditions. However, the technology has been limited by low resolution, a disadvantage that has made radar very susceptible to false alarms and inept at identifying stationary objects. Until now, that is.
What we’ve been able to do at Arbe is remove radar’s resolution limitation, infusing this highly dependable technology with ultra high-resolution functionality to sense the environment in four dimensions: distance, height, depth and speed. In the autonomous driving industry, this technological advancement effectively repositions radar from a supporting role to the backbone of the sensor suite.
OEMs are preparing to ramp up Level 2.5 and Level 3 autonomous vehicle production, a move that requires that safety-critical functions transfer from the driver to the vehicle. This advancement cannot happen without a sensor that can instantaneously respond to the full range of driving scenarios, identify and assess risk, and execute path planning, while offering a non-irritating driving experience for both the driver and those sharing the road.
This leaves us with two problems. First, we haven’t had a sensor suite capable of that level of performance. Second, many sensor suites rely on Lidar, which is quite expensive, at least 10x more expensive than radar. So, the price point would limit ADAS availability to premium and luxury vehicles.
Arbe’s Phoenix technology solves both of these problems. We’ve managed to produce an affordable sensor robust enough for ADAS and autonomous driving. We’ve been able to do this primarily through the development of a proprietary chipset technology that delivers a highly sensitive imaging radar that can identify and track objects small to large, moving and not moving.
For example, we can identify pedestrians, bikes, and motorcycles, and separate them from vehicles and environmental objects – even when they are partially concealed by them. We believe Phoenix is a game changer.
Our proprietary chipset is the first in the industry to leverage the advanced 22nm RF CMOS process, allowing for a breakthrough in radar performance. We are now delivering an image 100 times more detailed via higher resolution sensing. We can reduce false alarms through advanced algorithms and innovative antenna design. In addition, we separate small and large objects through a high dynamic range, and provide clear boundaries between stationary and moving objects.
By using the 22nm RF CMOS process, Phoenix dramatically reduces costs per radar channel, while consuming the lowest power per channel in the industry.
One of the most significant obstacles to achieving ultra high-resolution has been the amount of processing power required for the analysis of enormous amounts of information. To counter this, Arbe made the strategic decision to develop our own proprietary radar processing on a chip.
Arbe offers unparalleled high-resolution via our proprietary chipset technology. Most radars on the market are based on 12 virtual channel chips (7 physical). With Phoenix, we support more than 2000 virtual channels. Rather than rely on synthetic or statistical resolution enhancements like super resolution, we leverage our antenna array to provide physical angular resolution of 1° in azimuth and 2° in elevation, tracking hundreds of objects simultaneously in a wide field of view at long range at 30 frames per second of full scan.
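As a rough sketch of why virtual channel counts scale this way in a MIMO radar, each transmit/receive antenna pair acts as one virtual receive channel, so channels multiply rather than add. The antenna counts below are illustrative, not Arbe’s actual configuration:

```python
# Illustrative MIMO arithmetic: each Tx/Rx antenna pair forms one
# virtual receive channel, so channel count grows multiplicatively.
# Antenna counts are examples, not Arbe's actual configuration.
def virtual_channels(n_tx: int, n_rx: int) -> int:
    return n_tx * n_rx

# Conventional automotive radar: 3 Tx + 4 Rx = 7 physical antennas,
# yielding the familiar 12 virtual channels.
assert virtual_channels(3, 4) == 12

# Exceeding 2000 virtual channels requires far more physical antennas,
# e.g. 48 Tx x 48 Rx = 2304 virtual channels.
assert virtual_channels(48, 48) == 2304
```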
Think of it as a camera picture with an ultra high pixel count, adding range and velocity tracking to every pixel. Phoenix’s level of physical resolution is several orders of magnitude higher than incumbent solutions. Because of this, Phoenix works well in low SNR (signal-to-noise ratio) and in multiple-object scenarios – scenarios like dense urban driving that generally cause other radar methods to fail.
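To make the “pixel” analogy concrete, here is a back-of-the-envelope calculation (generic geometry, not Arbe-specific figures) of how angular resolution translates into lateral separation at range:

```python
import math

# Back-of-the-envelope: lateral size of one radar "pixel" at a given
# range, from the chord subtended by the angular resolution.
def cross_range_resolution(range_m: float, angular_res_deg: float) -> float:
    return 2.0 * range_m * math.sin(math.radians(angular_res_deg) / 2.0)

# With 1 degree azimuth resolution, two objects roughly 1.75 m apart
# can be separated at 100 m.
assert abs(cross_range_resolution(100.0, 1.0) - 1.745) < 0.01
```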
Arbe developed the first radar that separates objects by elevation in high-resolution
Our radar’s capabilities are not compromised when a vehicle goes in and out of an underground parking lot, or up and down a hill. Further, Phoenix can identify and assess objects at various elevation ranges and plan the route accordingly. It detects and responds appropriately to non-obstructive objects such as manhole covers or overhanging signage. It identifies and brakes for stationary objects in the vehicle’s lane, even if they are under bridges or in dark tunnels – scenarios that present primary challenges to current ADAS sensor suites.
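The elevation logic described above can be sketched as a simple gating rule. The function name and thresholds below are hypothetical, for illustration only:

```python
# Hypothetical elevation-gating rule (thresholds invented for
# illustration): an object only warrants braking if it overlaps the
# vehicle's vertical extent.
def requires_braking(obj_bottom_m: float, obj_top_m: float,
                     vehicle_height_m: float = 2.0,
                     ground_clearance_m: float = 0.2) -> bool:
    return obj_top_m > ground_clearance_m and obj_bottom_m < vehicle_height_m

assert not requires_braking(0.0, 0.05)  # manhole cover: drive over
assert not requires_braking(4.5, 5.0)   # overhead signage: drive under
assert requires_braking(0.0, 1.5)       # stopped vehicle in lane: brake
```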
Also, elevation perception significantly simplifies the fusion of radar data with visual data from cameras, because both sensors now share two dimensions – azimuth and elevation.
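A minimal sketch of why shared azimuth/elevation simplifies fusion: with both angles known, a radar detection maps into camera pixel coordinates via a standard pinhole model. This assumes calibrated, co-located sensors and invented intrinsics; a real pipeline applies the radar-to-camera extrinsic transform first:

```python
import math

# Pinhole projection of a radar detection's direction into camera pixel
# coordinates. Intrinsics (fx, fy, cx, cy) are invented example values;
# a real system calibrates them and applies an extrinsic transform.
def radar_to_pixel(az_deg: float, el_deg: float,
                   fx: float = 1000.0, fy: float = 1000.0,
                   cx: float = 640.0, cy: float = 360.0):
    u = cx + fx * math.tan(math.radians(az_deg))
    v = cy - fy * math.tan(math.radians(el_deg))
    return u, v

# A detection straight ahead lands at the camera's principal point.
assert radar_to_pixel(0.0, 0.0) == (640.0, 360.0)
```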
Low resolution automotive radars rely on the measurement of Doppler velocity to sense the environment. As a result, their ability to detect stationary objects is very limited. We have seen several accidents where autonomous vehicles have hit fences and even parked trucks in their lane because of this limitation.
Arbe, on the other hand, relies on high-resolution in all four dimensions simultaneously, and not just Doppler. As a result, our radar can track all objects regardless of their velocity, including stationary objects, and correctly separates stationary objects from false alarms. This is a real breakthrough.
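A toy sketch of the legacy limitation: when a vehicle drives at speed, everything stationary in the world closes on it at exactly the ego speed, so a Doppler-only radar that suppresses "stationary-world" returns as clutter also suppresses a stopped truck in the lane. The data and tolerance below are invented for illustration:

```python
# Toy illustration of the Doppler-only limitation. "doppler" is radial
# closing speed in m/s (negative = approaching); values are invented.
def legacy_doppler_filter(detections, ego_speed_mps, tol=0.5):
    """Low-resolution heuristic: drop returns whose radial speed matches
    the stationary world's Doppler, since they are indistinguishable
    from ground clutter. A stopped truck ahead gets dropped too."""
    return [d for d in detections
            if abs(d["doppler"] + ego_speed_mps) > tol]

detections = [
    {"name": "stopped truck", "doppler": -25.0},  # closes at ego speed
    {"name": "oncoming car", "doppler": -55.0},
]
kept = legacy_doppler_filter(detections, ego_speed_mps=25.0)
assert [d["name"] for d in kept] == ["oncoming car"]
```

A radar with true spatial resolution in all four dimensions can keep every detection and distinguish the stopped truck from ground clutter geometrically instead of discarding it.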
This is really important. Radars tend to suffer from false alarms. These false alarms can cause radar to report phantom objects – objects that are not really there. Phantom objects can result in two possible scenarios:
1) False positives: the radar can cause the car to stop to avoid hitting an object it thinks is there, but isn’t really there – which is annoying at best and dangerous at worst.
2) False negatives: in the quest to eliminate false positives, providers may choose to increase the detection threshold to avoid phantom alerts. When this happens, radar runs the risk of losing its ability to actually identify smaller objects like pedestrians and bikes, or stationary objects.
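The tradeoff between these two failure modes can be sketched with a toy detection threshold. The dB figures here are invented for illustration, not real radar link-budget numbers:

```python
# Toy detection threshold illustrating the false positive / false
# negative tradeoff. All dB values are invented for illustration.
def detect(return_power_db: float, threshold_db: float) -> bool:
    return return_power_db > threshold_db

noise_spike = 12.0  # a phantom: noise, not a real object
pedestrian = 14.0   # a weak but real return

# Low threshold: the phantom is "detected" -> false positive.
assert detect(noise_spike, threshold_db=10.0)

# Raising the threshold suppresses the phantom...
assert not detect(noise_spike, threshold_db=15.0)
# ...but the pedestrian is now missed as well -> false negative.
assert not detect(pedestrian, threshold_db=15.0)
```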
At Arbe, we have dramatically reduced occurrences of false alarms, with close to zero instances of phantom objects, thereby eliminating both false positive and false negative scenarios. We’ve achieved this through FMCW enhancement, superior channel separation, and advanced post-processing.
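For readers unfamiliar with FMCW, the textbook relationship is worth stating: mixing the received chirp with the transmitted one produces a beat frequency proportional to target range. This is generic FMCW math, not Arbe’s specific enhancements; the chirp parameters are example values:

```python
# Generic FMCW range equation (textbook relationship, not Arbe's
# specific enhancements). Chirp parameters are example values.
C = 299_792_458.0  # speed of light, m/s

def range_from_beat(beat_hz: float, bandwidth_hz: float, chirp_s: float) -> float:
    slope = bandwidth_hz / chirp_s       # chirp slope, Hz per second
    return C * beat_hz / (2.0 * slope)   # range = c * f_beat / (2 * slope)

# Example: a 1 GHz chirp swept over 50 us; a 400 kHz beat frequency
# corresponds to a target roughly 3 m away.
r = range_from_beat(400e3, 1e9, 50e-6)
assert abs(r - 3.0) < 0.01
```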
Our R&D team is developing Arbe’s patented radar processing digital chip, designed to process massive amounts of raw data and perform advanced radar control, including interference mitigation, auto-focus, automatic gain control, safety, and security. We’ll be moving our post-processing onto the digital chip to perform real-time clustering, tracking, localization, and false target filtering.
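To illustrate what the clustering step does, here is a deliberately naive single-pass grouping of detections by range. It is a stand-in for illustration only; production systems run density-based algorithms over the full 4D detections, and nothing here reflects Arbe’s actual implementation:

```python
# Naive 1-D clustering of detection ranges (meters), shown only to
# illustrate the idea of grouping returns into objects. Real systems
# cluster over the full 4D detections; values here are invented.
def cluster_ranges(ranges_m, eps_m=1.0):
    clusters = []
    for r in sorted(ranges_m):
        # Join the previous cluster if this return is within eps_m of it.
        if clusters and r - clusters[-1][-1] <= eps_m:
            clusters[-1].append(r)
        else:
            clusters.append([r])
    return clusters

# Three close returns become one object; the distant return is another.
assert cluster_ranges([10.0, 10.4, 10.9, 30.0]) == [[10.0, 10.4, 10.9], [30.0]]
```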
Our customer facing team is delivering our product to OEMs, new mobility players and Tier 1 providers and collecting market feedback, as we plan for mass production.
We’re very excited about the opportunities ahead.