Sensors are the heart of the automotive revolution. As cars inch closer to hands-free, eyes-free operation, the industry must insist on the reliability and safety of these sensors. After all, the aim is eventually to transfer responsibility for safe driving from human hands to the intricate technology within the vehicle; even before that day arrives, sensor technology should help move us toward a world of zero road fatalities.
The next generation of vehicles, with their advanced applications, needs a well-rounded sensor suite to support path planning and obstacle avoidance. Radar excels here. It measures speed and depth directly, unlike cameras that must calculate them, which introduces delays and inaccuracies. This makes radar a vital teammate (and sometimes backup) for cameras or lidar, providing the missing long-range and all-speed perception needed for a reliable understanding of the environment in every condition. However, to achieve the level of detail needed for true reliability, radar systems had to undergo significant development: essentially, they needed to be rebuilt from the ground up with unique capabilities.
Radar systems are categorized by their channel count, which directly determines their capabilities. The three main types are traditional radar, basic imaging radar, and perception radar.
The type of radar a car uses significantly impacts its functionality. Here’s a breakdown of key features and how each radar type performs.
Stationary Object Detection Range: All three radar types (traditional, basic imaging, and perception) can detect moving objects using the Doppler effect, which measures the change in frequency of the reflected radar signal. Traditional radars, however, struggle with stationary objects: due to their low resolution and limited processing resources, they cannot distinguish stationary objects from background clutter and therefore tend to ignore them. This creates real safety concerns, as the sensor might miss a parked vehicle or a stalled car on the highway. Basic imaging radar offers some improvement, detecting obstacles at 20 to 80 meters depending on the system, but it may still struggle to differentiate closely spaced objects, for example a tire next to a guardrail. Perception radars, with their ultra-high resolution, shine in this area: they can reliably detect stationary objects, including smaller obstacles on the road, at ranges of 150 meters and beyond.
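The Doppler relationship mentioned above is simple to state: a target closing at radial speed v shifts the echo frequency by 2v/λ. The sketch below illustrates this for an assumed 77 GHz automotive radar band (the carrier value is an illustrative assumption, not a claim about any specific product):

```python
# Doppler shift of a radar echo (illustrative; assumes a 77 GHz carrier).
# A reflector closing at radial speed v shifts the return by f_d = 2*v / wavelength.

C = 3.0e8                    # speed of light, m/s
F_CARRIER = 77e9             # assumed automotive radar band, Hz
WAVELENGTH = C / F_CARRIER   # ~3.9 mm

def doppler_shift_hz(radial_speed_mps: float) -> float:
    """Frequency shift of the echo from a target closing at radial_speed_mps."""
    return 2.0 * radial_speed_mps / WAVELENGTH

# A car closing at 30 m/s (~108 km/h) shifts the echo by about 15.4 kHz.
# A stationary object produces zero shift, which is why a low-resolution
# radar cannot separate it from background clutter by Doppler alone.
print(doppler_shift_hz(30.0))
```

This also makes concrete why stationary objects are the hard case: with no Doppler shift to lean on, the radar must rely on angular and range resolution to pull a parked car out of the clutter.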
False Alarms: Traditional radars are prone to frequent false alarms. This can lead to phantom braking, a phenomenon where the car brakes unnecessarily due to a misidentified object. Basic imaging radars experience fewer false alarms than traditional ones, but they still occur. Perception radars virtually eliminate false alarms through their ultra-high resolution and advanced signal processing techniques.
Ambiguities: Ambiguities occur when a radar system misinterprets data. Traditional and basic imaging radars often struggle with ambiguities due to their lower channel counts and simpler processing capabilities. This can result in false alarms, missed detections, or incorrect target information. By utilizing optimal pulse repetition frequency, advanced Doppler processing, sophisticated antenna design, and digital signal processing algorithms, perception radars can effectively eliminate both Doppler and range ambiguities. This enables them to operate reliably in complex environments with multiple targets and interference sources, providing accurate and reliable data for self-driving vehicles.
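The range/Doppler trade-off behind the pulse-repetition-frequency (PRF) choice can be shown with two textbook formulas: an echo must return before the next pulse (bounding unambiguous range), and Doppler is sampled at the PRF (bounding unambiguous speed via Nyquist). A minimal sketch, again assuming a 77 GHz carrier:

```python
# Classic single-PRF ambiguity bounds (illustrative; assumes 77 GHz carrier).

C = 3.0e8                    # speed of light, m/s
WAVELENGTH = C / 77e9        # assumed carrier wavelength, m

def max_unambiguous_range_m(prf_hz: float) -> float:
    # The echo must arrive before the next pulse: R_max = c / (2 * PRF).
    return C / (2.0 * prf_hz)

def max_unambiguous_speed_mps(prf_hz: float) -> float:
    # Doppler is sampled once per pulse; Nyquist bounds |f_d| < PRF / 2,
    # which gives |v| < wavelength * PRF / 4.
    return WAVELENGTH * prf_hz / 4.0

# At PRF = 20 kHz: range limit 7500 m, but speed limit only ~19.5 m/s.
# Raising the PRF extends the speed limit while shrinking the range limit,
# which is why a single PRF cannot satisfy both and why techniques such as
# staggered PRFs and advanced Doppler processing are used to resolve ambiguity.
print(max_unambiguous_range_m(20e3), max_unambiguous_speed_mps(20e3))
```

The numbers make the tension plain: one PRF that covers highway closing speeds undercuts long-range coverage, and vice versa, which is the ambiguity the paragraph above describes.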
Object Classification: Traditional and basic imaging radars lack object classification capabilities: they detect an object without distinguishing its type (car, pedestrian, etc.). More advanced imaging radars may offer some classification but, especially at long range, can struggle to differentiate between object types. Perception radars produce a highly detailed point cloud that can feed AI-based algorithms, which analyze the input and classify objects such as cars, pedestrians, and bicycles, and even perform finer-grained classification (e.g., truck vs. car or bicycle vs. motorcycle).
Free Space Mapping: Free space mapping creates a detailed map of the surrounding environment by identifying drivable space, areas without obstacles; this information is essential for safe path planning. This crucial feature for hands-free and eyes-off driving is only available in perception radars.

Sensor Fusion: Sensor fusion, where data from cameras and LiDAR is combined with radar information, creates a richer picture of the environment for self-driving cars. Traditional and basic imaging radars provide limited data and require complex processing before fusion, which is time-consuming and computationally expensive. Perception radars, on the other hand, generate high-resolution data that is readily compatible with other sensors.
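One common way a dense point cloud is turned into a free-space map is an occupancy grid: bin detections into cells, and treat cells with no returns as candidate drivable space. The sketch below is a generic illustration of that idea (not Arbe's actual pipeline; the grid extent, cell size, and sample detections are assumptions for demonstration):

```python
# Minimal occupancy-grid sketch of free-space mapping (illustrative only).
# Radar detections are binned into a 2D grid ahead of the vehicle; cells
# without returns are candidate free space for the path planner.

GRID_SIZE_M = 100.0   # assumed 100 m x 100 m area ahead of the vehicle
CELL_M = 1.0          # assumed 1 m x 1 m cells

def occupancy_grid(points):
    """points: iterable of (x, y) detections in metres; vehicle at (0, 0),
    x forward, y lateral (centred on the grid)."""
    n = int(GRID_SIZE_M / CELL_M)
    grid = [[0] * n for _ in range(n)]
    for x, y in points:
        i = int(x // CELL_M)                       # forward cell index
        j = int((y + GRID_SIZE_M / 2) // CELL_M)   # lateral cell index
        if 0 <= i < n and 0 <= j < n:
            grid[i][j] = 1                         # mark cell occupied
    return grid

# Hypothetical detections: a cluster from one object and a lone small obstacle.
detections = [(12.0, 3.5), (12.4, 3.1), (40.0, -2.0)]
grid = occupancy_grid(detections)
free_cells = sum(row.count(0) for row in grid)     # drivable candidates
```

The density of the point cloud is what makes this useful: with only a handful of low-resolution returns, most "free" cells are simply unobserved, whereas an ultra-high-resolution point cloud lets the planner distinguish genuinely empty space from gaps in coverage.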
Software Integration: Traditional radar is not designed for software-defined vehicles, while basic imaging radar has limitations. Perception radars are built for seamless integration with software-defined architectures, enabling vehicles to evolve over time through software updates. These updates can unlock new functionalities, addressing not only anticipated future demands but also unforeseen use cases that may emerge after launch. This allows for new features, bug fixes, improvements to existing features, monetization of new services, and adherence to new regulations.
Achieving true safety and autonomy requires carmakers to meticulously evaluate each sensor’s strengths and how different radar functionalities can enhance the car. While traditional radars lacked the resolution and reliability for advanced features, perception radars represent a groundbreaking leap forward. These cutting-edge systems boast exceptional resolution and reliability, empowering them to significantly improve existing ADAS features, meet the latest braking requirements set by NHTSA’s 2029 AEB standards, and pave the way for the development of advanced hands-free and eyes-off functionalities.
————————————
This blog contains “forward-looking statements” within the meaning of the Securities Act of 1933 and the Securities Exchange Act of 1934, both as amended by the Private Securities Litigation Reform Act of 1995. The words “expect,” “believe,” “estimate,” “intend,” “plan,” “anticipate,” “may,” “should,” “strategy,” “future,” “will,” “project,” “potential” and similar expressions indicate forward-looking statements. Forward-looking statements are predictions, projections and other statements about future events that are based on current expectations and assumptions and, as a result, are subject to risks and uncertainties, including the risk and uncertainties resulting from the October 7th attack upon Israel, conflicts and potential conflicts involving Israel and the effect of the reaction to the war against Hamas on Israeli companies, particularly high tech companies as well as market acceptance of Arbe’s radar processor and Arbe’s radar processor performing in the manner which Arbe anticipates, and the risk and uncertainties described in “Cautionary Note Regarding Forward-Looking Statements,” “Item 5. Operating and Financial Review and Prospects” and “Item 3. Key Information – Risk Factors” Arbe’s Annual Report on Form 20-F/A for the year ended December 31, 2023, which was filed with the Securities and Exchange Commission on March 28, 2024 as well as other documents filed by Arbe with the SEC. Accordingly, you are cautioned not to place undue reliance on these forward-looking statements. Forward-looking statements relate only to the date they were made, and Arbe does not undertake any obligation to update forward-looking statements to reflect events or circumstances after the date they were made except as required by law or applicable regulation. Information contained on, or that can be accessed through, Arbe’s website or any other website or social media is expressly not incorporated by reference into and is not a part of this blog.
© Arbe, All rights reserved