

Adversarial network radar driver

Radar (Radio Detection and Ranging) sensors and cameras are a crucial part of the sensor setup of driver assistance and highly automated driving systems. A sensor in such systems is required to maintain its precision and robustness under different, often adverse, environmental conditions. Precision describes the sensor's ability to reproduce its measurements, while robustness requires that the sensor maintain its measurement accuracy under those conditions. Most sensors are particularly suited to fulfilling either one or the other of these requirements, but none is ultimately able to fulfill both. For example, camera systems produce precise images of the environment but are very sensitive to, among other things, poor lighting conditions. Radar sensors, on the other hand, fulfill the robustness requirement but have considerably lower precision than cameras. Radar is used in today's driver assistance systems to determine the range, velocity, azimuth angle, and elevation angle of objects in the vehicle surroundings. Relying on the Doppler effect, a radar has an inherent ability to accurately measure the relative velocity of detected objects, and can therefore easily discriminate between the dynamic and static objects it detects.

This paper presents a method for fusing radar sensor measurements with camera images. A proposed fully-unsupervised machine learning algorithm converts the radar sensor data into artificial, camera-like environmental images. Through such data fusion, the algorithm produces more consistent, accurate, and useful information than that provided solely by the radar or the camera. The essential point of the work is the proposal of a novel Conditional Multi-Generator Generative Adversarial Network (CMGGAN) that, being conditioned on the radar sensor measurements, produces visually appealing images that qualitatively and quantitatively contain all environment features detected by the radar sensor.
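The Doppler-based velocity measurement mentioned above reduces to a simple relation: for a monostatic radar, the two-way Doppler shift is f_d = 2 · v_rel · f_c / c, so the relative radial velocity can be recovered directly from the measured shift. A minimal sketch, assuming a 77 GHz automotive radar; the `is_static` helper and its tolerance are hypothetical illustrations of the dynamic/static discrimination, not part of the paper:

```python
C = 299_792_458.0  # speed of light, m/s


def relative_velocity(doppler_shift_hz: float, carrier_freq_hz: float) -> float:
    """Relative radial velocity (m/s) from the two-way Doppler shift:
    v_rel = f_d * c / (2 * f_c)."""
    return doppler_shift_hz * C / (2.0 * carrier_freq_hz)


def is_static(doppler_shift_hz: float, carrier_freq_hz: float,
              ego_radial_speed: float, tol: float = 0.5) -> bool:
    """Hypothetical helper: a detection is static if its relative radial
    velocity cancels the ego vehicle's radial speed within a tolerance."""
    v = relative_velocity(doppler_shift_hz, carrier_freq_hz)
    return abs(v + ego_radial_speed) < tol


# Example: target closing at 10 m/s seen by a 77 GHz radar
fd = 2.0 * 10.0 * 77e9 / C  # ~5.1 kHz Doppler shift
print(round(relative_velocity(fd, 77e9), 3))  # 10.0
```

The tolerance in `is_static` would in practice depend on the radar's velocity resolution and the accuracy of the ego-motion estimate.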

Radar sensors are considered to be very robust under harsh weather and poor lighting conditions. Largely owing to this reputation, they have found broad application in driver assistance and highly automated driving systems. However, radar sensors have considerably lower precision than cameras. Low sensor precision causes ambiguity in the human interpretation of the measurement data and makes the data labeling process difficult and expensive. Moreover, without a large amount of high-quality labeled training data, it is difficult, if not impossible, to ensure that supervised machine learning models can predict, classify, or otherwise analyze the phenomenon of interest with the required accuracy.
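The multi-generator conditioning idea can be illustrated structurally: several generators all receive the same rasterized radar measurement and their outputs are merged into one camera-like image. The sketch below uses hypothetical shapes, untrained random linear maps, and simple averaging purely to show the data flow; it is not the paper's CMGGAN architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: radar measurements rasterized to a 32x32 grid,
# camera-like outputs of 32x32 pixels, three generators.
SIDE, N_GEN = 32, 3
GRID = SIDE * SIDE

# Each "generator" here is only a fixed random linear map plus a tanh
# nonlinearity; a real system would use trained deep networks.
weights = [rng.normal(0.0, 0.01, size=(GRID, GRID)) for _ in range(N_GEN)]


def generate(radar_grid: np.ndarray) -> np.ndarray:
    """Condition every generator on the same radar grid and average
    their outputs into a single camera-like image."""
    x = radar_grid.reshape(-1)
    outs = [np.tanh(w @ x) for w in weights]
    return np.mean(outs, axis=0).reshape(SIDE, SIDE)


radar_grid = rng.random((SIDE, SIDE))  # stand-in radar measurement
image = generate(radar_grid)
print(image.shape)  # (32, 32)
```

Because every generator sees the full radar measurement, each environment feature detected by the radar can, in principle, influence every pixel of the merged output, which is the property the fusion approach above relies on.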
