Boda Li et al.

Assisted and automated driving functions are increasingly deployed to improve safety and efficiency and to enhance the driver experience. However, key technical challenges remain, such as the degradation of perception sensor data due to noise factors. The quality of the data generated by sensors directly affects the planning and control of the vehicle, and therefore its safety. This work builds on a recently proposed framework for analysing noise factors on automotive LiDAR sensors and extends it to camera sensors, providing a detailed analysis and classification of camera-specific noise sources: 30 noise factors are identified and classified. The analysis identifies two omnipresent and independent noise factors, obstruction and windshield distortion. These are modelled to generate noisy camera data, and their impact on the deep-neural-network-based perception step is evaluated when they are applied both independently and simultaneously. It is demonstrated that the performance degradation caused by the combined noise factors is not simply the sum of the degradations caused by each factor alone, underlining the importance of analysing multiple noise factors simultaneously. The framework can thus support and enhance the use of simulation for the development and testing of automated vehicles through careful consideration of the noise factors affecting camera data.
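The two noise factors lend themselves to a short illustration. The sketch below is a minimal Python/OpenCV example, not the authors' implementation: the obstruction is assumed to be an opaque elliptical blob, and the windshield distortion is approximated by a mild radial warp plus Gaussian blur. All function names, parameters (centre, radius, k1, blur), and the test file example_frame.png are illustrative assumptions.

```python
# Hedged sketch of the two noise factors described above: an obstruction
# (opaque occlusion blob) and a windshield distortion (assumed here to be
# a mild radial warp plus blur). Not the paper's code.
import numpy as np
import cv2

def apply_obstruction(frame: np.ndarray, centre=(0.3, 0.4), radius=0.15) -> np.ndarray:
    """Paint an opaque elliptical blob over the frame to emulate a
    partial field-of-view obstruction (e.g. dirt on the lens cover)."""
    h, w = frame.shape[:2]
    out = frame.copy()
    cv2.ellipse(out,
                (int(centre[0] * w), int(centre[1] * h)),     # centre in pixels
                (int(radius * w), int(radius * h)),           # axes in pixels
                0, 0, 360, color=(20, 20, 20), thickness=-1)  # filled, near-black
    return out

def apply_windshield_distortion(frame: np.ndarray, k1=-0.08, blur=5) -> np.ndarray:
    """Emulate windshield distortion with a mild radial warp
    followed by a small Gaussian blur."""
    h, w = frame.shape[:2]
    f = max(h, w)  # crude focal-length guess for a synthetic camera matrix
    K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1]], dtype=np.float64)
    dist = np.array([k1, 0, 0, 0], dtype=np.float64)  # only k1 is non-zero
    warped = cv2.undistort(frame, K, dist)            # injects the radial warp
    return cv2.GaussianBlur(warped, (blur, blur), 0)

frame = cv2.imread("example_frame.png")  # any test frame (assumed file name)
noisy_single = apply_obstruction(frame)  # one factor in isolation
noisy_combined = apply_windshield_distortion(apply_obstruction(frame))  # both at once
```

Note that the combined output is not a simple overlay of the two independent degradations: the distortion warps and blurs the obstruction blob itself, which echoes the paper's finding that combined degradation is not the sum of the individual ones.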

Gabriele Baris et al.

Recent advances in sensing, electronics, processing, machine learning, and communication technologies are accelerating the development of assisted and automated functions for commercial vehicles. Environmental perception sensor data are processed to generate a correct and complete situational awareness, so it is of the utmost importance to assess the robustness of the sensor data pipeline, particularly when data are degraded in a noisy and variable environment. Sensor data reduction and compression techniques are key to higher levels of driving automation, as traditional automotive wired networks are not expected to support the required sensor data rates (more than 10 perception sensors, including cameras, LiDARs, and RADARs, generating tens of Gb/s). This work proposes, for the first time, to consider video compression for camera data transmission over vehicle wired networks in the presence of highly noisy data, e.g. a partially obstructed camera field of view. The effects are discussed in terms of the drop in machine learning vehicle detection accuracy, and by visualising how detection performance varies spatially across the frames using the recently introduced Spatial Recall Index. A parametric occlusion noise model emulates real-world occlusion patterns, while compression is based on the well-established AVC/H.264 standard. The results demonstrate that DNN performance is stable under increasing compression when only small amounts of noise are added. However, higher levels of occlusion noise have a stronger impact on DNN performance and, when combined with compression, lead to a significant decrease in performance.
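As a rough illustration of the evaluation pipeline described above, the sketch below injects a simple parametric occlusion noise (random opaque blobs, an assumed stand-in for the paper's model) and then round-trips the frames through AVC/H.264 using OpenCV's FFmpeg backend; codec availability depends on the local build. Function names, parameters, and file names are hypothetical.

```python
# Hedged sketch: occlusion noise followed by an H.264 encode/decode
# round-trip, producing the degraded frames a detector would see.
# Not the paper's exact implementation.
import numpy as np
import cv2

rng = np.random.default_rng(0)

def add_occlusion_noise(frame: np.ndarray, n_blobs=4, max_radius=0.1) -> np.ndarray:
    """Paint n_blobs random opaque circles to emulate partial occlusion."""
    h, w = frame.shape[:2]
    out = frame.copy()
    for _ in range(n_blobs):
        centre = (int(rng.uniform(0, w)), int(rng.uniform(0, h)))
        radius = int(rng.uniform(0.02, max_radius) * min(h, w))
        cv2.circle(out, centre, radius, color=(15, 15, 15), thickness=-1)
    return out

def h264_roundtrip(frames, fps=30, path="tmp_h264.mp4"):
    """Encode frames with H.264 and decode them again, returning the
    compressed-then-decompressed frames."""
    h, w = frames[0].shape[:2]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"avc1"), fps, (w, h))
    for f in frames:
        writer.write(f)
    writer.release()
    cap, decoded = cv2.VideoCapture(path), []
    ok, f = cap.read()
    while ok:
        decoded.append(f)
        ok, f = cap.read()
    cap.release()
    return decoded

frames = [cv2.imread("example_frame.png")] * 30   # stand-in clip (assumed file)
noisy = [add_occlusion_noise(f) for f in frames]  # occlusion first...
degraded = h264_roundtrip(noisy)                  # ...then compression
# `degraded` would then be fed to the vehicle-detection DNN and scored,
# e.g. per grid cell with the Spatial Recall Index, to map where
# detection accuracy drops across the frame.
```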