Announcement


Sensor fusion for assisted driving

26 April 2024


Category: PhD student


As part of an industrial collaboration with Forvia, we propose this PhD thesis topic on sensor fusion for assisted driving, in particular image (camera) and RADAR, at L2S (Université Paris-Saclay, CentraleSupélec).

Contact : gilles.chardon@centralesupelec.fr

 

Sensor fusion for assisted driving

The increased availability of onboard sensors and computational power in consumer cars enables an increasing number of Advanced Driver-Assistance Systems (ADAS), enhancing driving safety.

ADAS require precise knowledge of the vehicle's state and of its environment. In this PhD project, we will develop sensor fusion methods that exploit the available information efficiently.

A first topic of interest is camera-RADAR fusion. We assume that vehicles are equipped with
- cameras, imaging the environment of the vehicle,
- RADAR sensors, detecting objects and estimating their distances and radial velocities relative to the vehicle.
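
To illustrate the kind of quantities a RADAR provides, here is a minimal, purely illustrative Python sketch of an idealized point-target model: the positions, velocities, and names below are assumptions for illustration, not the sensor model used in the project.

import numpy as np

# Hypothetical point targets in the vehicle frame: position (m) and velocity (m/s).
targets_pos = np.array([[12.0, 3.0, 0.5],
                        [35.0, -1.5, 0.8]])
targets_vel = np.array([[-8.0, 0.0, 0.0],
                        [5.0, 0.2, 0.0]])

# Idealized RADAR observables for each target: range, and radial velocity
# obtained by projecting the relative velocity onto the line of sight.
ranges = np.linalg.norm(targets_pos, axis=1)
los = targets_pos / ranges[:, None]          # unit line-of-sight vectors
radial_vel = np.sum(targets_vel * los, axis=1)

for r, v in zip(ranges, radial_vel):
    print(f"range = {r:5.1f} m, radial velocity = {v:+5.1f} m/s")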

Cameras and RADAR are complementary. A camera is sensitive to shape, color, and texture, allowing the identification of objects of interest (cars, people, bikes, fixed equipment, etc.), whereas RADAR sensors can precisely estimate their distances and velocities with respect to the vehicle. Fusing this heterogeneous information yields an accurate characterization of the vehicle's environment.

There is a substantial state of the art for this fusion problem [1], with applications such as the detection and classification of objects surrounding the vehicle and the estimation of their trajectories. However, state-of-the-art methods cannot quantify the uncertainties of the estimated parameters and are computationally intensive. Based on neural networks, they operate on two-dimensional bird's-eye or front views, while the problem is three-dimensional. Moreover, the RADAR data must be transformed into an image before it can be fed to a neural network, which destroys its particular structure.

The originality of the proposed methodology lies in the three-dimensional representation of the points of interest and in the exploitation of raw RADAR data. In particular, we will consider optimal-transport-based methods [3], which can process data in heterogeneous modalities. A distribution of points of interest will be estimated by balancing
- the optimal transport cost between the estimated distribution and features extracted from the image using computer vision methods (convolutional networks, etc.), and
- the fit between the estimated distribution and the measured RADAR data, evaluated using the physical model of RADAR measurements.
Additionally, the dynamics of the environment will be taken into account, using optimal transport or point cloud tracking algorithms [4].
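
For concreteness, the following minimal Python sketch evaluates an objective of this general form in a toy 2-D setting: an entropic optimal transport term between a candidate set of points of interest and image-derived features, plus a data-fit term against RADAR ranges. The hand-rolled Sinkhorn iterations, the simplistic range-only "physical model", and all names are assumptions made for illustration; they are not the method to be developed in the thesis.

import numpy as np

def sinkhorn_cost(x, y, a, b, reg=0.05, n_iter=200):
    """Entropic OT cost between weighted point sets (x, a) and (y, b)."""
    # Squared Euclidean ground cost, normalized to [0, 1] for numerical stability.
    M = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    M = M / M.max()
    K = np.exp(-M / reg)
    u = np.ones(len(a))
    for _ in range(n_iter):                  # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]          # transport plan
    return float(np.sum(P * M))

def radar_misfit(points, measured_ranges):
    """Toy stand-in for the physical RADAR model: compare the ranges of the
    candidate points with the measured ranges (sorted, i.e. a crude assignment)."""
    ranges = np.linalg.norm(points, axis=1)
    return float(np.sum((np.sort(ranges) - np.sort(measured_ranges)) ** 2))

def objective(points, img_feats, radar_ranges, lam=1.0):
    """Weighted sum of the OT term (image side) and the RADAR data-fit term."""
    a = np.full(len(points), 1.0 / len(points))
    b = np.full(len(img_feats), 1.0 / len(img_feats))
    return sinkhorn_cost(points, img_feats, a, b) + lam * radar_misfit(points, radar_ranges)

# Toy 2-D data: "image features" (e.g. projected detections) and RADAR ranges.
img_feats = np.array([[10.0, 2.0], [30.0, -1.0]])
radar_ranges = np.array([10.2, 30.1])

# A candidate distribution of points of interest; in the project this distribution
# would be optimized, whereas here we only evaluate the composite objective.
candidate = np.array([[10.1, 1.9], [29.8, -1.2]])
print(objective(candidate, img_feats, radar_ranges))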

The expected outcomes are an improved estimation of the vehicle's environment, better robustness to RADAR outliers, and a reduced computational cost, made possible by modeling the environment as a sparse cloud of points, a characteristic of RADAR data that is usually regarded as a limitation.

Several additional fusion problems will also be considered, involving LIDAR, ultrasound sensors, and internal vehicle sensors.

Public datasets (e.g., nuScenes, RaDICaL) and data collected during the PhD will be used to develop the methods.

The PhD project will take place at L2S (Université Paris-Saclay, CentraleSupélec), under the supervision of Gilles Chardon, in the context of an industrial collaboration with Forvia, which provides financial and technical support to the project [5].

Profile: The candidate should hold an engineering degree or a Master 2 in signal or image processing, data science, or machine learning.
Contact: Gilles Chardon, gilles.chardon@centralesupelec.fr

[1] S. Yao et al., “Radar-Camera Fusion for Object Detection and Semantic Segmentation in Autonomous Driving: A Comprehensive Review,” IEEE Trans. Intell. Veh., pp. 1–40, 2023, doi: 10.1109/TIV.2023.3307157.
[2] J. Prendes, M. Chabert, F. Pascal, A. Giros, and J.-Y. Tourneret, "A Bayesian nonparametric model coupled with a Markov random field for change detection in heterogeneous remote sensing images," SIAM J. Imaging Sci., vol. 9, no. 4, pp. 1889–1921, 2016.
[3] F. Elvander, I. Haasler, A. Jakobsson, and J. Karlsson, “Multi-marginal optimal transport using partial information with applications in robust localization and sensor fusion,” Signal Processing, vol. 171, p. 107474, Jun. 2020, doi: 10.1016/j.sigpro.2020.107474.
[4] Y. Guo, H. Wang, Q. Hu, H. Liu, L. Liu, and M. Bennamoun, “Deep Learning for 3D Point Clouds: A Survey,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 12, pp. 4338–4364, Dec. 2021, doi: 10.1109/TPAMI.2020.3005434.
[5] https://www.faurecia.com/en/newsroom/forvia-centralsupelec-build-together-future-smart-vehicles