Advanced Driver-Assistance Systems (ADAS) require precise
knowledge of the vehicle's environment in order to extract
the information necessary for their operation. This knowledge can be acquired through
sensors such as cameras, radars, lidars, etc.
This project focuses on the case of an array of cameras surrounding the vehicle,
used to detect obstacles, other vehicles, etc.
The relative positions and orientations of the cameras with respect to the vehicle
must be known precisely. However, the geometry of the array can change during
the life of the vehicle, following short- or long-term deformations
of its mechanical structure. We aim to develop autocalibration methods
for the extrinsic parameters of the cameras, i.e. their relative positions and orientations
with respect to the vehicle, ensuring an adequate calibration of the camera array throughout the life of the vehicle.
The state of the art mainly focuses on the case of a single camera and relies on known references
such as chessboard targets [1], vehicle parts visible in the field of view [2], or
structured environments (lane markings [3], or "Manhattan world" assumptions [4]).
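For illustration, target-based extrinsic estimation for a single camera can be sketched with OpenCV as below; the intrinsics K, distortion dist, board geometry, and file name are placeholder assumptions, not values from the project:

    import numpy as np
    import cv2

    # Illustrative chessboard: 9x6 inner corners, 25 mm squares.
    pattern = (9, 6)
    obj_pts = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj_pts[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025

    # Placeholder intrinsics, assumed known from a prior intrinsic calibration.
    K = np.array([[800.0, 0.0, 640.0],
                  [0.0, 800.0, 360.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)  # assume negligible lens distortion for this sketch

    img = cv2.imread("frame.png")  # hypothetical camera frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Pose of the target in the camera frame (rotation vector, translation).
        _, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, dist)
        # Chaining such target poses seen by several cameras yields their
        # relative extrinsics; autocalibration dispenses with the target.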
The project aims at autocalibration in arbitrary, unstructured environments, such as off-road conditions.
Another goal is to quantify the uncertainty of the parameter estimates.
The methods developed will exploit the motion of the vehicle and possible overlaps between the cameras' fields of view, building on computer vision, Bayesian estimation [5], and Kalman filtering on Lie groups [6], which allow uncertainty quantification.
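As a flavour of the Lie-group machinery, the sketch below implements the SE(3) exponential map, the basic operation with which such filters apply a tangent-space correction to a pose estimate (pure NumPy; the twist convention [rho, phi] and the pose convention are illustrative assumptions):

    import numpy as np

    def hat(phi):
        """so(3) hat operator: 3-vector -> skew-symmetric matrix."""
        return np.array([[0.0, -phi[2], phi[1]],
                         [phi[2], 0.0, -phi[0]],
                         [-phi[1], phi[0], 0.0]])

    def exp_se3(xi):
        """se(3) exponential: twist xi = [rho, phi] -> 4x4 pose in SE(3)."""
        rho, phi = xi[:3], xi[3:]
        theta = np.linalg.norm(phi)
        W = hat(phi)
        if theta < 1e-9:  # small-angle limit
            R, V = np.eye(3) + W, np.eye(3)
        else:
            a = np.sin(theta) / theta
            b = (1.0 - np.cos(theta)) / theta**2
            c = (1.0 - a) / theta**2
            R = np.eye(3) + a * W + b * W @ W  # Rodrigues formula
            V = np.eye(3) + b * W + c * W @ W  # left Jacobian of SO(3)
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = V @ rho
        return T

    # A filter correction step perturbs the current extrinsic estimate T
    # (here, a camera pose in the vehicle frame, an assumption of this sketch)
    # by a small twist xi estimated from the measurements:
    T = np.eye(4)  # initial extrinsic estimate
    xi = np.array([0.01, 0.0, 0.0, 0.0, 0.0, 0.002])
    T = T @ exp_se3(xi)  # right-multiplicative update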
Multimodal calibration, e.g. RADAR + camera calibration, will also be considered, with the goal of performing calibration without known targets [7,8].
This project takes place in the framework of an industrial collaboration between L2S and Forvia.
Duration: 18 months
Location: L2S, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette (Paris-Saclay)
Applications are to be sent to gilles.chardon@centralesupelec.fr.
Starting date: as soon as possible
[1] Z. Xing, J. Yu, and Y. Ma, “A new calibration technique for multi-camera systems of limited overlapping
field-of-views,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),
(Vancouver, BC), pp. 5892–5899, IEEE, Sept. 2017.
[2] J. H. Lee and D.-W. Lee, “A Hough-Space-Based Automatic Online Calibration Method for a Side-Rear-
View Monitoring System,” Sensors, vol. 20, p. 3407, Jan. 2020.
[3] Y. Jo, J. Jang, M. Shin, and J. Paik, “Camera orientation estimation using voting approach on the
Gaussian sphere for in-vehicle camera,” Optics Express, vol. 27, pp. 26600–26614, Sept. 2019.
[4] J. Jang, Y. Jo, M. Shin, and J. Paik, “Camera Orientation Estimation Using Motion-Based Vanishing
Point Detection for Advanced Driver-Assistance Systems,” IEEE Transactions on Intelligent Transporta-
tion Systems, vol. 22, pp. 6286–6296, Oct. 2021.
[5] G. Guillet, T. Guillet, and L. Ravanel, “Camera orientation, calibration and inverse perspective with
uncertainties: A Bayesian method applied to area estimation from diverse photographs,” ISPRS Journal
of Photogrammetry and Remote Sensing, vol. 159, pp. 237–255, Jan. 2020.
[6] A. M. Sjoberg and O. Egeland, “Lie Algebraic Unscented Kalman Filter for Pose Estimation,” IEEE
Transactions on Automatic Control, vol. 67, pp. 4300–4307, Aug. 2022.
[7] L. Cheng, A. Sengupta, and S. Cao, “3D radar and camera co-calibration: A flexible and accurate method
for target-based extrinsic calibration,” in 2023 IEEE Radar Conference (RadarConf23), pp. 1–6, 2023.
[8] S. Agrawal, S. Bhanderi, K. Doycheva, and G. Elger, “Static Multitarget-Based Autocalibration of RGB
Cameras, 3-D Radar, and 3-D Lidar Sensors,” IEEE Sensors Journal, vol. 23, pp. 21493–21505, Sept.
2023.