Multi-sensor data fusion for detection and tracking of moving objects from a dynamic autonomous vehicle

Perception is one of the key steps in the operation of an autonomous vehicle, or even of a vehicle providing only driver-assistance functions. The vehicle observes the external world through its sensors and builds an internal model of the environment, which it keeps updated with the latest sensor data. In this setting, perception can be divided into two sub-problems: the first, called SLAM (Simultaneous Localization And Mapping), is concerned with building an online map of the external environment and localizing the host vehicle within this map; the second, called DATMO (Detection And Tracking of Moving Objects), deals with finding moving objects in the environment and tracking them over time. Using high-resolution, accurate laser scanners, many researchers have made successful efforts to solve these problems. With low-resolution or noisy laser scanners, however, solving them, especially DATMO, remains a challenge, and the results suffer from many false alarms, missed detections, or both. In this thesis we propose that by using a vision sensor (mono or stereo) along with the laser sensor, and by developing an effective fusion scheme at an appropriate level, these problems can be greatly reduced.

The main contribution of this research is the identification of three fusion levels, and the development of fusion techniques for each level, within a SLAM- and DATMO-based perception architecture for autonomous vehicles. Depending on the amount of preprocessing required before fusion, we call these levels low-level, object-detection-level and track-level fusion. For the low level we propose a grid-based fusion technique: by giving appropriate weights (depending on the sensor properties) to each grid for each sensor, a fused grid can be obtained that gives a better view of the external environment.
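The weighted grid combination just described can be sketched as follows. This is a minimal illustration, not the thesis implementation: the cell-wise linear weighting and the example weights (0.7 for lidar, 0.3 for vision) are assumptions chosen only to show the idea of sensor-dependent grid fusion.

```python
def fuse_grids(grid_lidar, grid_vision, w_lidar=0.7, w_vision=0.3):
    """Combine two occupancy grids cell by cell with sensor-dependent weights.

    Each grid is a 2-D list of occupancy probabilities in [0, 1]. The weights
    encode how much each sensor is trusted and must sum to 1, so the fused
    cell value stays a valid probability.
    """
    assert abs(w_lidar + w_vision - 1.0) < 1e-9
    return [
        [w_lidar * pl + w_vision * pv for pl, pv in zip(row_l, row_v)]
        for row_l, row_v in zip(grid_lidar, grid_vision)
    ]

# Example: a 2x2 patch where the two sensors disagree about two cells.
lidar = [[0.9, 0.1], [0.5, 0.0]]
vision = [[0.2, 0.1], [0.5, 1.0]]
fused = fuse_grids(lidar, vision)
```

With these illustrative weights, a cell the lidar considers occupied (0.9) but vision does not (0.2) fuses to 0.69, pulled toward the more trusted sensor.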
For object-detection-level fusion, the lists of objects detected by each sensor are fused into a list of fused objects, where each fused object carries more information than its constituent detections. We use a Bayesian fusion technique at this level. Track-level fusion requires tracking the moving objects for each sensor separately and then fusing the resulting tracks into fused tracks; fusion at this level helps remove false tracks.

The second contribution of this research is the development of a fast technique for finding road borders in noisy laser data, and the use of this border information to remove false moving objects. We have observed that many false moving objects appear near the road borders due to sensor noise. If they are not filtered out, they result in many false tracks close to the vehicle, causing the vehicle to brake or to issue warning messages to the driver falsely.

The third contribution is the development of a complete perception solution for lidar and stereo vision sensors, and its integration on a real vehicle demonstrator used for a European Union project (INTERSAFE-2). This project is concerned with safety at intersections and aims at reducing injury and fatal accidents there. In this project we worked in collaboration with Volkswagen, the Technical University of Cluj-Napoca (Romania) and INRIA Paris to provide a complete perception and risk assessment solution.
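One common form of the Bayesian fusion mentioned for the object-detection level is to combine two independent position estimates under a Gaussian assumption. The sketch below is a hypothetical 1-D illustration of that idea (inverse-variance weighting); the abstract does not specify the exact model or numbers used in the thesis.

```python
def fuse_gaussian(x1, var1, x2, var2):
    """Fuse two independent Gaussian estimates of the same quantity.

    Returns the posterior mean and variance via inverse-variance weighting;
    the fused estimate is always more certain (smaller variance) than
    either input, and its mean leans toward the more precise sensor.
    """
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    x = var * (x1 / var1 + x2 / var2)
    return x, var

# Illustrative numbers: lidar locates an object at 10.0 m (variance 0.04 m^2),
# stereo vision locates it at 10.6 m (variance 0.36 m^2).
x, var = fuse_gaussian(10.0, 0.04, 10.6, 0.36)
```

Here the fused position stays close to the lidar estimate because the lidar measurement is far more precise, while the fused variance drops below both input variances.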


Additional Info

Source: https://theses.hal.science/tel-00858441
Author: Baig, Qadeer
Maintainer: CCSD
Last Updated: May 9, 2026, 20:55 (UTC)
Created: May 9, 2026, 20:55 (UTC)
Identifier: NNT: 2012GRENM008
Language: fr
Rights: https://about.hal.science/hal-authorisation-v1/
Contributor: Laboratoire d'Informatique de Grenoble (LIG); Université Pierre Mendès France - Grenoble 2 (UPMF)-Université Joseph Fourier - Grenoble 1 (UJF)-Institut polytechnique de Grenoble - Grenoble Institute of Technology (Grenoble INP)-Institut National Polytechnique de Grenoble (INPG)-Centre National de la Recherche Scientifique (CNRS)
Date: 2012-02-29