Intelligent Systems Lab, Universidad Carlos III de Madrid

Advanced Driver Assistance Systems

    Road vehicles play a central role in today's transportation systems, being the most commonly used means of transport. This raises several problems, such as traffic jams, a high number of accidents and pollution. Advanced Driver Assistance Systems (ADAS) aim to improve safety, efficiency and comfort in road transportation by means of information technologies.

    Beyond classic passive and active safety systems (ABS, ESP, etc.), ADAS represent a step forward: they anticipate and help avoid accidents by warning the driver of hazardous situations. Furthermore, they contribute to the ambitious goals set out by the EU Horizon 2020 and Vision Zero programs.

    Advances in perception and computing technologies have enabled the development of these systems, some of which are already available in commercial vehicles. The LSI's work in this field seeks to provide added value to this sector, focusing on the development of perception and data-fusion technologies. The results are endorsed by the numerous projects, publications and awards obtained by the group during its years of activity.

    The results obtained over these years of research activity are embodied in the two platforms and the various technologies developed by the laboratory.

IVVI 2.0 Platform

    IVVI 2.0 (an acronym for Intelligent Vehicle based on Visual Information) is the second research platform of the Intelligent Systems Lab. It is equipped with advanced sensors and processing systems for the development and testing of Advanced Driver Assistance Systems.

    While featuring cutting-edge equipment, IVVI 2.0 has been designed according to the expected future trends in driving assistance systems: all computers, sensors and human-machine interfaces are almost invisibly integrated into the vehicle. The sensing devices include:

  • A stereo-vision system for road, object (vehicles, pedestrians, etc.) and traffic sign detection and classification in day driving conditions.
  • A far-infrared-range (FIR) camera, mounted on the rear-view mirror, for pedestrian detection in night driving conditions.
  • A multi-layer laser mounted on the front bumper for object detection.
  • A motion sensing input device, placed on the dashboard, for face detection and driver monitoring.
  • A CAN bus communication device, based on a low-cost embedded system, for driver behavior analysis.
  • A GNSS receiver and an Inertial Measurement Unit (IMU), integrated in a platform on the roof of the IVVI vehicle, for providing information about the vehicle ego position and motion.
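    As a brief illustration of how a stereo-vision system like the one listed above can recover distance (a minimal sketch under simple assumptions, not the IVVI implementation; the focal length, baseline and disparity values below are hypothetical), depth follows from disparity on a rectified stereo pair via Z = f · B / d:

```python
# Illustrative sketch: depth from stereo disparity (Z = f * B / d).
# The focal length, baseline and disparity are hypothetical values,
# not parameters of the IVVI 2.0 cameras.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in meters for a rectified stereo pair.

    disparity_px: horizontal pixel offset between the two views.
    focal_px:     focal length expressed in pixels.
    baseline_m:   distance between the two camera centers, in meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 0.30 m baseline; a 21 px disparity
# corresponds to a point 10 m ahead of the vehicle.
print(depth_from_disparity(21, 700, 0.30))
```

    Note the inverse relation: nearby obstacles produce large disparities and are measured accurately, while depth resolution degrades with distance.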

    Additionally, two new side-looking cameras are expected to be added in order to detect vehicles (and other traffic participants) at roundabouts and junctions. Data from the sensors are processed in real time on a computing platform housed in the boot. The software architecture is based on the ROS (Robot Operating System) framework, enabling both low-level and high-level information fusion.
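    High-level fusion of the kind mentioned above can be sketched as associating object detections from two sensors, for instance the laser and the stereo camera, when they fall within a distance gate. This is a simplified, hypothetical example (the gate size, detection lists and averaging rule are assumptions, not the lab's algorithm):

```python
# Illustrative sketch of high-level sensor fusion: pairing object
# detections from two sensors (e.g. laser scanner and stereo camera)
# by nearest-neighbour gating. All values are hypothetical.
import math

def fuse_detections(laser_objs, camera_objs, gate_m=1.0):
    """Pair each laser detection (x, y) with the closest camera
    detection within gate_m meters; the fused position is the
    average of the matched pair."""
    fused = []
    for lx, ly in laser_objs:
        best, best_d = None, gate_m
        for cx, cy in camera_objs:
            d = math.hypot(lx - cx, ly - cy)
            if d < best_d:
                best, best_d = (cx, cy), d
        if best is not None:
            fused.append(((lx + best[0]) / 2, (ly + best[1]) / 2))
    return fused

# Both sensors see roughly the same vehicle about 15 m ahead;
# the fused estimate averages the two measurements.
print(fuse_detections([(15.0, 0.2)], [(15.4, 0.0)]))
```

    In a ROS-based architecture each sensor would publish its detections on its own topic and a fusion node would subscribe to both; the association logic itself would look much like the loop above.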

IVVI 2 Showcase
IVVI 2 Inside

IVVI 1.0 Platform

(Completed in 2009)

    IVVI (an acronym for Intelligent Vehicle based on Visual Information) was the LSI's first experimental platform for researching and developing Advanced Driver Assistance Systems (ADAS) based on image analysis and computer vision. Research results obtained on that platform are currently being implemented in the current vehicle (IVVI 2.0).

    IVVI included a black-and-white stereo-vision system developed by the LSI, built with progressive-scan cameras to avoid the artifacts associated with interlaced video, allowing it to capture images while the vehicle was in motion. There was also a color CCD camera for the detection of traffic signs and other vertical signage.

    Additionally, the vehicle carried a far-infrared camera that perceived the heat emitted by obstacles. This enabled it to detect pedestrians and other vehicles under poor visibility or night driving conditions.

    Inside the vehicle there was another camera, devoted to evaluating the driver's attention. Infrared LEDs provided illumination at night without disturbing the driver.

    Moreover, a GPS provided information about the vehicle's pose and localization, allowing every perception subsystem to know the vehicle's heading and speed. The image information could therefore be analyzed over time, providing a continuous perception of the environment that enabled the system to react before a dangerous situation developed.
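    The temporal analysis described above relies on knowing the vehicle's own motion between frames. A minimal sketch of how speed and heading can be derived from two successive position fixes (using local metric coordinates rather than raw latitude/longitude; the coordinates and time step are hypothetical):

```python
# Illustrative sketch: estimating ego speed and heading from two
# successive position fixes, as a perception subsystem might do.
# Positions are local metric coordinates (east, north) in meters.
import math

def speed_and_heading(p_prev, p_curr, dt_s):
    """Return (speed in m/s, heading in degrees clockwise from north)
    from two (east, north) fixes taken dt_s seconds apart."""
    dx = p_curr[0] - p_prev[0]   # displacement to the east
    dy = p_curr[1] - p_prev[1]   # displacement to the north
    speed = math.hypot(dx, dy) / dt_s
    heading = math.degrees(math.atan2(dx, dy)) % 360.0
    return speed, heading

# The vehicle moved 10 m due east in 1 s: 10 m/s, heading 90 degrees.
print(speed_and_heading((0.0, 0.0), (10.0, 0.0), 1.0))
```

    With this ego-motion estimate, apparent image motion caused by the vehicle itself can be separated from the motion of other road users.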

    On board the vehicle, two PCs handled image processing and all other computation, and an electronic switch for the video, mouse and keyboard signals allowed the operator, sitting in the back seat, to work with both systems simultaneously. Furthermore, a DC/AC power converter connected to the vehicle's battery supplied electrical power to the on-board computers and cameras.