ADASAS

Thesis 2012 / Advanced Driver Assistance System-Assistance System

ADASAS (Advanced Driver Assistance System-Assistance System)

ADASAS is a vehicle safety system that combines computer vision, digital technologies, and a head-up display. The project began as a study of innovative solutions, pairing hardware and software engineering with human sensory abilities, and is now focused on providing advanced driver assistance.

Augmented Reality on Head-Up Display

The emergence of concept-car technologies using HUDs (head-up displays) and AR (augmented reality) at most major vehicle companies paves the way for futuristic vehicle technology that will keep evolving over the coming years. In this period of major change, we are in the process of overlaying digital information onto the real world, which creates a sensory gap between the driver and the technology. ADASAS’s tracking solution uses both HUD and AR technologies to effectively reduce the sensory disconnection between a driver and information.

 

Face Tracker

This high-performance face detector locks onto and follows the viewer’s face anywhere in the capture space. Built on Jason Saragih’s FaceTracker, the system measures the distance between the eyes and uses it to estimate how far the viewer is from the camera. The head-up display information is then repositioned according to the face position and zoomed in or out according to the eye distance.
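Under a pinhole camera model, the eye-distance-to-depth mapping described above can be sketched as follows. The average interpupillary distance and the example focal length are assumptions for illustration, not values from the project:

```python
# Sketch of distance estimation from eye separation (pinhole camera model).
# Assumption: an average adult interpupillary distance of ~6.3 cm and a
# focal length in pixels known from camera calibration.

AVERAGE_IPD_CM = 6.3  # assumed typical interpupillary distance

def distance_from_eyes(eye_px: float, focal_px: float,
                       ipd_cm: float = AVERAGE_IPD_CM) -> float:
    """Estimate the camera-to-face distance (cm) from the pixel distance
    between the two detected eye centers."""
    if eye_px <= 0:
        raise ValueError("eye distance must be positive")
    # Pinhole model: real_size / distance = pixel_size / focal_length
    return ipd_cm * focal_px / eye_px

# Example: eyes 60 px apart at a 600 px focal length -> about 63 cm away
```

The estimate degrades for people whose eye spacing differs from the assumed average, but it is sufficient for driving zoom-in and zoom-out of HUD content.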

Embedding (Virtual) Augmented Reality and Deer Detection

To demonstrate the effectiveness of a driver’s sight and position tracking system, combined with an augmented reality system on a head-up display, two test cubes are placed in a virtual three-dimensional space. Each cube represents the space of a real-world landmark such as a restaurant, a store, or a gas station.

+ Adding Deer Detection

Drivers receive information from their surroundings through their senses of sight and hearing. Sight accounts for 80–90% of actionable data, so vision is one of the most adaptable senses to combine with technology. As computer vision technology improves to remarkable accuracy, many vehicle systems have adopted camera vision to detect possible dangers such as pedestrians and cars in blind spots. Using the same vision technology, ADASAS enables animal detection, preventing accidents and limiting roadkill.
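As an illustration of how a detector’s output could drive a warning, the sketch below checks whether any detected animal bounding box enters a danger zone ahead of the vehicle. The function names, box format, and alert rule are assumptions for illustration, not part of the original system:

```python
# Illustrative alert logic downstream of a vision-based animal detector.
# Boxes are (x, y, w, h) in image coordinates; the danger zone is an
# assumed region covering the vehicle's projected path.

def boxes_overlap(a, b):
    """Axis-aligned overlap test for two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def should_alert(detections, danger_zone):
    """Raise a warning if any detected animal enters the danger zone."""
    return any(boxes_overlap(d, danger_zone) for d in detections)
```

In a real deployment the detections would come from a trained detector and the danger zone would scale with vehicle speed; both are outside the scope of this sketch.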

Perspective, Viewpoint, Human and Machine

Adjusting the perspective based only on the driver’s position is not enough to register information on a head-up display. With even a tiny movement of the driver, every element projected on the HUD lands in the wrong position and the AR information becomes useless. The previous prototype considered only the driver’s viewpoint and perspective; the viewpoint of the machine, the screen or HUD, has to be considered as well. Simple panning of the information does not work with a HUD, because two pivots have to be calculated. One is the driver’s viewpoint, which is not fixed at an absolute point; the other is the center pivot of the screen, the point where the screen intersects the straight line from the driver to the vanishing point. The center of the information should follow the driver’s viewpoint, and the distance between the two pivots determines the scale of the information. Consequently, a farther object moves more than a closer object, and an object is projected smaller as the driver gets closer to the screen.
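The two-pivot geometry above can be sketched as a ray-plane intersection. Assuming, for illustration only, that the HUD screen is the plane z = 0, the driver’s eye sits at z < 0 behind it, and virtual objects sit at z > 0 in front of it:

```python
# Sketch of the two-pivot HUD projection: the projected point is where
# the ray from the driver's eye through the virtual object crosses the
# screen plane z = 0. Coordinate conventions here are assumptions.

def project_to_hud(eye, obj):
    """Return the (x, y) point on the screen plane z = 0 where the line
    from the eye (ez < 0) to the object (oz > 0) crosses it."""
    ex, ey, ez = eye
    ox, oy, oz = obj
    t = -ez / (oz - ez)  # ray parameter at the screen plane
    return (ex + t * (ox - ex), ey + t * (oy - ey))

def hud_scale(eye_z, obj_z):
    """Size ratio of the projection to the object: it shrinks as the
    driver (eye_z < 0) moves toward the screen plane."""
    return -eye_z / (obj_z - eye_z)
```

This reproduces both effects described above: as the eye moves sideways, the projection of a farther object (whose ray runs nearly parallel to the eye’s motion) shifts more than that of a closer one, and as the eye approaches the screen the scale ratio decreases, so the object is projected smaller.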