Methods for camera- and head-up-display-based driver-car communication
With more and more cars on the road and an increasing amount of information presented to the driver by the in-car infotainment system, the need for efficient and safe driver-car communication arises. In this thesis, three new communication techniques are evaluated that follow the premise "keep the driver's hands on the steering wheel and the eyes on the road". Cameras serve as input devices and the Head-up Display (HUD) as the output device.

As an indirect input method, thumb gestures performed on the steering wheel are detected and classified by a deep convolutional recurrent neural network. A novel pretraining step is introduced that enables training the network on a relatively small dataset. In an offline validation, the resulting network achieves a detection rate of 75.3 %.

As a direct input method, a novel gaze-based interaction technique built on smooth pursuit eye movements is developed. With this method, the driver is able to "click" anywhere on the HUD without using the hands. In laboratory evaluations, this method has proven to be significantly faster and less error-prone than the state of the art. Furthermore, the smooth pursuit eye movements can be utilized to increase the local accuracy of the eye tracker by 25 %.

Finally, a visual guidance system is presented and evaluated that mitigates change blindness in heavy-rain driving conditions. To do so, the motion transients of undetected changes are presented to the driver on a Contact-analogue Head-up Display. In a driving simulator study, this system has proven to significantly reduce the accident risk in rainy conditions.
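To make the gesture-recognition idea concrete, the following is a minimal sketch of a convolutional recurrent classifier in Python (PyTorch): a per-frame CNN extracts features from steering-wheel camera images, and a GRU aggregates them over time into a gesture label. The layer sizes, input resolution, and number of gesture classes are illustrative assumptions; the abstract does not specify the thesis's actual architecture or its pretraining step.

    import torch
    import torch.nn as nn

    class ThumbGestureNet(nn.Module):
        """Convolutional-recurrent classifier: a small CNN encodes each
        camera frame, a GRU labels the gesture over the whole clip."""
        def __init__(self, num_gestures=6, hidden=128):  # class count assumed
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            self.rnn = nn.GRU(32 * 4 * 4, hidden, batch_first=True)
            self.head = nn.Linear(hidden, num_gestures)

        def forward(self, frames):                    # frames: (B, T, 1, H, W)
            b, t = frames.shape[:2]
            f = self.features(frames.flatten(0, 1))   # (B*T, 32, 4, 4)
            f = f.flatten(1).view(b, t, -1)           # (B, T, 512)
            _, h = self.rnn(f)                        # h: (1, B, hidden)
            return self.head(h[-1])                   # gesture logits

    model = ThumbGestureNet()
    logits = model(torch.randn(2, 30, 1, 64, 64))     # 2 clips of 30 frames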
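The abstract does not detail how a smooth-pursuit "click" is decided, but pursuit-based interfaces commonly correlate the gaze trace with the trajectories of moving on-screen targets and select the target the eyes are following best. A minimal sketch of that correlation scheme follows; the threshold value and the fixed-length time window are assumptions.

    import numpy as np

    def pursuit_select(gaze_xy, targets_xy, threshold=0.8):
        """Select the moving HUD target whose trajectory best matches the
        gaze trace (Pearson r on x and y over the same time window).

        gaze_xy:    (T, 2) array of gaze samples
        targets_xy: dict mapping target name -> (T, 2) target positions
        Returns the selected target name, or None if no correlation
        clears the threshold.
        """
        best_name, best_r = None, threshold
        for name, traj in targets_xy.items():
            rx = np.corrcoef(gaze_xy[:, 0], traj[:, 0])[0, 1]
            ry = np.corrcoef(gaze_xy[:, 1], traj[:, 1])[0, 1]
            r = min(rx, ry)        # gaze must follow the target on both axes
            if r > best_r:
                best_name, best_r = name, r
        return best_name

Because selection depends only on relative motion, not absolute gaze position, the same matching residual can also be used to estimate and correct a local calibration offset of the eye tracker, which is one way to read the reported 25 % accuracy gain.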
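For the change-blindness countermeasure, one conceivable building block is a change mask obtained by differencing successive scene states; the mask marks where a motion-transient cue could be rendered on the contact-analogue HUD. This is a hypothetical illustration under assumed grayscale inputs and threshold, not the thesis's actual detection pipeline.

    import numpy as np

    def change_mask(prev_gray, curr_gray, thresh=25):
        """Flag pixels whose intensity changed noticeably between two
        grayscale frames; a cue replaying the motion transient could be
        drawn over these regions on the HUD."""
        diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
        return diff > thresh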