Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Gerhard Rigoll is active.

Publication


Featured research published by Gerhard Rigoll.


Pattern Recognition | 2018

A deep convolutional neural network for video sequence background subtraction

Mohammadreza Babaee; Duc Tung Dinh; Gerhard Rigoll

We propose a novel deep-learning approach to background subtraction from video sequences. A new algorithm for generating the background model is proposed; input image patches and their corresponding background images are fed into a CNN that performs the subtraction, and a median filter is applied to enhance the segmentation results. Experiments on change-detection data confirm the performance of the proposed approach. In this work, we present a novel background-subtraction algorithm for video sequences that uses a deep Convolutional Neural Network (CNN) to perform the segmentation. With this approach, feature engineering and parameter tuning become unnecessary, since the network parameters can be learned from data by training a single CNN that handles various video scenes. Additionally, we propose a new approach to estimate the background model from video sequences. To train the CNN, we randomly selected 5% of the video frames and their ground-truth segmentations from the Change Detection challenge 2014 (CDnet 2014). We also applied spatial median filtering as post-processing of the network outputs. Our method (called DeepBS) is evaluated on different datasets and outperforms existing algorithms with respect to the average ranking over the evaluation metrics announced in CDnet 2014. Furthermore, due to its network architecture, our CNN is capable of real-time processing.
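The described pipeline (patch-wise CNN segmentation against an estimated background image, followed by spatial median filtering) can be illustrated with a minimal sketch. The network layout, patch size, and names (SimpleBGSNet, median_filter) below are assumptions for illustration, not the architecture from the paper.

```python
# Minimal sketch of a patch-based CNN background-subtraction pipeline with
# median-filter post-processing (illustrative only; layer sizes are assumed).
import torch
import torch.nn as nn

class SimpleBGSNet(nn.Module):
    """Takes an image patch stacked with its background patch (6 channels)
    and predicts a per-pixel foreground probability."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, frame_patch, background_patch):
        x = torch.cat([frame_patch, background_patch], dim=1)  # N x 6 x H x W
        return self.net(x)                                     # N x 1 x H x W

def median_filter(mask, k=9):
    """Spatial median filtering of the segmentation mask (post-processing)."""
    pad = k // 2
    patches = nn.functional.unfold(mask, kernel_size=k, padding=pad)  # N x k*k x H*W
    filtered = patches.median(dim=1).values
    return filtered.view_as(mask.squeeze(1)).unsqueeze(1)

# Usage: stack a frame patch with its background estimate, threshold, filter.
frame, background = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
prob = SimpleBGSNet()(frame, background)
segmentation = median_filter((prob > 0.5).float())
```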


Human Factors in Computing Systems | 2017

GazeEverywhere: Enabling Gaze-only User Interaction on an Unmodified Desktop PC in Everyday Scenarios

Simon Schenk; Marc Dreiser; Gerhard Rigoll; Michael Dorr

Eye tracking is becoming more and more affordable, and thus gaze has the potential to become a viable input modality for human-computer interaction. We present the GazeEverywhere solution, which can replace the mouse with gaze control by adding a transparent layer on top of the system GUI. It comprises three parts: i) the SPOCK interaction method, which is based on smooth pursuit eye movements and does not suffer from the Midas touch problem; ii) an online recalibration algorithm that continuously improves gaze-tracking accuracy using the SPOCK target projections as reference points; and iii) an optional hardware setup utilizing head-up display technology to project superimposed dynamic stimuli onto the PC screen where a software modification of the system is not feasible. In validation experiments, we show that GazeEverywhere's throughput according to ISO 9241-9 was improved over dwell-time-based interaction methods and nearly reached trackpad level. Online recalibration reduced interaction target (button) size by about 25%. Finally, a case study showed that users were able to browse the internet and successfully run Wikirace using gaze only, without any plug-ins or other modifications.
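Smooth-pursuit selection is generally detected by checking how closely the recorded gaze trajectory follows a known moving on-screen target, and confirmed selections yield reference pairs for recalibrating the tracker. The sketch below illustrates that general idea under assumed names and thresholds; it is not the SPOCK algorithm or the paper's recalibration procedure.

```python
# Illustrative smooth-pursuit selection check and affine recalibration
# (assumed criterion and threshold; not the SPOCK algorithm itself).
import numpy as np

def pursuit_score(gaze_xy, target_xy):
    """Pearson correlation between gaze and target trajectories, per axis.

    gaze_xy, target_xy: (T, 2) screen coordinates over the same time window."""
    corr_x = np.corrcoef(gaze_xy[:, 0], target_xy[:, 0])[0, 1]
    corr_y = np.corrcoef(gaze_xy[:, 1], target_xy[:, 1])[0, 1]
    return min(corr_x, corr_y)

def is_selected(gaze_xy, target_xy, threshold=0.8):
    """A target counts as selected only if gaze follows its motion closely,
    so merely resting the gaze on a button cannot trigger it (no Midas touch)."""
    return pursuit_score(gaze_xy, target_xy) >= threshold

def fit_affine_correction(raw_gaze, true_points):
    """Online recalibration idea: confirmed selections provide pairs of raw
    gaze samples and known target positions; fit an affine correction map."""
    A = np.hstack([raw_gaze, np.ones((len(raw_gaze), 1))])    # (N, 3)
    coeffs, *_ = np.linalg.lstsq(A, true_points, rcond=None)  # (3, 2)
    return lambda g: np.hstack([g, np.ones((len(g), 1))]) @ coeffs
```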


IEEE Virtual Reality Conference | 2017

A diminished reality simulation for driver-car interaction with transparent cockpits

Patrick Lindemann; Gerhard Rigoll

We anticipate advancements in mixed reality device technology which might benefit driver-car interaction scenarios and present a simulated diminished reality interface for car drivers. It runs in a custom driving simulation and allows drivers to perceive otherwise occluded objects of the environment through the car body. We expect to obtain insights that will be relevant to future real-world applications. We conducted a pre-study with participants performing a driving task with the prototype in a CAVE-like virtual environment. Users preferred large-sized see-through areas over small ones but had differing opinions on the level of transparency to use. In future work, we plan additional evaluations of the driving performance and will further extend the simulation.
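The see-through effect amounts to blending a render of the occluded environment into the cockpit view within a configurable region and at a configurable opacity, which are exactly the two factors the pre-study examined (area size and transparency level). A minimal alpha-blending sketch of that idea follows; the function name and the fixed rectangular region are assumptions, not the simulator's implementation.

```python
# Simple alpha-blending sketch of a see-through cockpit region
# (illustrative only; the study renders this in a CAVE-like environment).
import numpy as np

def blend_see_through(cockpit_view, occluded_view, region, alpha=0.5):
    """Blend the occluded-environment render into the cockpit render.

    cockpit_view, occluded_view: H x W x 3 float images in [0, 1]
    region: (y0, y1, x0, x1) rectangle acting as the see-through area
    alpha: transparency of the car body inside the region
           (0 = opaque cockpit, 1 = fully see-through)."""
    out = cockpit_view.copy()
    y0, y1, x0, x1 = region
    out[y0:y1, x0:x1] = ((1.0 - alpha) * cockpit_view[y0:y1, x0:x1]
                         + alpha * occluded_view[y0:y1, x0:x1])
    return out
```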


Automotive User Interfaces and Interactive Vehicular Applications | 2017

Examining the Impact of See-Through Cockpits on Driving Performance in a Mixed Reality Prototype

Patrick Lindemann; Gerhard Rigoll

We built and evaluated a see-through cockpit prototype for a driving simulation in a mixed reality environment, simulating an HMD-based interface. Advantages of such a system may include better driving performance, collision avoidance and situation awareness. Early results from driving-line data indicate potential for improving lateral control by driving with transparent cockpits and show no difference between different levels of transparency. We extended our prototype based on these results and abandoned the head-registered interface in favor of a simulation of a projection-based system targeting specific car parts. We present the current prototype and discuss how it relates to existing proofs of concept and potential future real-world implementations. We plan to evaluate the latest prototype in a larger-scale study to determine its impact on lane-keeping performance. We also want to consider impairments of the real world and plan to evaluate the system with artificially induced head-tracking errors.
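Lane-keeping performance from driving-line data is commonly summarized by the deviation of the vehicle's lateral position from the lane center. The snippet below is a hedged sketch of such metrics; the paper does not specify which measures were actually used.

```python
# Assumed lane-keeping metrics over a recorded driving line
# (the paper does not state its exact measures).
import numpy as np

def lateral_control_metrics(lateral_position, lane_center=0.0):
    """Mean absolute deviation and standard deviation of lateral position (m)."""
    deviation = np.asarray(lateral_position, dtype=float) - lane_center
    return {
        "mean_abs_deviation": float(np.mean(np.abs(deviation))),
        "sd_lateral_position": float(np.std(deviation)),
    }
```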


German Conference on Pattern Recognition | 2017

Improving Facial Landmark Detection via a Super-Resolution Inception Network

Martin Knoche; Daniel Merget; Gerhard Rigoll

Modern convolutional neural networks for facial landmark detection have become increasingly robust against occlusions, lighting conditions and pose variations. With the predictions being close to pixel-accurate in some cases, intuitively, the input resolution should be as high as possible. We verify this intuition by thoroughly analyzing the impact of low image resolution on landmark prediction performance. Indeed, performance degradations are already measurable for faces smaller than 50 × 50 px. In order to mitigate those degradations, a new super-resolution inception network architecture is developed which outperforms recent super-resolution methods on various data sets. By enhancing low resolution images with our model, we are able to improve upon the state of the art in facial landmark detection.
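An inception-style super-resolution network typically processes the low-resolution face through parallel convolution branches of different kernel sizes and then upsamples the fused features. The block below is a sketch of that pattern with assumed branch widths, kernel sizes, and pixel-shuffle upsampling; it is not the architecture proposed in the paper.

```python
# Sketch of an inception-style super-resolution network (all sizes assumed;
# not the architecture from the paper).
import torch
import torch.nn as nn

class InceptionSRBlock(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 branches, concatenated and fused, with a
    residual connection."""
    def __init__(self, channels=32):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x):
        branches = torch.cat(
            [self.branch1(x), self.branch3(x), self.branch5(x)], dim=1)
        return self.act(self.fuse(branches)) + x

class TinySRNet(nn.Module):
    """Low-resolution face in, 2x upscaled face out (e.g. 25x25 -> 50x50)."""
    def __init__(self, channels=32):
        super().__init__()
        self.head = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.body = nn.Sequential(InceptionSRBlock(channels),
                                  InceptionSRBlock(channels))
        self.up = nn.Sequential(
            nn.Conv2d(channels, 3 * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2))  # 2x spatial upsampling

    def forward(self, low_res):
        return self.up(self.body(self.head(low_res)))

# Usage: enhance the low-resolution face before running a landmark detector.
high_res = TinySRNet()(torch.rand(1, 3, 25, 25))  # -> 1 x 3 x 50 x 50
```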


Computer Vision and Pattern Recognition | 2018

Motion Fused Frames: Data Level Fusion Strategy for Hand Gesture Recognition

Okan Köpüklü; Neslihan Kose; Gerhard Rigoll


Computer Vision and Pattern Recognition | 2018

Robust Facial Landmark Detection via a Fully-Convolutional Local-Global Context Network

Daniel Merget; Matthias Rock; Gerhard Rigoll


Multimodal Technologies and Interaction | 2018

Catch My Drift: Elevating Situation Awareness for Highly Automated Driving with an Explanatory Windshield Display User Interface

Patrick Lindemann; Tae-Young Lee; Gerhard Rigoll


International Conference on Image Processing | 2017

Multi-view human activity recognition using motion frequency

Neslihan Kase; Mohammadreza Babaee; Gerhard Rigoll


International Conference on Image Processing | 2017

Joint tracking and gait recognition of multiple people in video

Maryam Babaee; Gerhard Rigoll; Mohammadreza Babaee

Collaboration


Dive into Gerhard Rigoll's collaborations.
