Publication


Featured research published by Florian van de Camp.


Interactive Tabletops and Surfaces | 2009

Extending touch: towards interaction with large-scale surfaces

Alexander Schick; Florian van de Camp; Joris Ijsselmuiden; Rainer Stiefelhagen

Touch is a very intuitive modality for interacting with objects displayed on arbitrary surfaces. However, when using touch on large-scale surfaces, not every point is reachable. Therefore, an extension is required that keeps the intuitiveness provided by touch: pointing. We present a system that allows both input modalities in a single framework. Our method is based on 3D reconstruction, using standard RGB cameras only, and allows seamless switching between touch and pointing, even during interaction. Our approach scales very well to large surfaces without modifying them. We present a technical evaluation of the system's accuracy, as well as a user study. We found that users preferred our system to a touch-only system, because they had more freedom during interaction and could solve the presented task significantly faster.
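
The mode switch described in the abstract can be pictured as a small geometric decision: treat the reconstructed fingertip as a touch when it is close to the display plane, and otherwise cast a ray along the pointing direction. The sketch below only illustrates that idea and is not the authors' implementation; the 3 cm threshold, the planar-display assumption, and the input vectors are assumptions.

```python
import numpy as np

TOUCH_DISTANCE_M = 0.03  # assumed threshold: fingertip closer than 3 cm counts as touch

def surface_cursor(fingertip, forearm_dir, plane_point, plane_normal):
    """Map a reconstructed 3D fingertip to a cursor on a planar display.

    Returns (mode, point_on_surface), where mode is 'touch' when the fingertip
    is near the surface and 'pointing' otherwise (ray cast along the forearm).
    All inputs are 3D numpy vectors in the same world coordinate frame.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    signed_dist = np.dot(fingertip - plane_point, n)

    if abs(signed_dist) < TOUCH_DISTANCE_M:
        # Touch: project the fingertip orthogonally onto the surface plane.
        return "touch", fingertip - signed_dist * n

    # Pointing: intersect the forearm ray with the surface plane.
    d = forearm_dir / np.linalg.norm(forearm_dir)
    denom = np.dot(d, n)
    if abs(denom) < 1e-6:
        return "none", None           # ray parallel to the surface
    t = np.dot(plane_point - fingertip, n) / denom
    if t < 0:
        return "none", None           # pointing away from the surface
    return "pointing", fingertip + t * d
```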


International Conference on 3D Vision | 2014

Real Time Head Model Creation and Head Pose Estimation on Consumer Depth Cameras

Manuel Martin; Florian van de Camp; Rainer Stiefelhagen

Head pose estimation is an important part of human perception and is therefore also relevant for making interaction with computer systems more natural. However, accurate estimation of the pose over a wide range is a challenging computer vision problem. We present an accurate approach for head pose estimation on consumer depth cameras that works over a wide pose range without prior knowledge about the tracked person and without prior training of a detector. Our algorithm builds and registers a 3D head model with the iterative closest point algorithm. To track the head pose using this head model, an initialization with a known pose is necessary. Instead of providing such an initialization manually, we determine the initial pose using features of the head and improve this pose over time. An evaluation shows that our algorithm works in real time with limited resources and achieves superior accuracy compared to other state-of-the-art systems. Our main contribution is the combination of head features and head model generation to build a detector that gives accurate results over a wide pose range.
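
A minimal version of the registration step can be written as point-to-point ICP with nearest-neighbour correspondences and a Kabsch alignment. This is a generic sketch under the assumption that the head has already been segmented from the depth frame as a point cloud; it omits the paper's model accumulation and feature-based initialization.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(model, frame, iters=30):
    """Register the accumulated head model onto the current frame's head points."""
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(frame)
    pts = model.copy()
    for _ in range(iters):
        _, idx = tree.query(pts)               # nearest-neighbour correspondences
        R, t = best_rigid_transform(pts, frame[idx])
        pts = pts @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total                    # head rotation and translation
```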


International Conference on Distributed Smart Cameras | 2009

Person tracking in camera networks using graph-based Bayesian inference

Florian van de Camp; Keni Bernardin; Rainer Stiefelhagen

In this paper, a probabilistic approach for tracking multiple persons through a network of distributed cameras is presented. The approach deals with the main problems associated with tracking persons through wide-area camera networks - bridging large observation gaps between camera views and re-identifying persons - by building on robust and view-invariant high-level features as well as a highly error-tolerant probabilistic filtering of person locations. The extraction quality and discriminative power of the proposed features are evaluated on realistic data, including well-known and established benchmark datasets. A comparative performance analysis is then made to assess the accuracy of the probabilistic inter-camera tracking method given a number of different simulated and real quality levels of the underlying person detection and feature extraction components. The experiments are made using a simulated virtual environment involving multiple persons in an indoor surveillance scenario.
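
The inter-camera tracking can be thought of as a discrete Bayes filter over the camera-topology graph: a prediction step diffuses the location belief along graph edges, and an update step re-weights it with the appearance likelihood of a new detection. The sketch below is a toy illustration of that idea with a hypothetical four-camera topology, not the paper's actual model.

```python
import numpy as np

# Hypothetical 4-camera topology: transition[i][j] is the probability that a person
# last seen at camera i next appears at camera j (rows sum to 1, including staying put).
transition = np.array([
    [0.6, 0.3, 0.1, 0.0],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.2, 0.5, 0.2],
    [0.0, 0.1, 0.3, 0.6],
])

def predict(belief):
    """Diffuse the location belief along the camera graph (prediction step)."""
    return belief @ transition

def update(belief, appearance_likelihood):
    """Re-weight by how well the observed appearance features match the person."""
    posterior = belief * appearance_likelihood
    return posterior / posterior.sum()

# One tracking step: the person was believed to be at camera 0, then a detection
# with a strong appearance match is reported by camera 1.
belief = np.array([1.0, 0.0, 0.0, 0.0])
belief = predict(belief)
belief = update(belief, appearance_likelihood=np.array([0.1, 0.8, 0.05, 0.05]))
print(belief.argmax())  # most likely current camera
```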


International Conference on Distributed, Ambient and Pervasive Interactions | 2013

How to Click in Mid-Air

Florian van de Camp; Alexander Schick; Rainer Stiefelhagen

In this paper, we investigate interactions with distant interfaces. In particular, we focus on how to issue mouse-click-like commands in mid-air, and we propose a taxonomy for distant one-arm clicking gestures. The gestures are divided into three main groups based on the part of the arm that is responsible for the gesture: the fingers, the hand, or the arm. We evaluated nine specific gestures in a Wizard of Oz study and asked participants to rate each gesture using a TLX questionnaire as well as to give an overall ranking. Based on the evaluation, we identified groups of gestures of varying acceptability that can serve as a reference for interface designers to select the most suitable gesture.
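
One way to make the taxonomy concrete is a small record type keyed by the responsible body part. The three groups come from the abstract; the example gestures and fields below are hypothetical placeholders, not the nine gestures evaluated in the study.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class BodyPart(Enum):
    """Top level of the taxonomy: the part of the arm responsible for the gesture."""
    FINGERS = "fingers"
    HAND = "hand"
    ARM = "arm"

@dataclass
class ClickGesture:
    name: str
    group: BodyPart
    tlx_score: Optional[float] = None   # workload rating from a TLX questionnaire
    rank: Optional[int] = None          # overall preference rank from a user study

# Hypothetical entries for illustration only.
catalog = [
    ClickGesture("finger pinch", BodyPart.FINGERS),
    ClickGesture("hand push", BodyPart.HAND),
    ClickGesture("arm extension", BodyPart.ARM),
]
```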


Intelligent User Interfaces | 2013

GlueTK: a framework for multi-modal, multi-display human-machine-interaction

Florian van de Camp; Rainer Stiefelhagen

As new input modalities allow interaction not only in front of a single display but throughout the whole room, application developers face new challenges. They have to handle many new input modalities, each with its own interface and pre-processing requirements, deal with multiple displays, and manage applications that are distributed across multiple machines. We present glueTK, a framework that abstracts away the complexities of these input modalities, allows the design of interfaces for a wide range of display sizes, and makes the distribution across multiple machines transparent to the developer as well as the user. With an example application, we demonstrate the wide range of input modalities glueTK can support and the functionality this enables. GlueTK moves away from the focus on point- and touch-like input modalities, enabling the design of applications tailored towards interactive rooms instead of the traditional desktop environment.
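
The kind of abstraction described here can be sketched as a modality-agnostic event type plus a dispatcher, so application code never sees which sensor produced an input. This is not glueTK's actual API; the class and field names below are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PointerEvent:
    """Modality-agnostic event: every input source is normalized to this shape."""
    x: float           # position in display-independent [0, 1] coordinates
    y: float
    source: str        # e.g. "touch", "pointing", "gaze" (illustrative names)
    display_id: int    # which of the connected displays the event targets

class EventBus:
    """Tiny dispatcher standing in for a framework-level input abstraction layer."""
    def __init__(self) -> None:
        self._handlers: List[Callable[[PointerEvent], None]] = []

    def subscribe(self, handler: Callable[[PointerEvent], None]) -> None:
        self._handlers.append(handler)

    def publish(self, event: PointerEvent) -> None:
        for handler in self._handlers:
            handler(event)

bus = EventBus()
bus.subscribe(lambda e: print(f"cursor on display {e.display_id} at ({e.x:.2f}, {e.y:.2f}) via {e.source}"))
bus.publish(PointerEvent(0.4, 0.6, source="pointing", display_id=1))
```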


International Conference on Human-Computer Interaction | 2017

Interaction with Three Dimensional Objects on Diverse Input and Output Devices: A Survey

Adrian Heinrich Hoppe; Florian van de Camp; Rainer Stiefelhagen

With the emerging technologies of Virtual and Augmented Reality (VR/AR) and the increasing performance of mobile and desktop devices, the number of 3D-based applications is growing rapidly. This 3D content demands efficient and well-suited 3D interaction. There are currently many manipulation techniques for different input and output devices, such as mouse, touchscreen, gestures, 2D monitors, or 3D head-mounted displays (HMDs), but there is no general overview covering all interaction techniques. This paper delivers an extensive overview of different approaches and classifies them according to input device and functionality (translation, rotation, scaling, with discrete-mode or modeless interaction, uni- or bi-manual). Where available, evaluation results or comparisons to other techniques are presented. Each technique is then rated with respect to speed, beginner-friendliness, and mental and physical demand.
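
The classification scheme maps naturally onto a small record type. The fields below mirror the dimensions named in the abstract (input device, functionality, discrete vs. modeless, uni- vs. bi-manual); the example entry is hypothetical and not taken from the survey.

```python
from dataclasses import dataclass
from typing import Set

@dataclass
class TechniqueRecord:
    """One entry in a survey-style classification (field values are illustrative)."""
    name: str
    input_device: str        # e.g. "mouse", "touchscreen", "gesture", "HMD"
    functionality: Set[str]  # subset of {"translation", "rotation", "scaling"}
    modeless: bool           # True if no discrete mode switch is required
    bimanual: bool           # True if the technique uses both hands

example = TechniqueRecord(
    name="arcball rotation",     # hypothetical entry, not taken from the survey
    input_device="mouse",
    functionality={"rotation"},
    modeless=True,
    bimanual=False,
)
```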


International Conference on Human-Computer Interaction | 2016

Combining Low-Cost Eye Trackers for Dual Monitor Eye Tracking

Sebastian Balthasar; Manuel Martin; Florian van de Camp; Jutta Hild; Jürgen Beyerer

The increasing use of multiple screens in everyday settings creates a demand for multi-monitor eye tracking. Current solutions are complex and, for many use cases, prohibitively expensive. By combining two low-cost single-monitor eye trackers, we have created a dual-monitor eye tracker requiring only minor software modifications to the single-monitor version. The results of a user study, which compares the same eye trackers in a single-monitor and a dual-monitor setup, show that the combined system can accurately estimate the user's gaze across two screens. The presented approach gives insight into low-cost alternatives for multi-monitor eye tracking and provides a basis for more complex setups incorporating even more screens.
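
Conceptually, the combination step only has to merge two per-monitor gaze estimates into one virtual-desktop coordinate, falling back to whichever tracker currently has a valid estimate. The sketch below assumes side-by-side monitors with a known resolution and normalized tracker output; it is not the authors' software modification.

```python
from typing import Optional, Tuple

SCREEN_W, SCREEN_H = 1920, 1080   # assumed per-monitor resolution, monitors side by side

def combine_gaze(left: Optional[Tuple[float, float]],
                 right: Optional[Tuple[float, float]]) -> Optional[Tuple[int, int]]:
    """Merge two per-monitor gaze estimates into virtual-desktop pixel coordinates.

    `left` / `right` are normalized (x, y) in [0, 1] reported by the tracker under
    each monitor, or None when that tracker currently has no valid estimate.
    """
    if left is not None:
        x, y = left
        return int(x * SCREEN_W), int(y * SCREEN_H)
    if right is not None:
        x, y = right
        return int(x * SCREEN_W) + SCREEN_W, int(y * SCREEN_H)  # shift by the left screen's width
    return None

print(combine_gaze(None, (0.25, 0.5)))   # gaze falls on the right-hand monitor
```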


Proceedings of the 1st ACM International Workshop on Multimodal Pervasive Video Analysis | 2010

Efficient person identification using active cameras in a smartroom

Florian van de Camp; Michael Voit; Rainer Stiefelhagen

Identifying people is an important task in a smart room environment. Active cameras are well suited for the task as they provide high-resolution images at almost any location in the room. Since active cameras observe only a small part of the field of view they are capable of, it is important to schedule their movement to use them efficiently for face identification. An effective way to schedule cameras is to always steer them towards persons currently looking at the camera. To realize this, we utilize the head pose estimation component of our smart room to schedule the active cameras. To overcome the problems associated with evaluating active camera setups, we propose an evaluation methodology that allows experiments to be repeated without invalidating the comparability of the results. The conducted experiments show a significant improvement in the number of face detections in the view of the active cameras when using a head-pose-based scheduling strategy compared to a less dynamic baseline scheduler.
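
The head-pose-based scheduling strategy can be approximated by steering each active camera toward the person whose gaze direction points most directly at it. The following sketch is a simplified stand-in for the scheduler described above; the 45-degree threshold and the input format are assumptions.

```python
import numpy as np

def schedule_cameras(camera_positions, person_positions, gaze_directions):
    """Assign each active camera the person most directly facing it.

    camera_positions: (C, 3) array; person_positions / gaze_directions: (P, 3) arrays,
    where gaze_directions are unit vectors from the room's head pose estimator.
    Returns a dict camera_index -> person_index (or None if nobody faces that camera).
    """
    assignment = {}
    for c, cam in enumerate(camera_positions):
        best, best_score = None, 0.0
        for p, (pos, gaze) in enumerate(zip(person_positions, gaze_directions)):
            to_cam = cam - pos
            to_cam = to_cam / np.linalg.norm(to_cam)
            score = float(np.dot(gaze, to_cam))      # 1.0 = looking straight at the camera
            if score > best_score:
                best, best_score = p, score
        # Only steer when the person roughly faces the camera (within ~45 degrees).
        assignment[c] = best if best_score > np.cos(np.radians(45)) else None
    return assignment
```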


International Conference on Human-Computer Interaction | 2015

Learning to Juggle in an Interactive Virtual Reality Environment

Tobias Kahlert; Florian van de Camp; Rainer Stiefelhagen

Virtual reality environments are great tools for training, as for many tasks they are very cheap compared to on-site training. While the focus has mostly been on the visual experience, we present a system that combines real-world interactions with the virtual visual world to train motor skills that are applicable to the real world. Body pose tracking is combined with an Oculus Rift to create such an interactive virtual environment. As an example application, we taught users to juggle using a virtual training course. A third of the users were able to immediately transfer the newly acquired skills and juggle with real balls.


Emerging Technologies and Factory Automation | 2014

A flexible context-aware assistance system for industrial applications using camera based localization

Jahanzaib Imtiaz; Nils Koch; Holger Flatt; Jürgen Jasperneite; Michael Voit; Florian van de Camp

Within a manufacturing system, a smart human-machine interface reduces the chances of human error and helps users make informed decisions, especially in critical situations. This paper presents a concept for a flexible, context-specific assistance system for industrial applications using camera-based localization. As a central element, a modular and service-oriented context-aware system aggregates relevant data from different sources. An object recognition service applies image analysis techniques to a video stream, captured from a top-mounted multi-camera system, to detect a person's location and associate it with a mobile device. A semantically enriched OPC UA server provides access to process data, and a web service provides the connection to a user interface hosted on a tablet PC. A case study applying the proposed solution to maintenance support and indoor navigation is implemented as a proof of concept.
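
The association of a detected person with a mobile device can be sketched as a nearest-neighbour pairing on floor coordinates. The function below is an illustrative stand-in for the object recognition service's association step; the ids, positions, and the 1.5 m cut-off are assumptions.

```python
import math

def associate_devices(persons, devices, max_dist=1.5):
    """Pair each detected person with the nearest registered mobile device.

    persons / devices: dicts mapping an id to an (x, y) floor position in metres
    (person positions coming from the top-mounted multi-camera localization).
    Returns {person_id: device_id or None}; pairs farther apart than max_dist
    are left unassigned.
    """
    pairing = {}
    for pid, (px, py) in persons.items():
        best, best_d = None, max_dist
        for did, (dx, dy) in devices.items():
            d = math.hypot(px - dx, py - dy)
            if d < best_d:
                best, best_d = did, d
        pairing[pid] = best
    return pairing

print(associate_devices({"worker_1": (2.0, 3.1)}, {"tablet_A": (2.2, 3.0), "tablet_B": (8.0, 1.0)}))
```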

Collaboration


Dive into Florian van de Camp's collaborations.

Top Co-Authors

Rainer Stiefelhagen, Karlsruhe Institute of Technology
Adrian Heinrich Hoppe, Karlsruhe Institute of Technology
Michael Voit, Karlsruhe Institute of Technology
Roland Reeb, Karlsruhe Institute of Technology
Dennis Gill, Karlsruhe Institute of Technology
Gunnar Strentzsch, Karlsruhe Institute of Technology
Jahanzaib Imtiaz, Ostwestfalen-Lippe University of Applied Sciences
Keni Bernardin, Karlsruhe Institute of Technology
Leonard Otto, Karlsruhe Institute of Technology
Patrick Schührer, Karlsruhe University of Applied Sciences