Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mathias Kölsch is active.

Publication


Featured research published by Mathias Kölsch.


Communications of the ACM | 2011

Vision-based hand-gesture applications

Juan P. Wachs; Mathias Kölsch; Helman Stern; Yael Edan

Body posture and finger pointing are a natural modality for human-machine interaction, but first the system must know what it's seeing.


International Conference on Pattern Recognition | 2004

Analysis of rotational robustness of hand detection with a Viola-Jones detector

Mathias Kölsch; Matthew Turk

The research described in this paper analyzes the in-plane rotational robustness of the Viola-Jones object detection method when used for hand appearance detection. We determine the rotational bounds for training and detection that achieve undiminished performance without an increase in classifier complexity. The result (up to 15° total) differs from the method's performance on faces (30° total). We found that randomly rotating the training data within these bounds allows for detection rates about one order of magnitude better than those achieved when training on strictly aligned data. The results imply both savings in training costs and increased naturalness and comfort of vision-based hand gesture interfaces.
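
The paper's practical recipe, randomly rotating strictly aligned training samples within the measured bound, is straightforward to reproduce as a data-augmentation step. Below is a minimal sketch in Python with OpenCV; the symmetric ±7.5° split of the 15° total bound, the file names, and the sample count are illustrative assumptions, not taken from the paper.

```python
import cv2
import numpy as np

# Reported rotational bound for hands: ~15 degrees total in-plane
# (vs. ~30 degrees for faces). Assumption: split symmetrically.
ROTATION_BOUND_DEG = 7.5

def randomly_rotate(image, bound_deg=ROTATION_BOUND_DEG, rng=None):
    """Rotate an image by a random in-plane angle within +/- bound_deg.

    Augmenting strictly aligned samples this way lets the trained
    cascade tolerate small in-plane rotations at detection time.
    """
    rng = rng or np.random.default_rng()
    angle = rng.uniform(-bound_deg, bound_deg)
    h, w = image.shape[:2]
    rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    return cv2.warpAffine(image, rot, (w, h), borderMode=cv2.BORDER_REPLICATE)

if __name__ == "__main__":
    sample = cv2.imread("hand_sample.png")  # hypothetical upright hand crop
    for i in range(10):
        cv2.imwrite(f"hand_sample_rot{i}.png", randomly_rotate(sample))
```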


International Conference on Mobile and Ubiquitous Systems: Networking and Services | 2004

Vision-based interfaces for mobility

Mathias Kölsch; Matthew Turk; Tobias Höllerer

Vision-based user interfaces are a feasible and advantageous modality for wearable computers. To substantiate this claim, we present a robust real-time hand gesture recognition system that is capable of being the sole input provider for a demonstration application. It achieves usability and interactivity even when both the head-worn camera and the object of interest are in motion. We describe a set of general gesture-based interaction styles and explore their characteristics in terms of task suitability and the computer vision algorithms required for their recognition. Preliminary evaluation of our prototype system leads to the conclusion that vision-based interfaces have achieved the maturity necessary to help overcome some limitations of more traditional mobile user interfaces.


IEEE Computer Graphics and Applications | 2006

Multimodal interaction with a wearable augmented reality system

Mathias Kölsch; Ryan Bane; Tobias Höllerer; Matthew Turk

An augmented reality system enhances a mobile user's situational awareness and provides new visualization functionality. The custom-built multimodal interface provides access to information encountered in urban environments. In this article, we detail our experiences with various input devices and modalities and discuss their advantages and drawbacks in the context of interaction tasks in mobile computing. We show how we integrated the input channels to use the modalities beneficially and how this enhances the interface's overall usability.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2003

The Postural Comfort Zone for Reaching Gestures

Mathias Kölsch; Andrew C. Beall; Matthew Turk

We have proposed a method for objective assessment of postural comfort (Kölsch et al., 2003). We defined comfort as the range of postures that is voluntarily assumed despite the availability of other postures. Designing user interfaces within the limits of comfort zones can avert risks associated with unknown alternative use patterns of the interface. Here we report on a user study that investigated the comfort zone for free-hand gestures in the horizontal plane at about stomach height. This space is of particular interest to novel technologies such as gesture recognition and virtual reality. The results are in line with previous studies on postural discomfort, but improve on resolution and are not based on subjective, questionnaire-based data acquisition. This study also serves as an example of how to design studies for comfort evaluation.


Computer Vision and Image Understanding | 2007

Guest Editorial: Special issue on vision for human-computer interaction

Mathias Kölsch; Vladimir Pavlovic; Branislav Kisacanin; Thomas S. Huang

We are at the beginning of an unprecedented growth period for computer vision. Although image processing and machine vision have long had established roles in manufacturing and industrial automation, only now are we witnessing an increase in the number of applications relying on image understanding. Computer vision technologies have become more prevalent in the past decade in both the commercial and the consumer markets. Technology-friendly areas have experienced the most noticeable influx of computer vision techniques to provide more and better services, particularly in the gaming and automotive industries, but also in medicine, security and space exploration. This special issue of Computer Vision and Image Understanding highlights one particularly promising yet challenging task for computer vision: the facilitation of human-computer interaction (HCI). Vision is an appealing input "device" owing to its non-invasiveness, its small form factor while potentially observing a large space, its ubiquity and its software-based configurability. Vision-based interfaces (VBI) offer more natural ways to interact, akin to human-human interaction, and the flexibility to provide disabled computer users with highly specialized means of interaction. The trend is clear, as Professor Matthew Turk, a leading HCI researcher, summarizes: "There has been growing interest in turning the camera around and using computer vision to look at people, that is, to detect and recognize human faces, track heads, faces, hands, and bodies, analyze facial expression and body movement, and recognize gestures" (Communications of the ACM, January 2004/Vol. 47, No. 1). Although vision-based HCI as a research area emerged almost two decades ago, it has now reached a level of maturity that makes it a serious contender for building interaction devices and for implementing interaction means. A much larger and vastly improved array of methods, faster and cheaper computers, better imaging chips and optics, coupled with a more detailed understanding of the human visual system, have brought forth such VBIs as the EyeToy, driver drowsiness monitoring, vehicle occupant detection for safe airbag deployment, and surgery guidance.


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2003

An Objective Measure for Postural Comfort

Mathias Kölsch; Andrew C. Beall; Matthew Turk

Biomechanics determines the physical range in which humans can move their bodies. Human factors research delineates a subspace in which humans can operate without experiencing musculoskeletal strain, fatigue or discomfort. We claim that there is an even tighter space, which we call the comfort zone. It is defined as the range of postures adopted voluntarily, despite the availability of other postures. We introduce a measurable, objective foundation for comfort, which was previously assumed equivalent to the absence of discomfort, a subjective quantity. Interfaces designed outside a user's comfort zone can prompt the adoption of alternative use patterns, which are often less favorable because they gain comfort at the price of an unnoticed potential of injury. Designing interfaces within the limits of comfort zones can avert these risks.


Advanced Concepts for Intelligent Vision Systems | 2010

An Appearance-Based Prior for Hand Tracking

Mathias Kölsch

Reliable hand detection and tracking in passive 2D video still remains a challenge. Yet the consumer market for gesture-based interaction is expanding rapidly, and surveillance systems that can deduce fine-grained human activities involving hand and arm postures are in high demand. In this paper, we present a hand tracking method that does not require reliable detection. We built it on top of “Flocks of Features,” which combines grey-level optical flow, a “flocking” constraint, and a learned foreground color distribution. By adding probabilistic (instead of binary classified) detections based on grey-level appearance as an additional image cue, we show improved tracking performance despite rapid hand movements and posture changes. This helps overcome tracking difficulties in texture-rich and skin-colored environments, improving performance on a 10-minute collection of video clips from 75% to 86% (see examples on our website).
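
The "Flocks of Features" tracker that this work extends can be sketched with OpenCV's pyramidal Lucas-Kanade optical flow: many features move independently, but any member straying too far from the flock median is relocated. The sketch below is a simplified illustration under assumed thresholds; it omits the learned color distribution and the probabilistic appearance cue that the paper adds.

```python
import cv2
import numpy as np

MIN_DIST = 4.0     # assumed: minimum pixel spacing between flock members
MAX_SPREAD = 60.0  # assumed: maximum distance from the flock median

def track_flock(prev_gray, gray, points):
    """One simplified Flocks-of-Features update step.

    points: Nx1x2 float32 array of feature locations in prev_gray.
    Features are advanced with pyramidal Lucas-Kanade; lost or straying
    members are pulled back near the flock median (the real method
    respawns them on skin-colored, trackable pixels instead).
    """
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, points, None, winSize=(15, 15), maxLevel=2)
    new_pts = new_pts.reshape(-1, 2)
    ok = status.reshape(-1).astype(bool)
    if not ok.any():  # whole flock lost; caller should reinitialize
        return points, None
    median = np.median(new_pts[ok], axis=0)
    rng = np.random.default_rng()
    for i in range(len(new_pts)):
        if not ok[i] or np.linalg.norm(new_pts[i] - median) > MAX_SPREAD:
            new_pts[i] = median + rng.uniform(-MIN_DIST, MIN_DIST, 2)
    return new_pts.reshape(-1, 1, 2).astype(np.float32), median
```

The flocking constraint is what lets individual features fail without losing the hand as a whole: outliers are sacrificed and re-seeded while the flock's median continues to follow the hand.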


Journal of Field Robotics | 2013

Variable Resolution Search with Quadrotors: Theory and Practice

Stefano Carpin; Derek Burch; Nicola Basilico; Timothy H. Chung; Mathias Kölsch

This paper presents a variable-resolution framework for autonomously searching for stationary targets in a bounded area. Theoretical formulations are described for a probabilistic quadtree data structure, which incorporates imperfect Bayesian (false-positive and false-negative) detections and informs the searcher's route by optimizing information gain. Live-fly field experimentation results using a quadrotor unmanned aerial vehicle validate the proposed methodologies and demonstrate an integrated system with autonomous control and embedded object detection for probabilistic search in realistic operational settings. Lessons learned from these field trials include a characterization of altitude-dependent detection performance, and we also present a benchmark data set of outdoor aerial imagery for search and detection applications.
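
The belief update at the heart of the probabilistic quadtree, folding a false-positive/false-negative detection model into each cell's target probability via Bayes' rule, can be illustrated on its own. A minimal sketch; the error rates and observation sequence are placeholder values, not the paper's.

```python
# Bayesian update of P(target in cell) given one sensor reading with
# known false-positive and false-negative rates (placeholder values).
P_FP = 0.10  # assumed: P(detection | no target)
P_FN = 0.20  # assumed: P(no detection | target)

def update_belief(prior, detected, p_fp=P_FP, p_fn=P_FN):
    """Posterior target probability for one cell after one observation."""
    if detected:
        likelihood_t, likelihood_not = 1.0 - p_fn, p_fp
    else:
        likelihood_t, likelihood_not = p_fn, 1.0 - p_fp
    evidence = likelihood_t * prior + likelihood_not * (1.0 - prior)
    return likelihood_t * prior / evidence

if __name__ == "__main__":
    belief = 0.5  # uninformative prior for a cell
    for obs in (True, True, False):  # hypothetical detection sequence
        belief = update_belief(belief, obs)
        print(f"observation={obs} -> belief={belief:.3f}")
```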


Archive | 2009

Hardware Considerations for Embedded Vision Systems

Mathias Kölsch; Steven Butner

Image processing and computer vision do not start with a frame in the frame buffer. Embedded vision systems need to consider the entire real-time vision pipeline from image acquisition to result output, including the operations that are to be performed on the images. This chapter gives an overview of this pipeline and the involved hardware components. It discusses several types of image sensors as well as their readout styles, speeds, and interface styles. Interconnection options for such sensors are presented, with low-voltage differential signaling highlighted due to performance and prevalence. Typical image operations are overviewed in the context of an embedded system containing one or more sensors and their interfaces. Several hardware storage and processing components (including DSPs, various system-on-a-chip combinations, FPGAs, GPUs, and memories) are explained as building blocks from which a vision system might be realized. Component-to-component relationships, data and control pathways, and signaling methods between and among these components are discussed, and specific organizational approaches are compared and contrasted.
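
As a software-level analogue of the acquisition-to-output pipeline the chapter describes, the sketch below runs a minimal capture, process, output loop. OpenCV's generic camera interface stands in for the embedded sensor interfaces discussed; the device index and the edge-detection step are placeholder assumptions.

```python
import cv2

# Minimal acquisition -> processing -> output loop, a software stand-in
# for the embedded pipeline (sensor readout, image ops, result output).
cap = cv2.VideoCapture(0)  # assumption: camera at device index 0

while True:
    ok, frame = cap.read()  # image acquisition (frame into buffer)
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # example image op
    edges = cv2.Canny(gray, 100, 200)               # example image op
    cv2.imshow("edges", edges)                      # result output
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```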

Collaboration


Dive into Mathias Kölsch's collaboration.

Top Co-Authors

Matthew Turk
University of California

Amela Sadagic
Naval Postgraduate School

Neil C. Rowe
Naval Postgraduate School

Kyoungmin Lee
Pohang University of Science and Technology