Publications


Featured research published by Rick Kjeldsen.


International Conference on Automatic Face and Gesture Recognition | 1996

Finding skin in color images

Rick Kjeldsen; John R. Kender

This paper describes the techniques used to separate the hand from a cluttered background in a gesture recognition system. Target colors are identified using a histogram-like structure called a Color Predicate, which is trained in real-time using a novel algorithm. Running on standard PC hardware, the segmentation is of sufficient speed and quality to support an interactive user interface. The method has shown its flexibility in a range of different office environments, segmenting users with many different skin-tones. Variations have been applied to other problems including finding face candidates in video sequences.
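The abstract's core idea, a trainable lookup table over color bins, can be sketched compactly. The bin count, the normalized r-g color space, and the vote-based training rule below are illustrative assumptions, not the paper's actual parameters or training algorithm.

```python
import numpy as np

BINS = 32  # histogram resolution per chromaticity channel (assumed)

def normalized_rg(image):
    """Project RGB pixels into normalized r-g chromaticity space."""
    rgb = image.astype(np.float64)
    s = rgb.sum(axis=-1, keepdims=True) + 1e-9
    chroma = rgb / s
    return chroma[..., 0], chroma[..., 1]  # r, g in [0, 1]

def train_predicate(image, skin_mask):
    """Accumulate votes per color bin: +1 for skin pixels, -1 for background."""
    r, g = normalized_rg(image)
    ri = np.clip((r * BINS).astype(int), 0, BINS - 1)
    gi = np.clip((g * BINS).astype(int), 0, BINS - 1)
    votes = np.zeros((BINS, BINS))
    np.add.at(votes, (ri[skin_mask], gi[skin_mask]), 1.0)
    np.add.at(votes, (ri[~skin_mask], gi[~skin_mask]), -1.0)
    return votes > 0  # boolean predicate table

def segment(image, predicate):
    """Label each pixel skin/non-skin with a single table lookup."""
    r, g = normalized_rg(image)
    ri = np.clip((r * BINS).astype(int), 0, BINS - 1)
    gi = np.clip((g * BINS).astype(int), 0, BINS - 1)
    return predicate[ri, gi]
```

Because segmentation reduces to one table lookup per pixel, this style of classifier can run at interactive rates on modest hardware, which is consistent with the speed claim in the abstract.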


International Conference on Automatic Face and Gesture Recognition | 1996

Toward the use of gesture in traditional user interfaces

Rick Kjeldsen; John R. Kender

This work describes the design of a functioning user interface based on visual recognition of hand gestures, and details its performance. In the interface, gesture replaces the mouse for many actions, including selecting, moving, and resizing windows. A camera below the screen observes the user. The hand is segmented from the background using color. Features of the hand's motion are extracted from the sequence of segmented images, and when needed the hand's pose is classified using a neural net. This information is parsed by a task-specific grammar. The system runs in real time on standard PC hardware. It has demonstrated its abilities with various users in several different office environments. Having experimented with a functioning gestural interface, the authors discuss the practicality and best applications of this technology.
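The staged pipeline the abstract outlines (color segmentation, motion features, pose classification, grammar parsing) can be sketched as plain function composition. Every stage name and signature below is an assumed placeholder for illustration, not the paper's implementation.

```python
def run_pipeline(frames, segment, features, classify_pose, grammar):
    """Push a frame sequence through the four stages in the order described."""
    events = []
    prev = None
    for frame in frames:
        mask = segment(frame)            # color segmentation of the hand
        feat = features(mask, prev)      # motion features from consecutive masks
        pose = classify_pose(mask)       # stand-in for the neural-net classifier
        events.append((feat, pose))
        prev = mask
    return grammar(events)               # task-specific parse into UI actions
```

The value of this structure is that each stage can be swapped independently, e.g. replacing the pose classifier without touching segmentation or the grammar.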


Pervasive Computing and Communications | 2003

Steerable interfaces for pervasive computing spaces

Gopal Pingali; Claudio S. Pinhanez; Anthony Levas; Rick Kjeldsen; Mark Podlaseck; Han Chen; Noi Sukaviriya

This paper introduces a new class of interactive interfaces that can be moved around to appear on ordinary objects and surfaces anywhere in a space. By dynamically adapting the form, function, and location of an interface to suit the context of the user, such steerable interfaces have the potential to offer radically new and powerful styles of interaction in intelligent pervasive computing spaces. We propose defining characteristics of steerable interfaces and present the first steerable interface system that combines projection, gesture recognition, user tracking, environment modeling and geometric reasoning components within a system architecture. Our work suggests that there is great promise and rich potential for further research on steerable interfaces.


International Conference on Computer Vision Systems | 2003

Dynamically reconfigurable vision-based user interfaces

Rick Kjeldsen; Anthony Levas; Claudio S. Pinhanez

A significant problem with vision-based user interfaces is that they are typically developed and tuned for one specific configuration - one set of interactions at one location in the world and in image space. This paper describes methods and architecture for a vision system that supports dynamic reconfiguration of interfaces, changing the form and location of the interaction on the fly. We accomplish this by decoupling the functional definition of the interface from the specification of its location in the physical environment and in the camera image. Applications create a user interface by requesting a configuration of predefined widgets. The vision system assembles a tree of image processing components to fulfill the request, using, if necessary, shared computational resources. This interface can be moved to any planar surface in the camera's field of view. We illustrate the power of such a reconfigurable vision-based interaction system in the context of a prototype application involving projected interactive displays.
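One minimal way to realize the decoupling the abstract describes is to keep each widget's geometry in its own interface coordinates and map it into image space with a planar homography supplied at placement time, so "moving" the interface means swapping the homography. The class and helper below are illustrative assumptions, not the system's actual API.

```python
import numpy as np

def apply_homography(H, points):
    """Map Nx2 interface-space points into image space via a 3x3 homography."""
    H = np.asarray(H, float)
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    img = pts @ H.T
    return img[:, :2] / img[:, 2:3]   # divide out homogeneous coordinate

class TouchWidget:
    def __init__(self, corners):
        # functional definition: geometry in the widget's own coordinates
        self.corners = np.asarray(corners, float)

    def place(self, H):
        # reconfigure on the fly: same widget, new physical location
        return apply_homography(H, self.corners)
```

A unit-square button, for instance, can be placed on any planar surface in view simply by calling `place` with the homography calibrated for that surface.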


IBM Systems Journal | 2005

Improving web accessibility through an enhanced open-source browser

Vicki L. Hanson; Jonathan P. Brezin; Susan Crayne; Simeon Keates; Rick Kjeldsen; John T. Richards; Calvin Swart; Shari Trewin

The accessibilityWorks project provides software enhancements to the Mozilla™ Web browser and allows users to control their browsing environment. Although Web accessibility standards specify markup that must be incorporated for Web pages to be accessible, these standards do not ensure a good experience for all Web users. This paper discusses user controls that facilitate a number of adaptations that can greatly increase the usability of Web pages for a diverse population of users. In addition to transformations that change page presentation, innovations are discussed that enable mouse and keyboard input correction as well as vision-based control for users unable to use their hands for computer input.


International Conference on Computer Vision | 2001

Head gestures for computer control

Rick Kjeldsen

This paper explores the ways in which head gestures can be applied to the user interface. Four categories of gestural task are considered: pointing, continuous control, spatial selection and symbolic selection. For each category, the problem is examined in the abstract, focusing on human factors and an analysis of the task, then solutions are presented which take into consideration sensing constraints and computational efficiency. A hybrid pointer control algorithm is described that is better suited for facial pointing than either pure rate control or pure position control approaches. Variations of the algorithm are described for scrolling and selection tasks. The primary contribution is to address a full range of interactive head gestures using a consistent approach which focuses as much on user and task constraints as on sensing considerations.
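A hybrid controller of the kind the abstract mentions can be sketched by combining a position term (the pointer follows head motion directly) with a rate term (the pointer drifts while the head is held off center). The blend rule, gains, and dead zone below are assumptions for illustration, not the paper's algorithm.

```python
POS_GAIN = 4.0      # pointer pixels per unit of head motion (assumed)
RATE_GAIN = 30.0    # pixels per step per unit of head offset (assumed)
DEAD_ZONE = 0.1     # head offset below which rate control is off (assumed)

class HybridPointer:
    def __init__(self, x=0.0):
        self.x = x
        self.prev_head = None

    def update(self, head):
        """Advance the pointer one step given the current head offset."""
        if self.prev_head is not None:
            # position term: small head motions map directly to pointer motion
            self.x += POS_GAIN * (head - self.prev_head)
        if abs(head) > DEAD_ZONE:
            # rate term: sustained offset produces continuous drift
            off = head - DEAD_ZONE * (1 if head > 0 else -1)
            self.x += RATE_GAIN * off
        self.prev_head = head
        return self.x
```

The appeal of the hybrid is that small head motions give precise absolute positioning while a sustained lean covers large distances, which pure rate or pure position control cannot do simultaneously.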


International Conference on Pattern Recognition | 1990

Data and model driven foveation

Rick Kjeldsen; Ruud M. Bolle

A general framework for multiresolution visual recognition is introduced. The input is processed simultaneously at a coarse resolution throughout the image and at finer resolution within a small window. An approach for controlling the movement of the high-resolution window is described which allows for the unification of a variety of data and model-driven behavioral paradigms. Three modes have been implemented, one based on large unexplained areas in the data, one on conflicts in the object-model database, and one on a 2D space-filling algorithm. It is argued that this kind of multiresolution processing not only is useful in limiting the computational time, but also can be a deciding factor in making the entire vision problem a tractable and stable one. To demonstrate the approach, a class of 3D surface textures is introduced as a feature for recognition in the system considered. Surface texture recognition typically requires higher-resolution processing than required for the extraction of the underlying surface. As an example, surface texture is used to discriminate between a ping-pong ball and a golf ball.
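The data-driven mode, moving the high-resolution window to large unexplained areas, can be sketched as a scan over coarse cells. The cell size and the use of local variance as an "unexplained structure" score are assumed stand-ins for the paper's criteria.

```python
import numpy as np

COARSE = 8   # side of a coarse cell in pixels (assumed)

def next_fixation(image, explained):
    """Return the top-left corner of the next high-resolution window.

    `explained` is a boolean grid, one flag per coarse cell, marking
    cells already accounted for by the object-model database."""
    h, w = image.shape
    gh, gw = h // COARSE, w // COARSE
    best, best_cell = -1.0, (0, 0)
    for i in range(gh):
        for j in range(gw):
            if explained[i, j]:
                continue
            cell = image[i*COARSE:(i+1)*COARSE, j*COARSE:(j+1)*COARSE]
            score = cell.var()   # proxy for unexplained structure
            if score > best:
                best, best_cell = score, (i, j)
    return best_cell[0] * COARSE, best_cell[1] * COARSE
```

Only the winning cell is then processed at full resolution, which is how the scheme bounds computation while still resolving fine detail such as surface texture.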


Workshop on Perceptive User Interfaces | 2001

Design issues for vision-based computer interaction systems

Rick Kjeldsen; Jacob Hartman

Computer vision and other direct sensing technologies have progressed to the point where we can detect many aspects of a user's activity reliably and in real time. Simply recognizing the activity is not enough, however. If perceptual interaction is going to become a part of the user interface, we must turn our attention to the tasks we wish to perform and methods to perform them effectively. This paper attempts to further our understanding of vision-based interaction by looking at the steps involved in building practical systems, giving examples from several existing systems. We classify the types of tasks well suited to this type of interaction as pointing, control, or selection, and discuss interaction techniques for each class. We address the factors affecting the selection of the control action, and the various types of control signals that can be extracted from visual input. We present our design for widgets to perform different types of tasks, and techniques, similar to those used with established user interface devices, to give the user the type of control they need to perform the task well. We look at ways to combine individual widgets into Visual Interfaces that allow the user to perform these tasks both concurrently and sequentially.


International Conference on Multimedia and Expo | 2002

User-following displays

Gopal Pingali; Claudio S. Pinhanez; Tony Levas; Rick Kjeldsen; Mark Podlaseck

Traditionally, a user has positioned himself or herself in front of a display in order to access information from it. In this information age, life at work and even at home is often confined to the space in front of a display device that is the source of information or entertainment. This paper introduces another paradigm, in which the display follows the user rather than the user being tied to the display. We demonstrate how steerable projection and people tracking can be combined to achieve a display that automatically follows the user.
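At its simplest, combining people tracking with steerable projection reduces to choosing, at each moment, the projectable surface nearest the tracked user. The surface list, 2D floor-plan coordinates, and Euclidean distance metric below are illustrative assumptions, not the system's model.

```python
import math

# Hypothetical projectable surfaces at 2D floor-plan positions (assumed)
SURFACES = {"desk": (1.0, 0.5), "wall": (4.0, 2.0), "door": (6.0, 0.0)}

def follow(user_xy):
    """Steer the display to the projection surface closest to the user."""
    return min(SURFACES, key=lambda s: math.dist(SURFACES[s], user_xy))
```

A real system would also weigh surface visibility and projection geometry, but the tracker-drives-steering loop is the essential idea.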


Conference on Computers and Accessibility | 2006

Improvements in vision-based pointer control

Rick Kjeldsen

Vision-based head trackers have been around for some years and are even beginning to be commercialized, but problems remain with respect to usability. Users without the ability to use traditional pointing devices - the intended audience of such systems - have no alternative if the automatic bootstrapping process fails; there is room for improvement in face tracking; and the pointer movement dynamics do not support accurate and efficient pointing. This paper describes a novel head-tracking pointer that addresses these problems.
