Publication


Featured research published by Claudio S. Pinhanez.


ubiquitous computing | 2001

The Everywhere Displays Projector: A Device to Create Ubiquitous Graphical Interfaces

Claudio S. Pinhanez

This paper introduces the Everywhere Displays projector, a device that uses a rotating mirror to steer the light from an LCD/DLP projector onto different surfaces of an environment. Issues of brightness, oblique projection distortion, focus, obstruction, and display resolution are examined. Solutions to some of these problems are described, together with a plan to use a video camera to allow device-free interaction with the projected images. The ED-projector is a practical way to create ubiquitous graphical interfaces to access computational power and networked data. In particular, it is envisioned as an alternative to carrying laptops and to installing displays in furniture, objects, and walls. In addition, the use of ED-projectors to augment reality without the use of goggles is examined and illustrated with examples.


Presence: Teleoperators and Virtual Environments | 1999

The KidsRoom: A Perceptually-Based Interactive and Immersive Story Environment

Aaron F. Bobick; Stephen S. Intille; James W. Davis; Freedom Baird; Claudio S. Pinhanez; Lee W. Campbell; Yuri A. Ivanov; Arjan Schütte; Andrew D. Wilson

The KidsRoom is a perceptually-based, interactive, narrative playspace for children. Images, music, narration, light, and sound effects are used to transform a normal child's bedroom into a fantasy land where children are guided through a reactive adventure story. The fully automated system was designed with the following goals: (1) to keep the focus of user action and interaction in the physical and not virtual space; (2) to permit multiple, collaborating people to simultaneously engage in an interactive experience combining both real and virtual objects; (3) to use computer-vision algorithms to identify activity in the space without requiring the participants to wear any special clothing or devices; (4) to use narrative to constrain the perceptual recognition, and to use perceptual recognition to allow participants to drive the narrative; and (5) to create a truly immersive and interactive room environment. We believe the KidsRoom is the first multi-person, fully-automated, interactive, narrative environment ever constructed using non-encumbering sensors. This paper describes the KidsRoom, the technology that makes it work, and the issues that were raised during the system's development. A demonstration of the project, which complements the material presented here and includes videos, images, and sounds from each part of the story, is available at .


computer vision and pattern recognition | 1998

Human action detection using PNF propagation of temporal constraints

Claudio S. Pinhanez; Aaron F. Bobick

In this paper we develop a representation for the temporal structure inherent in human actions and demonstrate an effective method for using that representation to detect the occurrence of actions. The temporal structure of the action, sub-actions, events, and sensor information is described using a constraint network based on Allen's interval algebra. We map these networks onto a simpler network over the 3-valued domain {past, now, future} (a PNF-network) to allow fast detection of actions and sub-actions. The occurrence of an action is computed by considering the minimal domain of its PNF-network, under constraints imposed by the current state of the sensors and the previous states of the network. We illustrate the approach with examples, showing that a major advantage of PNF propagation is the detection and removal of inconsistent situations.
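The minimal-domain computation described above can be sketched as a small constraint-propagation loop over the 3-valued {past, now, future} domain. The encoding of Allen's "meets" relation and all names below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch of PNF propagation as constraint propagation over the
# 3-valued domain {past, now, future}; an assumed toy version, not the paper's code.

P, N, F = "past", "now", "future"

def pnf_propagate(domains, constraints):
    """Prune each interval's PNF domain to a fixed point (the minimal domain).
    domains: {interval: set of states}
    constraints: list of (i, j, allowed) where allowed is a set of
                 permitted (state_of_i, state_of_j) pairs.
    An empty resulting domain signals an inconsistent situation."""
    changed = True
    while changed:
        changed = False
        for i, j, allowed in constraints:
            # keep only states of i supported by some state of j, and vice versa
            new_i = {s for s in domains[i] if any((s, t) in allowed for t in domains[j])}
            new_j = {t for t in domains[j] if any((s, t) in allowed for s in domains[i])}
            if new_i != domains[i] or new_j != domains[j]:
                domains[i], domains[j] = new_i, new_j
                changed = True
    return domains

# Assumed encoding of Allen's "a meets b" (a ends exactly when b begins),
# as the permitted (state_of_a, state_of_b) pairs:
MEETS = {(F, F), (N, F), (P, N), (P, P)}

# A sensor reports the action is happening NOW; propagation forces the
# sub-action that "meets" it to be already in the past.
doms = pnf_propagate({"sub": {P, N, F}, "action": {N}},
                     [("sub", "action", MEETS)])
print(doms["sub"])  # -> {'past'}
```

Because domains only ever shrink, the loop terminates, which is what makes the per-frame detection fast.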


pervasive computing and communications | 2003

Steerable interfaces for pervasive computing spaces

Gopal Pingali; Claudio S. Pinhanez; Anthony Levas; Rick Kjeldsen; Mark Podlaseck; Han Chen; Noi Sukaviriya

This paper introduces a new class of interactive interfaces that can be moved around to appear on ordinary objects and surfaces anywhere in a space. By dynamically adapting the form, function, and location of an interface to suit the context of the user, such steerable interfaces have the potential to offer radically new and powerful styles of interaction in intelligent pervasive computing spaces. We propose defining characteristics of steerable interfaces and present the first steerable interface system that combines projection, gesture recognition, user tracking, environment modeling and geometric reasoning components within a system architecture. Our work suggests that there is great promise and rich potential for further research on steerable interfaces.


international conference on computer vision systems | 2003

Dynamically reconfigurable vision-based user interfaces

Rick Kjeldsen; Anthony Levas; Claudio S. Pinhanez

A significant problem with vision-based user interfaces is that they are typically developed and tuned for one specific configuration - one set of interactions at one location in the world and in image space. This paper describes methods and an architecture for a vision system that supports dynamic reconfiguration of interfaces, changing the form and location of the interaction on the fly. We accomplish this by decoupling the functional definition of the interface from the specification of its location in the physical environment and in the camera image. Applications create a user interface by requesting a configuration of predefined widgets. The vision system assembles a tree of image processing components to fulfill the request, using, if necessary, shared computational resources. This interface can be moved to any planar surface in the camera's field of view. We illustrate the power of such a reconfigurable vision-based interaction system in the context of a prototype application involving projected interactive displays.
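The decoupling described above can be illustrated with a toy sketch: the functional widget definition carries no coordinates, while a separate placement binding maps image-space events back to widgets and can be rebound on the fly. All class and field names here are assumptions for illustration, not the system's API:

```python
# Toy sketch of decoupling a widget's function from its image-space placement.
from dataclasses import dataclass

@dataclass(frozen=True)
class Widget:
    """Functional definition only: what the widget is, not where it is."""
    name: str
    kind: str  # e.g. "button", "slider"

@dataclass
class Placement:
    """Where the widget currently appears in image space (pixels)."""
    x: int
    y: int
    w: int
    h: int

class InterfaceConfig:
    """Binds widgets to placements; rebinding moves the interface on the fly."""
    def __init__(self):
        self.bindings = {}

    def place(self, widget, placement):
        self.bindings[widget.name] = (widget, placement)

    def move(self, name, placement):
        widget, _ = self.bindings[name]
        self.bindings[name] = (widget, placement)

    def hit(self, x, y):
        """Map an image-space event back to the functional widget it touches."""
        for widget, p in self.bindings.values():
            if p.x <= x < p.x + p.w and p.y <= y < p.y + p.h:
                return widget.name
        return None

ui = InterfaceConfig()
ui.place(Widget("play", "button"), Placement(10, 10, 50, 50))
print(ui.hit(20, 20))   # -> play
ui.move("play", Placement(200, 200, 50, 50))
print(ui.hit(20, 20))   # -> None (the interface has been steered elsewhere)
```

The point of the split is that the application only ever talks to `Widget`-level events, so the vision pipeline can retarget the same interface to a new surface without the application changing.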


human factors in computing systems | 1997

Interval scripts: a design paradigm for story-based interactive systems

Claudio S. Pinhanez; Kenji Mase; Aaron F. Bobick

A system to manage human interaction in immersive environments was designed and implemented. The interaction is defined by an interval script, which describes the relationships between the time intervals that command actuators or gather information from sensors. With this formalism, reactive, linear, and tree-like interaction can be equally well described, as well as less regular story and interaction patterns. Control of actuators and sensors is accomplished using PNF-restriction, a calculus which propagates the sensed information through the interval script, determining which intervals are or should be happening at each moment. The prototype was used in an immersive, story-based interactive environment called SingSong, where a user or a performer tries to conduct four computer-character singers in spite of the hostility of one of them.


IEEE Computer | 2003

Fostering a symbiotic handheld environment

Mandayam Thondanur Raghunath; Chandrasekhar Narayanaswami; Claudio S. Pinhanez

Although researchers have already begun building the infrastructure to make a symbiotic environment of handheld systems and related devices possible, business needs will drive this technology's real growth.


human factors in computing systems | 2001

Using a steerable projector and a camera to transform surfaces into interactive displays

Claudio S. Pinhanez

The multi-surface interactive display projector (MSIDP) is a steerable projection system that transforms non-tethered surfaces into interactive displays. In an MSIDP, the display image is directed onto a surface by a rotating mirror. Oblique projection distortions are removed by a computer-graphics reverse-distortion process and user interaction (pointing and clicking) is achieved by detecting hand movements with a video camera. The MSIDP is a generic input/output device to be used in applications that require computer access from different locations of a space or computer action in the real world (such as locating objects). In particular, it can also be used to provide computer access in public spaces and to people with locomotive disabilities.
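The reverse-distortion step above can be sketched with the standard planar-homography model: estimate the 3x3 homography between the desired rectangular display and where the oblique projection actually lands, then pre-warp each frame with its inverse so the result appears rectilinear on the surface. This is an assumed sketch of the mechanism using a plain-NumPy DLT, with illustrative corner coordinates, not the MSIDP's actual pipeline:

```python
# Sketch of reverse-distortion via a planar homography (Direct Linear Transform).
import numpy as np

def homography(src, dst):
    """3x3 H mapping 4 src points to 4 dst points (exact for 4 correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector of the 8x9 system
    return H / H[2, 2]

def apply(H, pt):
    """Apply a homography to a 2D point (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Corners of the desired rectangular display on the surface...
desired = [(0, 0), (640, 0), (640, 480), (0, 480)]
# ...and where the raw (oblique) projection actually lands: a keystoned shape.
observed = [(30, 10), (610, 40), (580, 470), (50, 440)]

H = homography(desired, observed)     # models the oblique distortion
H_inv = np.linalg.inv(H)              # the reverse-distortion pre-warp

# Pre-warping with H_inv cancels the distortion: the round trip is the identity.
corner = apply(H, apply(H_inv, (640, 480)))
print(np.round(corner))               # -> [640. 480.]
```

In a real system the same warp would be applied to the whole frame buffer (e.g. as a texture mapping) rather than to individual points, and the correspondences would come from calibrating the steerable projector against the camera.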


Communications of The ACM | 2000

Perceptual user interfaces: the KidsRoom

Aaron F. Bobick; Stephen S. Intille; James W. Davis; Freedom Baird; Claudio S. Pinhanez; Lee W. Campbell; Yuri A. Ivanov; Arjan Schütte; Andrew D. Wilson

The KidsRoom is a fully automated and interactive narrative playspace for children developed at the MIT Media Laboratory. Built to explore the design of perceptually based interactive interfaces, the KidsRoom uses computer-vision action recognition simultaneously with computerized control of images, video, light, music, sound, and narration to guide children through a storybook adventure. Unlike most previous work in interactive environments, the KidsRoom does not require people in the space to wear any special clothing or hardware, and it can accommodate up to four people simultaneously. The system was designed to use computational perception to keep most interaction in the real, physical space even as participants interacted with virtual characters and scenes.

The KidsRoom, designed in the spirit of several popular children's books, is an interactive child's bedroom that stimulates imagination by responding to actions with images and sound to transform itself into a storybook world. Two of the bedroom walls resemble the real walls in a child's room, complete with real furniture, posters, and windows. The other two walls are large, back-projected video screens used to transform the appearance of the room environment. Four speakers and one amplifier project steerable sound effects, music, and narration into the space. Three video cameras overlooking the space provide input to computer-vision people-tracking and action-recognition algorithms. Computer-controlled theatrical lighting illuminates the space, and a microphone detects the volume of enthusiastic screams. The room is fully automated.

During the story, children interact with objects in the room, with one another, and with virtual creatures projected onto the walls. Perceptual recognition makes it possible for the room to respond to the physical actions of the children by appropriately moving the story forward, thereby creating a compelling interactive narrative experience. Conversely, the narrative context of the story makes it easier to develop context-dependent (and therefore more robust) action-recognition algorithms.

The story developed for the KidsRoom begins with a normal-looking bedroom. Children enter after being told to find out the magic word by asking the talking furniture that speaks when approached. When the children scream the magic word loudly, sounds and images transform the room into a mystical forest. The story narration prods the children to stay in a group and follow a path to a river (see the stone path (a) in the figure). Along the way, they encounter roaring monsters and must hide behind the bed to make the roars …


ubiquitous computing | 2005

To frame or not to frame: the role and design of frameless displays in ubiquitous applications

Claudio S. Pinhanez; Mark Podlaseck

A frameless display is a display with no perceptible boundaries; it appears to be embodied in the physical world. Frameless displays are created by projecting visual elements on a black background into a physical environment. By considering visual arts and design theory together with our own experience building about a dozen applications, we argue the importance of this technique in creating ubiquitous computer applications that are truly contextualized in the physical world. Nine different examples using frameless displays are described, providing the background for a systematization of frameless displays' pros and cons, together with a basic set of usage guidelines. The paper also discusses the differences and constraints on user interaction with visual elements in a frameless display.
