
Publication


Featured research published by Lee W. Campbell.


Human Factors in Computing Systems | 1999

Embodiment in conversational interfaces: Rea

Justine Cassell; Timothy W. Bickmore; Mark Billinghurst; Lee W. Campbell; K. Chang; Hannes Högni Vilhjálmsson; Hao Yan

In this paper, we argue for embodied conversational characters as the logical extension of the metaphor of human-computer interaction as a conversation. We argue that the only way to fully model the richness of human face-to-face communication is to rely on conversational analysis that describes sets of conversational behaviors as fulfilling conversational functions, both interactional and propositional. We demonstrate how to implement this approach in Rea, an embodied conversational agent that is capable of both multimodal input understanding and output generation in a limited application domain. Rea supports both social and task-oriented dialogue. We discuss issues that need to be addressed in creating embodied conversational agents, and describe the architecture of the Rea interface.


Presence: Teleoperators and Virtual Environments | 1999

The KidsRoom: A Perceptually-Based Interactive and Immersive Story Environment

Aaron F. Bobick; Stephen S. Intille; James W. Davis; Freedom Baird; Claudio S. Pinhanez; Lee W. Campbell; Yuri A. Ivanov; Arjan Schütte; Andrew D. Wilson

The KidsRoom is a perceptually-based, interactive, narrative playspace for children. Images, music, narration, light, and sound effects are used to transform a normal child's bedroom into a fantasy land where children are guided through a reactive adventure story. The fully automated system was designed with the following goals: (1) to keep the focus of user action and interaction in the physical and not virtual space; (2) to permit multiple, collaborating people to simultaneously engage in an interactive experience combining both real and virtual objects; (3) to use computer-vision algorithms to identify activity in the space without requiring the participants to wear any special clothing or devices; (4) to use narrative to constrain the perceptual recognition, and to use perceptual recognition to allow participants to drive the narrative; and (5) to create a truly immersive and interactive room environment. We believe the KidsRoom is the first multi-person, fully-automated, interactive, narrative environment ever constructed using non-encumbering sensors. This paper describes the KidsRoom, the technology that makes it work, and the issues that were raised during the system's development. A demonstration of the project, which complements the material presented here and includes videos, images, and sounds from each part of the story is available at .
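Goal (4) above couples narrative and perception in both directions: the current story scene limits which actions the vision system needs to detect. The KidsRoom code is not public, so the following is only a minimal sketch, with hypothetical scene and action names drawn from the story description, of what scene-gated recognition could look like:

```python
# Hypothetical sketch: each story scene activates only the action
# detectors that are meaningful in it, which is one way narrative
# context can make recognition more robust.
SCENE_ACTIONS = {
    "bedroom": {"say_magic_word"},
    "forest": {"follow_path", "hide_behind_bed"},
    "river": {"row_boat"},
}

class StoryDirector:
    """Tracks the active scene and filters raw detections by it."""

    def __init__(self, scene="bedroom"):
        self.scene = scene

    def recognize(self, detections):
        # Keep only detections the current scene can respond to.
        return [d for d in detections if d in SCENE_ACTIONS[self.scene]]

    def advance(self, scene):
        # Recognized actions drive the narrative forward in turn.
        self.scene = scene
```

In the bedroom scene, a spurious "row_boat" detection is simply discarded; after advancing to the forest, "hide_behind_bed" becomes recognizable.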


International Conference on Automatic Face and Gesture Recognition | 1996

Invariant features for 3-D gesture recognition

Lee W. Campbell; David A. Becker; Ali Azarbayejani; Aaron F. Bobick; Alex Pentland

Ten different feature vectors are tested in a gesture recognition task which utilizes 3D data gathered in real-time from stereo video cameras, and HMMs for learning and recognition of gestures. Results indicate velocity features are superior to positional features, and partial rotational invariance is sufficient for good performance.
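The paper's exact ten feature vectors are not reproduced here, but the two key findings, velocity beating position and partial rotational invariance sufficing, can be illustrated with a small sketch. The function below (hypothetical, not the authors' code) turns a 3-D hand trajectory into velocity features expressed in cylindrical coordinates, which makes them invariant to rotation about the vertical axis, one plausible form of partial rotational invariance:

```python
import numpy as np

def velocity_features(traj, dt=1.0 / 30.0):
    """Finite-difference velocity features for a 3-D trajectory.

    traj : (T, 3) array of x, y, z positions, z vertical.
    Returns a (T-1, 3) array of radial, tangential, and vertical
    speeds; these are unchanged by any rotation about the z axis.
    """
    v = np.diff(traj, axis=0) / dt            # Cartesian velocities
    x, y = traj[:-1, 0], traj[:-1, 1]
    r = np.hypot(x, y) + 1e-9                 # distance from the axis
    v_rad = (v[:, 0] * x + v[:, 1] * y) / r   # toward/away from axis
    v_tan = (v[:, 1] * x - v[:, 0] * y) / r   # around the axis
    return np.stack([v_rad, v_tan, v[:, 2]], axis=1)
```

Feature sequences like these, rather than raw positions, would then be fed to the HMMs for training and recognition; rotating the performer about the vertical axis leaves the features unchanged.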


Knowledge-Based Systems | 2001

More than just a pretty face: conversational protocols and the affordances of embodiment

Justine Cassell; Timothy W. Bickmore; Lee W. Campbell; Hannes Högni Vilhjálmsson; Hao Yan

Prior research into embodied interface agents has found that users like them and find them engaging. However, results on the effectiveness of these interfaces for task completion have been mixed. In this paper, we argue that embodiment can serve an even stronger function if system designers use actual human conversational protocols in the design of the interface. Communicative behaviors such as salutations and farewells, conversational turn-taking with interruptions, and describing objects using hand gestures are examples of protocols that all native speakers of a language already know how to perform and can thus be leveraged in an intelligent interface. We discuss how these protocols are integrated into Rea, an embodied, multi-modal interface agent who acts as a real-estate salesperson, and we show why embodiment is required for their successful implementation.


Archive | 1999

Requirements for an Architecture for Embodied Conversational Characters

Justine Cassell; Timothy W. Bickmore; Lee W. Campbell; K. Chang; Hannes Högni Vilhjálmsson; Hao Yan

In this paper we describe the computational and architectural requirements for systems which support real-time multimodal interaction with an embodied conversational character. We argue that the three primary design drivers are real-time multithreaded entrainment, processing of both interactional and propositional information, and an approach based on a functional understanding of human face-to-face conversation. We then present an architecture which meets these requirements and an initial conversational character that we have developed who is capable of increasingly sophisticated multimodal input and output in a limited application domain.
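The second design driver above, handling interactional information (which regulates the conversation) separately from propositional information (which carries content), implies two processing paths running at different tempos. The Rea architecture itself is not public; as a rough sketch with hypothetical event names, that split might look like:

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    kind: str      # e.g. "gaze", "head_nod", "speech"
    payload: str

# Hypothetical classification of input modalities by conversational function.
INTERACTIONAL = {"gaze", "head_nod", "pause"}   # regulate turn-taking
PROPOSITIONAL = {"speech", "gesture_ref"}       # carry content

def route(event):
    """Dispatch by conversational function, not by input modality."""
    if event.kind in INTERACTIONAL:
        return react(event)       # fast path: must keep conversational tempo
    return deliberate(event)      # slow path: understanding and generation

def react(event):
    return f"backchannel for {event.kind}"

def deliberate(event):
    return f"plan response to: {event.payload}"
```

The point of the split is timing: a nod or gaze shift needs an immediate reaction to sustain entrainment, while understanding an utterance can tolerate the latency of deliberative processing.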


Communications of the ACM | 2000

Perceptual user interfaces: the KidsRoom

Aaron F. Bobick; Stephen S. Intille; James W. Davis; Freedom Baird; Claudio S. Pinhanez; Lee W. Campbell; Yuri A. Ivanov; Arjan Schütte; Andrew D. Wilson

The KidsRoom is a fully automated and interactive narrative playspace for children developed at the MIT Media Laboratory. Built to explore the design of perceptually based interactive interfaces, the KidsRoom uses computer vision action recognition simultaneously with computerized control of images, video, light, music, sound, and narration to guide children through a storybook adventure. Unlike most previous work in interactive environments, the KidsRoom does not require people in the space to wear any special clothing or hardware, and it can accommodate up to four people simultaneously. The system was designed to use computational perception to keep most interaction in the real, physical space even as participants interacted with virtual characters and scenes. The KidsRoom, designed in the spirit of several popular children's books, is an interactive child's bedroom that stimulates imagination by responding to actions with images and sound to transform itself into a storybook world. Two of the bedroom walls resemble the real walls in a child's room, complete with real furniture, posters, and windows. The other two walls are large, back-projected video screens used to transform the appearance of the room environment. Four speakers and one amplifier project steerable sound effects, music, and narration into the space. Three video cameras overlooking the space provide input to computer vision people-tracking and action recognition algorithms. Computer-controlled theatrical lighting illuminates the space, and a microphone detects the volume of enthusiastic screams. The room is fully automated. During the story, children interact with objects in the room, with one another, and with virtual creatures projected onto the walls. Perceptual recognition makes it possible for the room to respond to the physical actions of the children by appropriately moving the story forward, thereby creating a compelling interactive narrative experience.
Conversely, the narrative context of the story makes it easier to develop context-dependent (and therefore more robust) action recognition algorithms. The story developed for the KidsRoom begins with a normal-looking bedroom. Children enter after being told to find out the magic word by asking the talking furniture that speaks when approached. When the children scream the magic word loudly, sounds and images transform the room into a mystical forest. The story narration prods the children to stay in a group and follow a path to a river (see the stone path (a) in the figure). Along the way, they encounter roaring monsters and must hide behind the bed to make the roars …


International Conference on Computer Vision | 1995

Recognition of human body motion using phase space constraints

Lee W. Campbell; Aaron F. Bobick


Embodied conversational agents | 2001

Human conversation as a system framework: designing embodied conversational agents

Justine Cassell; Timothy W. Bickmore; Lee W. Campbell; Hannes Högni Vilhjálmsson; Hao Yan


Archive | 1998

An Architecture for Embodied Conversational Characters

Justine Cassell; Timothy W. Bickmore; Mark Billinghurst; Lee W. Campbell; Ken Chang; Hannes Högni Vilhjálmsson; Huiqiang Yan


Archive | 1998

Design Decisions for Interactive Environments: Evaluating the KidsRoom

Aaron F. Bobick; Stephen S. Intille; James W. Davis; Freedom Baird; Claudio S. Pinhanez; Lee W. Campbell; Yuri A. Ivanov; Arjan Schütte; Andrew D. Wilson

Collaboration


Dive into Lee W. Campbell's collaboration.

Top Co-Authors

Aaron F. Bobick (Georgia Institute of Technology)
Justine Cassell (Carnegie Mellon University)
Freedom Baird (Massachusetts Institute of Technology)
Hao Yan (Massachusetts Institute of Technology)
Yuri A. Ivanov (Massachusetts Institute of Technology)