Publication


Featured research published by Christopher Richard Wren.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

Pfinder: real-time tracking of the human body

Christopher Richard Wren; Ali Azarbayejani; Trevor Darrell; Alex Pentland

Pfinder is a real-time system for tracking people and interpreting their behavior. It runs at 10 Hz on a standard SGI Indy computer, and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to obtain a 2D representation of head and hands in a wide range of viewing conditions. Pfinder has been successfully used in a wide range of applications including wireless interfaces, video databases, and low-bandwidth coding.
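
The paper itself contains no code, but the classification step it describes, assigning each pixel to the scene or to a person "blob" according to per-class Gaussian statistics of color and position, can be sketched roughly as below. The class names, array shapes, and the single global scene Gaussian are simplifying assumptions for illustration, not the authors' implementation (Pfinder's real scene model is per-pixel and adaptive).

```python
# Illustrative sketch (not the Pfinder code): label each pixel by the class
# with the smallest squared Mahalanobis distance under a Gaussian model.
import numpy as np

class GaussianClassModel:
    """Gaussian model over a per-pixel feature vector, e.g. (x, y, Y, U, V)."""
    def __init__(self, mean, cov):
        self.mean = np.asarray(mean, dtype=float)
        self.cov_inv = np.linalg.inv(np.asarray(cov, dtype=float))

    def mahalanobis_sq(self, feats):
        # feats: (..., D) array of feature vectors
        d = feats - self.mean
        return np.einsum('...i,ij,...j->...', d, self.cov_inv, d)

def classify_pixels(features, scene_model, blob_models):
    """Return an (H, W) label map: 0 = scene, k = person blob k.
    A single global scene Gaussian is used here only to keep the sketch short."""
    dists = [scene_model.mahalanobis_sq(features)]
    dists += [b.mahalanobis_sq(features) for b in blob_models]
    return np.argmin(np.stack(dists, axis=0), axis=0)
```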


IEEE International Conference on Automatic Face and Gesture Recognition | 1998

Dynamic models of human motion

Christopher Richard Wren; Alex Pentland

This paper describes experiments in human motion understanding, defined here as estimation of the physical state of the body (the Plant) combined with interpretation of that part of the motion that cannot be predicted by the plant alone (the Behavior). The described behavior system operates in conjunction with a real-time, fully-dynamic, 3-D person tracking system that provides a mathematically concise formulation for incorporating a wide variety of physical constraints and probabilistic influences. The framework takes the form of a non-linear recursive filter that enables pixel-level, probabilistic processes to take advantage of the contextual knowledge encoded in the higher-level models. Results are shown that demonstrate both qualitative and quantitative gains in tracking performance.
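
The filter described in the abstract is a non-linear estimator over a full dynamic body model; the minimal linear sketch below only illustrates the recursive predict/correct structure in which a plant model supplies predictions and the measurement supplies corrections (the innovation). Matrix names follow standard Kalman-filter notation and are assumptions for illustration, not taken from the paper.

```python
# Minimal linear Kalman-filter step, sketching the recursive predict/correct
# loop; the paper's system uses a non-linear filter over a 3-D body model.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """x, P : prior state estimate and covariance
    z    : new measurement (e.g. an image-plane observation)
    F, H : state transition (plant) and observation models
    Q, R : process and measurement noise covariances"""
    # Predict: propagate the state through the dynamic (plant) model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Correct: blend in the measurement, weighted by the Kalman gain.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    innovation = z - H @ x_pred
    x_new = x_pred + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, innovation
```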


People first | 1999

Understanding purposeful human motion

Christopher Richard Wren; Alex Pentland

Human motion can be understood on several levels. The most basic level is the notion that humans are collections of things that have predictable visual appearance. Next is the notion that humans exist in a physical universe; as a consequence, a large part of human motion can be modeled and predicted with the laws of physics. Finally, there is the notion that humans use muscles to actively shape purposeful motion. We employ a recursive framework for real-time, 3-D tracking of human motion that enables pixel-level, probabilistic processes to take advantage of the contextual knowledge encoded in the higher-level models, including models of dynamic constraints on human motion. We show that models of purposeful action arise naturally from this framework and, further, that those models can be used to improve the perception of human motion. Results are shown that demonstrate both qualitative and quantitative gains in tracking performance.
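
As a rough illustration of how behavior models could feed back into perception in such a framework, the hedged sketch below scores several candidate behavior (control) models by how well each predicts the next observation and selects the best one. The interface and the scoring rule are assumptions made for illustration, not the paper's formulation.

```python
# Hedged sketch: pick the behavior model whose prediction best explains the
# new measurement; the chosen model could then drive the tracker's prediction.
import numpy as np

def select_behavior(x, z, H, behavior_models):
    """x: current state estimate; z: new measurement; H: observation matrix;
    behavior_models: list of callables mapping a state to a predicted next
    state. Returns the index of the best-matching model."""
    errors = [np.linalg.norm(z - H @ model(x)) for model in behavior_models]
    return int(np.argmin(errors))
```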


Applied Artificial Intelligence | 1997

Perceptive spaces for performance and entertainment: untethered interaction using computer vision and audition

Christopher Richard Wren; Flavia Sparacino; Ali Azarbayejani; Trevor Darrell; Thad Starner; Akira Kotani; Chloe M. Chao; Michal Hlavac; Kenneth B. Russell; Alex Pentland

Bulky head-mounted displays, data gloves, and severely limited movement have become synonymous with virtual environments. This is unfortunate, since virtual environments have such great potential in applications such as entertainment, animation by example, design interface, information browsing, and even expressive performance. In this article, we describe an approach to unencumbered natural interfaces called Perceptive Spaces. The spaces are unencumbered because they utilize passive sensors that do not require special clothing and large format displays that do not isolate the users from their environment. The spaces are natural because the open environment facilitates active participation. Several applications illustrate the expressive power of this approach, as well as the challenges associated with designing these interfaces.


Archive | 2000

Combining Audio and Video in Perceptive Spaces

Christopher Richard Wren; Sumit Basu; Flavia Sparacino; Alex Pentland

Virtual environments have great potential in applications such as entertainment, animation by example, design interface, information browsing, and even expressive performance. In this paper we describe an approach to unencumbered, natural interfaces called Perceptive Spaces, with a particular focus on efforts to include true multi-modal interfaces: interfaces that attend to both the speech and gesture of the user. The spaces are unencumbered because they utilize passive sensors that don’t require special clothing and large format displays that don’t isolate the user from their environment. The spaces are natural because the open environment facilitates active participation. Several applications illustrate the expressive power of this approach, as well as the challenges associated with designing these interfaces.


Multimedia Systems | 2006

Functional calibration for pan-tilt-zoom cameras in hybrid sensor networks

Christopher Richard Wren; Uğur M. Erdem; Ali Azarbayejani

Wide-area context awareness is a crucial enabling technology for next generation smart buildings and surveillance systems. It is not practical to gather this context awareness by covering the entire building with cameras. However, significant gaps in coverage caused by installing cameras in a sparse way can make it very difficult to infer the missing information. As a solution we advocate a class of hybrid perceptual systems that build a comprehensive model of activity in a large space, such as a building, by merging contextual information from a dense network of ultra-lightweight sensor nodes and video from a sparse network of cameras. In this paper we explore the task of automatically recovering the relative geometry between a pan-tilt-zoom camera and a network of one-bit motion detectors. We present results both for the recovery of geometry alone and also for the recovery of geometry jointly with simple activity models. Because we do not believe a metric calibration is necessary, or even entirely useful, for this task, we formulate and pursue the novel goal we term functional calibration. Functional calibration is a blending of geometry estimation and simple behavioral model discovery. Accordingly, results are evaluated by measuring the ability of the system to automatically foveate targets in a large, non-convex space, rather than by measuring, for example, pixel reconstruction error.
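
The abstract defines functional calibration as a blend of geometry estimation and simple behavioral model discovery; a very reduced sketch of the underlying bookkeeping, learning a pan/tilt/zoom setting per motion detector from co-occurring camera observations and reusing it to foveate, is given below. The class, method names, and simple averaging rule are illustrative assumptions and omit the joint activity modeling described in the paper.

```python
# Hedged sketch of functional (non-metric) calibration: associate each
# one-bit motion detector with a camera setting that tends to put activity
# near that detector in view.
from collections import defaultdict
import numpy as np

class FunctionalCalibration:
    def __init__(self):
        self._samples = defaultdict(list)   # sensor id -> list of (pan, tilt, zoom)

    def observe(self, sensor_id, ptz_when_target_seen):
        # Record the camera setting that had the target in view while
        # this motion detector was firing.
        self._samples[sensor_id].append(ptz_when_target_seen)

    def foveate(self, sensor_id):
        # Return a camera setting expected to cover activity at this sensor,
        # or None if no co-occurrences have been observed yet.
        samples = self._samples.get(sensor_id)
        if not samples:
            return None
        return tuple(np.mean(samples, axis=0))
```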


Archive | 2001

Methods of establishing a communications link using perceptual sensing of a user's presence

Christopher Richard Wren; Sumit Basu; Evgeniy Gusyatin


Archive | 1996

Real-Time 3-D Tracking of the Human Body

Ali Azarbayejani; Christopher Richard Wren


Archive | 1998

Real-time 3D Motion Capture

Alex Pentland; Christopher Richard Wren; David M. Harwood; Ismail Haritaoglu; Larry S. Davis; Thanarat Horprasert


Archive | 1999

Augmented Performance in Dance and Theater

Flavia Sparacino; Christopher Richard Wren; Glorianna Davenport; Alex Pentland

Collaboration


Dive into Christopher Richard Wren's collaboration.

Top Co-Authors

Alex Pentland (Massachusetts Institute of Technology)
Ali Azarbayejani (Massachusetts Institute of Technology)
Flavia Sparacino (Massachusetts Institute of Technology)
Trevor Darrell (University of California)
Glorianna Davenport (Massachusetts Institute of Technology)
Kenneth B. Russell (Massachusetts Institute of Technology)
Thad Starner (Georgia Institute of Technology)
Aaron F. Bobick (Georgia Institute of Technology)