Publications


Featured research published by Brian P. Clarkson.


International Conference on Acoustics, Speech, and Signal Processing | 1999

Unsupervised clustering of ambulatory audio and video

Brian P. Clarkson; Alex Pentland

A truly personal and reactive computer system should have access to the same information as its user, including the ambient sights and sounds. To this end, we have developed a system for extracting events and scenes from natural audio/visual input. We find our system can (without any prior labeling of data) cluster the audio/visual data into events, such as passing through doors and crossing the street. We also hierarchically cluster these events into scenes and obtain clusters that correlate with visiting the supermarket or walking down a busy street.
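
The paper's code is not reproduced here; as a rough illustration of the two-level grouping the abstract describes, the Python sketch below agglomeratively clusters synthetic per-frame audio/visual feature vectors into events and then clusters the event prototypes into scenes. The features, cluster counts, and linkage method are hypothetical choices, not the authors'.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical stand-in for per-second audio/visual features
# (e.g., spectral energies plus color histogram bins).
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(50, 8))   # three synthetic "events"
    for c in (0.0, 2.0, 4.0)
])

# Level 1: cluster frames into short events.
event_labels = fcluster(linkage(features, method="ward"),
                        t=3, criterion="maxclust")

# Level 2: cluster event prototypes (mean feature vectors) into scenes.
prototypes = np.vstack([features[event_labels == k].mean(axis=0)
                        for k in np.unique(event_labels)])
scene_labels = fcluster(linkage(prototypes, method="ward"),
                        t=2, criterion="maxclust")
print(scene_labels)
```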


International Symposium on Wearable Computers | 2000

Recognizing user context via wearable sensors

Brian P. Clarkson; Kenji Mase; Alex Pentland

We describe experiments in recognizing a person's situation from only a wearable camera and microphone. The types of situations considered in these experiments are coarse locations (such as at work, in a subway, or in a grocery store) and coarse events (such as being in a conversation or walking down a busy street) that would require only global, non-attentional features to distinguish them.
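
As a hedged illustration of classifying coarse situations from global features, the sketch below fits one diagonal Gaussian per situation class and labels a new window by maximum likelihood. The situation names and three-dimensional features are invented for the example; the paper's actual models are not reproduced here.

```python
import numpy as np

# Hypothetical global (non-attentional) features per time window,
# e.g., average luminance, dominant hue, audio energy.
def fit_gaussian(X):
    # Diagonal Gaussian per situation class.
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_likelihood(x, mu, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

rng = np.random.default_rng(1)
train = {
    "office": rng.normal([0.2, 0.5, 0.1], 0.1, size=(100, 3)),
    "subway": rng.normal([0.1, 0.2, 0.8], 0.1, size=(100, 3)),
    "street": rng.normal([0.7, 0.4, 0.6], 0.1, size=(100, 3)),
}
models = {name: fit_gaussian(X) for name, X in train.items()}

x = rng.normal([0.1, 0.2, 0.8], 0.1, size=3)      # unseen "subway" window
guess = max(models, key=lambda c: log_likelihood(x, *models[c]))
print(guess)
```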


IEEE International Conference on Automatic Face and Gesture Recognition | 2000

Understanding purposeful human motion

Christopher Richard Wren; Brian P. Clarkson; Alex Pentland

Human motion can be understood on many levels. The most basic level is the notion that humans are collections of things that have predictable visual appearance. Next is the notion that humans exist in a physical universe; as a consequence, a large part of human motion can be modeled and predicted with the laws of physics. Finally, there is the notion that humans utilize muscles to actively shape purposeful motion. We employ a recursive framework for real-time, 3D tracking of human motion that enables pixel-level, probabilistic processes to take advantage of the contextual knowledge encoded in the higher-level models, including models of dynamic constraints on human motion. We show that models of purposeful action arise naturally from this framework and, further, that those models can be used to improve the perception of human motion. Results are shown that demonstrate automatic discovery of features in this new feature space.
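
The authors' recursive framework is not spelled out in this abstract; a constant-velocity Kalman filter is the textbook instance of recursive tracking under a dynamic model and is shown below purely as an illustration of the idea. The frame rate and noise levels are assumptions.

```python
import numpy as np

# Constant-velocity Kalman filter for one 3D point: state = [x, y, z, vx, vy, vz].
dt = 1.0 / 30.0                                  # assumed 30 Hz frame rate
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)        # dynamics: position += velocity * dt
H = np.hstack([np.eye(3), np.zeros((3, 3))])     # we observe position only
Q = 1e-4 * np.eye(6)                             # process noise (dynamic constraint strength)
R = 1e-2 * np.eye(3)                             # measurement noise

x = np.zeros(6)
P = np.eye(6)
for z in np.random.default_rng(2).normal(0.0, 0.1, size=(10, 3)):
    # Predict under the motion model, then correct with the pixel-level measurement.
    x, P = F @ x, F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
print(x[:3])
```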


International Symposium on Wearable Computers | 1998

Extracting context from environmental audio

Brian P. Clarkson; Alex Pentland

When notifying the user about an appointment, message arrival, or other timely event, the wearable should not blindly sound an alarm (visual or auditory). Integration with the user requires the wearable to be aware of the user's situational context. Is the user in a conversation? On the phone or with someone nearby? Who? Is the user driving in his car, walking down the street, or sitting at his desk? We have developed a system that allows us to infer environmental context by audio classification. It was designed using a statistical pattern recognition framework based on hidden Markov models (HMMs), which allow us to recognize classes of sounds given enough examples from each class.
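
The abstract names hidden Markov models explicitly; a minimal per-class HMM classifier in that spirit can be sketched with the third-party hmmlearn library (not the authors' code). Each sound class gets its own HMM, and a new clip is labeled by whichever model scores it with the highest log-likelihood. The synthetic four-dimensional frames stand in for real acoustic features.

```python
import numpy as np
from hmmlearn import hmm   # pip install hmmlearn

rng = np.random.default_rng(3)

def make_class_data(mean):
    # Stand-in for MFCC-like frames from one sound class.
    return rng.normal(mean, 0.5, size=(200, 4))

classes = {"speech": 0.0, "traffic": 2.0, "office": 4.0}
models = {}
for name, mean in classes.items():
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", random_state=0)
    m.fit(make_class_data(mean))                 # one HMM per sound class
    models[name] = m

test = make_class_data(2.0)                      # unlabeled "traffic" clip
best = max(models, key=lambda name: models[name].score(test))
print(best)
```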


Human Factors in Computing Systems | 2001

The familiar: a living diary and companion

Brian P. Clarkson; Kenji Mase; Alex Pentland

We present a perceptual system, called the Familiar, that could allow users to collect their memories over a lifetime into a continually growing and adapting multimedia diary. The Familiar uses the natural patterns in sensor readings from a camera, microphone, and accelerometers to find recurring patterns of similarity and dissimilarity in the user's activities, and uses this information to intelligently structure the user's sensor data and associated memorabilia.
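
One simple way to surface the recurring patterns of similarity and dissimilarity the abstract mentions is a self-similarity matrix over per-day feature summaries; the sketch below is an assumption-laden stand-in, not the Familiar's actual method.

```python
import numpy as np

# Hypothetical day-by-day sensor summaries (one feature vector per day).
rng = np.random.default_rng(4)
weekday = rng.normal(0.0, 0.2, size=(5, 6))
weekend = rng.normal(1.5, 0.2, size=(2, 6))
days = np.vstack([weekday, weekend])

# Self-similarity matrix: recurring routines show up as blocks of high similarity.
norms = np.linalg.norm(days, axis=1, keepdims=True)
similarity = (days / norms) @ (days / norms).T
print(np.round(similarity, 2))
```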


International Symposium on Neural Networks | 2003

Learning communities: connectivity and dynamics of interacting agents

Tanzeem Choudhury; Brian P. Clarkson; Sumit Basu; Alex Pentland

Intelligent agents need to learn how the communication structure evolves within interacting groups and how to influence the group's overall behavior. We are developing methods to automatically and unobtrusively learn the social network structure that arises within a human group based on wearable sensors. Computational models of group interaction dynamics are derived from data gathered using wearable sensors. The questions we are exploring are: Can we tell who influences whom? Can we quantify this amount of influence? How can we modify group interactions to promote better information diffusion? The goal is real-time learning and modification of social network relationships by applying statistical machine learning techniques to data obtained from unobtrusive wearable sensors.
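
The group's actual influence model is not reproduced here; as a crude illustration of quantifying directed influence, the sketch below measures how well one person's lagged speaking state predicts another's. The binary speaking states and the influence proxy are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 500
a = (rng.random(T) < 0.5).astype(int)            # person A's speaking state per second
b = np.roll(a, 1)                                # person B tends to follow A one step later
b[rng.random(T) < 0.2] ^= 1                      # plus some noise

def influence(src, dst):
    # Fraction of the time dst's state matches src's previous state:
    # a crude proxy for directed influence.
    return np.mean(dst[1:] == src[:-1])

print(f"A -> B: {influence(a, b):.2f}")          # high: B follows A
print(f"B -> A: {influence(b, a):.2f}")          # near chance
```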


International Conference on Image Processing | 2000

Framing through peripheral perception

Brian P. Clarkson; Alex Pentland

Context is an essential source of information for systems that rely on real-world inputs. However, it is frequently ignored because modeling context, by definition, requires modeling features outside of the chosen domain. We model context using peripheral perception, that is, non-attentional features. This naturally and intuitively defines what it means to model context. We give the results of two experiments in the domain of wearable sensors (camera and microphone).
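
To make "non-attentional features" concrete, the sketch below computes a few whole-frame and whole-window statistics with no object detection or attention; the specific feature choices are hypothetical, not taken from the paper.

```python
import numpy as np

def peripheral_features(frame, audio):
    """Global, non-attentional summary of one time window.

    No object detection or foveated attention: only whole-frame and
    whole-window statistics (hypothetical feature choices).
    """
    return np.array([
        frame.mean(),                    # overall brightness
        frame.std(),                     # overall contrast
        np.abs(np.diff(audio)).mean(),   # crude audio activity level
        (audio ** 2).mean(),             # audio energy
    ])

rng = np.random.default_rng(6)
frame = rng.random((120, 160))           # grayscale video frame
audio = rng.normal(0.0, 0.1, size=8000)  # half a second at 16 kHz
print(peripheral_features(frame, audio))
```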


International Conference on Development and Learning | 2002

Learning your life: wearables and familiars

Brian P. Clarkson; Sumit Basu; Nathan Eagle; Tanzeem Choudhury; Alex Pentland

The wearable platform is an important perspective from which to collect developmental data. It gives a developmental agent the ability to experience many rich aspects of human behavior, locomotion, interaction, and social structure without knowing how to actively participate in these activities. In the I Sensed project, we combine natural sensor modalities (camera, microphone, gyros) in a wearable framework to build a first prototype of such an agent. We have also taken the next step of building robust statistical models with a massive data collection experiment: 100 days of full surround video, audio, and orientation, amounting to over 500 gigabytes of data. The first challenge with this data is the discovery and prediction of daily patterns: can we automatically infer the typical paths through someone's day and their daily activities, predicting what they will do next and detecting anomalies? Armed with this kind of omnivideo sensor data, we can also apply our learning tools to conversational scene analysis to help the agent develop a rudimentary understanding of social interactions. This work is aimed at understanding the structure of face-to-face conversations.
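
As an illustration of predicting daily activities and flagging anomalies, the sketch below fits a first-order Markov chain over a discretized activity stream; the activity labels and smoothing are invented for the example and are not the project's actual models.

```python
import numpy as np

# Hypothetical discretized activity stream from a day of sensor data.
states = ["home", "commute", "office", "lunch"]
idx = {s: i for i, s in enumerate(states)}
day = ["home", "commute", "office", "lunch", "office", "commute", "home"]

# First-order Markov model: count transitions, normalize rows.
counts = np.ones((len(states), len(states)))     # Laplace smoothing
for prev, nxt in zip(day, day[1:]):
    counts[idx[prev], idx[nxt]] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# Predict the most likely next activity and score improbable transitions.
print("after office:", states[int(P[idx["office"]].argmax())])
print("P(home -> lunch):", round(P[idx["home"], idx["lunch"]], 2))  # low = anomalous
```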


AVBPA | 1998

Multimodal person recognition using unconstrained audio and video

Tanzeem Choudhury; Brian P. Clarkson; Tony Jebara; Alex Pentland


Archive | 1998

Auditory Context Awareness via Wearable Computing

Brian P. Clarkson; Nitin Nick Sawhney; Alex Pentland

Collaboration


Dive into Brian P. Clarkson's collaborations.

Top Co-Authors

Alex Pentland
Massachusetts Institute of Technology

Bradley J. Rhodes
Massachusetts Institute of Technology

Christopher Richard Wren
Massachusetts Institute of Technology

Nitin Nick Sawhney
Massachusetts Institute of Technology