Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Alex Pentland is active.

Publication


Featured research published by Alex Pentland.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

Pfinder: real-time tracking of the human body

Christopher Richard Wren; Ali Azarbayejani; Trevor Darrell; Alex Pentland

Pfinder is a real-time system for tracking people and interpreting their behavior. It runs at 10 Hz on a standard SGI Indy computer, and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to obtain a 2D representation of head and hands in a wide range of viewing conditions. Pfinder has been successfully used in a wide range of applications including wireless interfaces, video databases, and low-bandwidth coding.
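The per-pixel classification at the heart of such a statistical color-and-shape model can be sketched as a maximum-likelihood choice among Gaussian color classes, one per blob. This is a toy illustration, not the authors' implementation; the class means, covariances, and pixel values below are invented:

```python
import numpy as np

def log_likelihood(pixels, mean, cov):
    """Gaussian log-likelihood of color pixels under one blob's color model."""
    d = pixels - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    maha = np.einsum("ij,jk,ik->i", d, inv, d)   # squared Mahalanobis distance
    return -0.5 * (maha + logdet + mean.size * np.log(2 * np.pi))

def classify(pixels, blobs):
    """Assign each pixel to the blob class with the highest likelihood."""
    scores = np.stack([log_likelihood(pixels, m, c) for m, c in blobs])
    return scores.argmax(axis=0)

# Two invented color classes, "skin" (class 0) and "background" (class 1),
# as Gaussians in normalized RGB.
skin = (np.array([0.8, 0.5, 0.4]), np.eye(3) * 0.01)
bg = (np.array([0.2, 0.3, 0.2]), np.eye(3) * 0.01)
px = np.array([[0.78, 0.52, 0.41], [0.21, 0.28, 0.22]])
labels = classify(px, [skin, bg])   # first pixel -> skin, second -> background
```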


International Journal of Computer Vision | 1996

Photobook: content-based manipulation of image databases

Alex Pentland; Rosalind W. Picard; Stan Sclaroff

We describe the Photobook system, which is a set of interactive tools for browsing and searching images and image sequences. These query tools differ from those used in standard image databases in that they make direct use of the image content rather than relying on text annotations. Direct search on image content is made possible by use of semantics-preserving image compression, which reduces images to a small set of perceptually-significant coefficients. We discuss three types of Photobook descriptions in detail: one that allows search based on appearance, one that uses 2-D shape, and a third that allows search based on textural properties. These image content descriptions can be combined with each other and with text-based descriptions to provide a sophisticated browsing and search capability. In this paper we demonstrate Photobook on databases containing images of people, video keyframes, hand tools, fish, texture swatches, and 3-D medical data.
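The search-by-content idea can be sketched with a PCA-style eigenspace: each image is compressed to a few coefficients, and a query becomes a nearest-neighbor lookup in that coefficient space. A minimal sketch on synthetic 8x8 "images"; the dimensions and data are invented, not Photobook's actual descriptions:

```python
import numpy as np

def fit_eigenspace(images, k):
    """Learn a k-dimensional eigenspace from flattened training images."""
    mean = images.mean(axis=0)
    # SVD of the centered data yields the principal components (eigenimages).
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, vt[:k]

def project(images, mean, basis):
    """Reduce each image to a small set of coefficients."""
    return (images - mean) @ basis.T

def query(db_coeffs, q_coeff):
    """Return database indices ranked by distance in coefficient space."""
    dists = np.linalg.norm(db_coeffs - q_coeff, axis=1)
    return np.argsort(dists)

rng = np.random.default_rng(0)
db = rng.normal(size=(20, 64))             # 20 tiny 8x8 "images", flattened
mean, basis = fit_eigenspace(db, k=8)
coeffs = project(db, mean, basis)
ranking = query(coeffs, project(db[3:4], mean, basis)[0])
# the query image itself ranks first
```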


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

Probabilistic visual learning for object representation

Baback Moghaddam; Alex Pentland

We present an unsupervised technique for visual learning, which is based on density estimation in high-dimensional spaces using an eigenspace decomposition. Two types of density estimates are derived for modeling the training data: a multivariate Gaussian (for unimodal distributions) and a mixture-of-Gaussians model (for multimodal distributions). Those probability densities are then used to formulate a maximum-likelihood estimation framework for visual search and target detection for automatic object recognition and coding. Our learning technique is applied to the probabilistic visual modeling, detection, recognition, and coding of human faces and nonrigid objects, such as hands.
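The unimodal case of this density estimation can be sketched directly: fit a single multivariate Gaussian to in-class coefficients in the eigenspace, then score candidates by log-likelihood for maximum-likelihood detection. The coefficient dimensions and data below are invented for illustration:

```python
import numpy as np

def fit_gaussian(coeffs):
    """ML estimate of a multivariate Gaussian over eigenspace coefficients."""
    mean = coeffs.mean(axis=0)
    # small ridge keeps the covariance invertible
    cov = np.cov(coeffs, rowvar=False) + 1e-6 * np.eye(coeffs.shape[1])
    return mean, cov

def log_density(x, mean, cov):
    """Gaussian log-density, used as the detection score."""
    d = x - mean
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ inv @ d + logdet + len(mean) * np.log(2 * np.pi))

rng = np.random.default_rng(1)
face_coeffs = rng.normal(0.0, 1.0, size=(200, 4))   # synthetic in-class samples
mean, cov = fit_gaussian(face_coeffs)

in_class = log_density(np.zeros(4), mean, cov)      # near the training density
outlier = log_density(np.full(4, 8.0), mean, cov)   # far from it
# detection: the in-class point scores far higher than the outlier
```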


Proceedings of the National Academy of Sciences of the United States of America | 2009

Inferring friendship network structure by using mobile phone data

Nathan Eagle; Alex Pentland; David Lazer

Data collected from mobile phones have the potential to provide insight into the relational dynamics of individuals. This paper compares observational data from mobile phones with standard self-report survey data. We find that the information from these two data sources is overlapping but distinct. For example, self-reports of physical proximity deviate from mobile phone records depending on the recency and salience of the interactions. We also demonstrate that it is possible to accurately infer 95% of friendships based on the observational data alone, where friend dyads demonstrate distinctive temporal and spatial patterns in their physical proximity and calling patterns. These behavioral patterns, in turn, allow the prediction of individual-level outcomes such as job satisfaction.
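The inference step can be caricatured as thresholding dyad-level behavioral features such as off-hours proximity and calling volume. The actual study fits a much richer model; these feature names, thresholds, and numbers are invented:

```python
# Toy dyad features: (nighttime/weekend proximity hours, calls per month)
dyads = {
    ("a", "b"): (6.0, 12),   # frequent off-hours contact
    ("a", "c"): (0.2, 1),    # workplace-only contact
    ("b", "c"): (4.5, 8),
}

def infer_friends(dyads, prox_min=2.0, calls_min=5):
    """Label a dyad a friendship when both behavioral signals clear a threshold."""
    return {pair for pair, (prox, calls) in dyads.items()
            if prox >= prox_min and calls >= calls_min}

friends = infer_friends(dyads)   # the two high-contact dyads are labeled friends
```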


Science | 2009

Computational Social Science

David Lazer; Alex Pentland; Lada A. Adamic; Sinan Aral; Albert-László Barabási; Devon Brewer; Nicholas A. Christakis; Noshir Contractor; James H. Fowler; Myron P. Gutmann; Tony Jebara; Gary King; Michael W. Macy; Deb Roy; Marshall W. Van Alstyne

A field is emerging that leverages the capacity to collect and analyze data at a scale that may reveal patterns of individual and group behaviors.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1998

Real-time American sign language recognition using desk and wearable computer based video

Thad Starner; Joshua Weaver; Alex Pentland

We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon.
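At the core of HMM-based recognition is Viterbi decoding: recovering the most likely hidden state sequence from the observed hand features. A generic log-domain sketch with made-up two-state, two-symbol parameters (not the paper's models, which emit continuous hand-tracking features):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state path for a discrete observation sequence.

    pi: initial state probabilities, A: transition matrix, B: emission matrix.
    Works in the log domain for numerical stability.
    """
    T, N = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = logpi + logB[:, obs[0]]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA          # score of each i -> j transition
        back[t] = scores.argmax(axis=0)         # best predecessor per state
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):               # backtrack through predecessors
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two invented hand-shape states and two observed feature symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
path = viterbi([0, 0, 1, 1], pi, A, B)   # -> [0, 0, 1, 1]
```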


Computer Vision and Pattern Recognition | 1997

Coupled hidden Markov models for complex action recognition

Matthew Brand; Nuria Oliver; Alex Pentland

We present algorithms for coupling and training hidden Markov models (HMMs) to model interacting processes, and demonstrate their superiority to conventional HMMs in a vision task classifying two-handed actions. HMMs are perhaps the most successful framework in perceptual computing for modeling and classifying dynamic behaviors, popular because they offer dynamic time warping, a training algorithm, and clear Bayesian semantics. However, the Markovian framework makes strong, restrictive assumptions about the system generating the signal: that it is a single process having a small number of states and an extremely limited state memory. The single-process model is often inappropriate for vision (and speech) applications, resulting in low ceilings on model performance. Coupled HMMs provide an efficient way to resolve many of these problems, and offer superior training speeds, model likelihoods, and robustness to initial conditions.
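The coupling can be sketched in a forward pass over the joint state space, where each chain's next state is conditioned on the previous states of both chains. The uniform toy parameters below are invented purely to show the recursion:

```python
import numpy as np

def forward_coupled(obs1, obs2, pi1, pi2, T1, T2, B1, B2):
    """Joint likelihood of two observation streams under a coupled HMM.

    T1[i, j, k] = P(chain 1 next state is k | chain 1 in i, chain 2 in j),
    T2[i, j, k] = P(chain 2 next state is k | chain 1 in i, chain 2 in j).
    This cross-conditioning is the coupling.
    """
    N1, N2 = len(pi1), len(pi2)
    alpha = np.outer(pi1 * B1[:, obs1[0]], pi2 * B2[:, obs2[0]])
    for t in range(1, len(obs1)):
        nxt = np.zeros((N1, N2))
        for k1 in range(N1):
            for k2 in range(N2):
                trans = T1[:, :, k1] * T2[:, :, k2]   # depends on BOTH previous states
                nxt[k1, k2] = (alpha * trans).sum() * B1[k1, obs1[t]] * B2[k2, obs2[t]]
        alpha = nxt
    return alpha.sum()

# Uniform toy parameters: two states and two symbols per chain.
pi = np.array([0.5, 0.5])
T = np.full((2, 2, 2), 0.5)   # every transition equally likely
B = np.full((2, 2), 0.5)      # every symbol equally likely
lik = forward_coupled([0, 1], [1, 0], pi, pi, T, T, B, B)   # -> 0.25 ** 2
```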


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1987

A New Sense for Depth of Field

Alex Pentland

This paper examines a novel source of depth information: focal gradients resulting from the limited depth of field inherent in most optical systems. Previously, autofocus schemes have used depth of field to measure depth by searching for the lens setting that gives the best focus, repeating this search separately for each image point. This search is unnecessary, for there is a smooth gradient of focus as a function of depth. By measuring the amount of defocus, therefore, we can estimate depth simultaneously at all points, using only one or two images. It is proved that this source of information can be used to make reliable depth maps of useful accuracy with relatively minimal computation. Experiments with realistic imagery show that measurement of these optical gradients can provide depth information roughly comparable to stereo disparity or motion parallax, while avoiding image-to-image matching problems.
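The measurement idea can be sketched with a standard thin-lens blur model: for a point beyond the plane of focus, the blur-circle diameter grows with the focus error, so depth can be read off from a single blur measurement. The lens numbers below are invented; this is a textbook model, not the paper's exact formulation:

```python
def blur_of_depth(u, f, s, D):
    """Forward thin-lens model: blur-circle diameter for a point at distance u.

    f: focal length, s: lens-to-sensor distance, D: aperture diameter,
    all in the same units. Assumes u lies beyond the in-focus plane.
    """
    return D * s * (1.0 / f - 1.0 / s - 1.0 / u)

def depth_from_blur(sigma, f, s, D):
    """Invert the model: recover object distance from blur diameter sigma."""
    return 1.0 / (1.0 / f - 1.0 / s - sigma / (D * s))

# Round-trip check with made-up lens numbers (millimeters).
f, s, D = 50.0, 52.0, 25.0
sigma = blur_of_depth(2000.0, f, s, D)
u = depth_from_blur(sigma, f, s, D)   # recovers approximately 2000.0
```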


Science | 2010

Evidence for a Collective Intelligence Factor in the Performance of Human Groups

Anita Williams Woolley; Christopher F. Chabris; Alex Pentland; Nada Hashmi; Thomas W. Malone

Meeting of Minds: The performance of humans across a range of different kinds of cognitive tasks has been encapsulated as a common statistical factor called g, or the general intelligence factor. What intelligence actually is remains unclear and hotly debated, yet there is a reproducible association of g with performance outcomes, such as income and academic achievement. Woolley et al. (p. 686, published online 30 September) report a psychometric methodology for quantifying a factor termed “collective intelligence” (c), which reflects how well groups perform on a similarly diverse set of group problem-solving tasks. The primary contributors to c appear to be the g factors of the group members, along with a propensity toward social sensitivity, in essence, how well individuals work with others.

A metric for group performance on a battery of cognitive tasks yields a group intelligence quantity: collective intelligence. Psychologists have repeatedly shown that a single statistical factor, often called “general intelligence”, emerges from the correlations among people’s performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of “collective intelligence” exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group’s performance on a wide variety of tasks. This “c factor” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.
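The statistical idea behind a general factor can be sketched: when one latent quantity drives scores on many tasks, the first eigenvalue of the cross-task correlation matrix captures most of the variance. This simulation is illustrative only; the loadings and noise levels are invented, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulate 100 groups: a latent factor "c" drives scores on 5 different tasks.
c = rng.normal(size=100)
scores = np.outer(c, np.full(5, 0.8)) + 0.3 * rng.normal(size=(100, 5))

# First eigenvector of the correlation matrix approximates the common factor.
corr = np.corrcoef(scores, rowvar=False)
evals, evecs = np.linalg.eigh(corr)           # eigenvalues in ascending order
explained = evals[-1] / evals.sum()           # variance share of the first factor
# with a strong common factor, this share dominates the remaining eigenvalues
```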


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

Coding, analysis, interpretation, and recognition of facial expressions

Irfan A. Essa; Alex Pentland

We describe a computer vision system for observing facial motion by using an optimal estimation optical flow method coupled with geometric, physical and motion-based dynamic models describing the facial structure. Our method produces a reliable parametric representation of the face's independent muscle action groups, as well as an accurate estimate of facial motion. Previous efforts at analysis of facial expression have been based on the facial action coding system (FACS), a representation developed in order to allow human psychologists to code expression from static pictures. To avoid use of this heuristic coding scheme, we have used our computer vision system to probabilistically characterize facial motion and muscle activation in an experimental population, thus deriving a new, more accurate, representation of human facial expressions that we call FACS+. Finally, we show how this method can be used for coding, analysis, interpretation, and recognition of facial expressions.
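The optical-flow front end can be sketched with a least-squares (Lucas-Kanade-style) patch estimate, standing in here for the paper's optimal-estimation formulation; the images are a synthetic intensity ramp shifted by one pixel:

```python
import numpy as np

def flow_for_patch(I0, I1):
    """Least-squares flow (u, v) for one patch, assuming constant flow.

    Solves the brightness-constancy equations Ix*u + Iy*v = -It over the patch.
    """
    Iy, Ix = np.gradient(I0.astype(float))     # spatial image gradients
    It = I1.astype(float) - I0.astype(float)   # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow

# A horizontal ramp shifted right by one pixel should give u close to 1, v close to 0.
x = np.arange(16, dtype=float)
I0 = np.tile(x, (16, 1))
I1 = np.tile(x - 1.0, (16, 1))
u, v = flow_for_patch(I0, I1)
```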

Collaboration


Dive into Alex Pentland's collaborations.

Top Co-Authors

Trevor Darrell
University of California

Bruno Lepri
Fondazione Bruno Kessler

Yaniv Altshuler
Massachusetts Institute of Technology

Thad Starner
Georgia Institute of Technology

Ali Azarbayejani
Massachusetts Institute of Technology

Daniel Olguin Olguin
Massachusetts Institute of Technology