Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ali Azarbayejani is active.

Publication


Featured research published by Ali Azarbayejani.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997

Pfinder: real-time tracking of the human body

Christopher Richard Wren; Ali Azarbayejani; Trevor Darrell; Alex Pentland

Pfinder is a real-time system for tracking people and interpreting their behavior. It runs at 10 Hz on a standard SGI Indy computer, and has performed reliably on thousands of people in many different physical locations. The system uses a multiclass statistical model of color and shape to obtain a 2D representation of head and hands in a wide range of viewing conditions. Pfinder has been successfully used in a wide range of applications including wireless interfaces, video databases, and low-bandwidth coding.
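As a rough illustration of the multiclass statistical model described above, each pixel can be scored against per-class Gaussians over combined position-and-color features and assigned to the most likely class. This is a minimal sketch under assumed feature dimensions and parameters, not Pfinder's actual model:

```python
import numpy as np

def log_likelihood(feature, mean, cov):
    """Log-likelihood of a pixel feature vector under one Gaussian blob class."""
    d = feature - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.inv(cov) @ d + logdet + len(d) * np.log(2 * np.pi))

def classify_pixel(feature, classes):
    """Assign a pixel to the most likely blob class (MAP with uniform priors)."""
    return int(np.argmax([log_likelihood(feature, m, c) for m, c in classes]))

# Two hypothetical blob classes over (x, y, U, V) position-plus-chrominance features:
skin = (np.array([100.0, 50.0, 120.0, 150.0]), np.eye(4) * 25.0)
shirt = (np.array([100.0, 120.0, 80.0, 90.0]), np.eye(4) * 25.0)
classes = [skin, shirt]

pixel = np.array([98.0, 52.0, 118.0, 148.0])  # near the "skin" class
print(classify_pixel(pixel, classes))          # -> 0
```

In the real system such class statistics are updated over time as the person moves; here they are fixed for simplicity.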


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1995

Recursive estimation of motion, structure, and focal length

Ali Azarbayejani; Alex Pentland

Presents a formulation for recursive recovery of motion, pointwise structure, and focal length from feature correspondences tracked through an image sequence. In addition to adding focal length to the state vector, several representational improvements are made over earlier structure from motion formulations, yielding a stable and accurate estimation framework which applies uniformly to both true perspective and orthographic projection. Results on synthetic and real imagery illustrate the performance of the estimator.
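One way to read the claim that a single framework covers both perspective and orthographic projection is to parameterize the camera by the inverse focal length beta = 1/f, so that beta = 0 degenerates gracefully to the orthographic limit. A minimal sketch of such a measurement model (a simplified reading, not the paper's full formulation):

```python
import numpy as np

def project(point_cam, beta):
    """Central projection parameterized by inverse focal length beta = 1/f.

    beta > 0 gives true perspective; beta = 0 reduces to orthographic
    projection, so one estimator can cover both regimes."""
    x, y, z = point_cam
    s = 1.0 + beta * z
    return np.array([x / s, y / s])

p = np.array([1.0, 2.0, 10.0])  # a point in camera coordinates
print(project(p, 0.1))           # perspective, f = 10
print(project(p, 0.0))           # orthographic limit
```

In a recursive estimator, beta would simply be one more component of the state vector, refined jointly with motion and structure.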


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1993

Visually controlled graphics

Ali Azarbayejani; Thad Starner; Bradley Horowitz; Alex Pentland

Interactive graphics systems that are driven by visual input are discussed. The underlying computer vision techniques and a theoretical formulation that addresses issues of accuracy, computational efficiency, and compensation for display latency are presented. Experimental results quantitatively compare the accuracy of the visual technique with traditional sensing. An extension to the basic technique to include structure recovery is discussed.


IEEE Signal Processing Magazine | 1999

3D structure from 2D motion

Tony Jebara; Ali Azarbayejani; Alex Pentland

This article motivates the structure from motion (SfM) approaches by describing some current practical applications. This is followed by a brief discussion of the background of the field. Then, several techniques are outlined that show various important approaches and paradigms to the SfM problem. Critical issues, advantages and disadvantages are pointed out. Subsequently, we present our SfM approach for recursive estimation of motion, structure, and camera geometry in a nonlinear dynamic system framework. Results are given for synthetic and real images. These are used to assess the accuracy and stability of the technique. We then discuss some practical and real-time applications we have encountered and the reliability and flexibility of the approach in those settings. Finally, we conclude with results from an independent evaluation study conducted by industry where the proposed SfM algorithm compared favorably to alternative approaches.


International Conference on Automatic Face and Gesture Recognition | 1996

Invariant features for 3-D gesture recognition

Lee W. Campbell; David A. Becker; Ali Azarbayejani; Aaron F. Bobick; Alex Pentland

Ten different feature vectors are tested in a gesture recognition task which utilizes 3D data gathered in real-time from stereo video cameras, and HMMs for learning and recognition of gestures. Results indicate velocity features are superior to positional features, and partial rotational invariance is sufficient for good performance.
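The reported advantage of velocity features with partial rotational invariance can be sketched by differencing a 3-D track and keeping only quantities invariant to rotation about the vertical axis. The particular features below are illustrative, not one of the ten vectors tested in the paper:

```python
import numpy as np

def velocity_features(track, dt=1.0 / 30.0):
    """Convert a (T, 3) track of 3-D positions into per-frame velocity features
    invariant to rotation about the vertical (z) axis: horizontal speed and
    vertical velocity. (Feature choice is illustrative.)"""
    v = np.diff(track, axis=0) / dt           # (T-1, 3) finite-difference velocities
    horiz_speed = np.hypot(v[:, 0], v[:, 1])  # unchanged by rotation about z
    return np.column_stack([horiz_speed, v[:, 2]])

track = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 1.0]])
print(velocity_features(track, dt=1.0))
```

Sequences of such feature vectors would then be fed to per-gesture HMMs for training and recognition.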


International Conference on Pattern Recognition | 1996

Real-time self-calibrating stereo person tracking using 3-D shape estimation from blob features

Ali Azarbayejani; Alex Pentland

We describe a method for estimation of 3D geometry from 2D blob features. Blob features are clusters of similar pixels in the image plane and can arise from similarity of color, texture, motion and other signal-based metrics. The motivation for considering such features comes from recent successes in real-time extraction and tracking of such blob features in complex cluttered scenes in which traditional feature finders fail, e.g. scenes containing moving people. We use nonlinear modeling and a combination of iterative and recursive estimation methods to recover 3D geometry from blob correspondences across multiple images. The 3D geometry includes the 3D shapes, translations, and orientations of blobs and the relative orientation of the cameras. Using this technique, we have developed a real-time wide-baseline stereo person tracking system which can self-calibrate by watching a moving person and can subsequently track people's heads and hands with RMS errors of 1-2 cm in translation and 2 degrees in rotation. The blob formulation is efficient and reliable, running at 20-30 Hz on a pair of SGI Indy R4400 workstations with no special hardware.
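A blob feature of the kind described reduces a cluster of pixels to low-order statistics. A minimal sketch of that reduction (a hypothetical helper, not the authors' code):

```python
import numpy as np

def blob_params(pixels):
    """Summarize a cluster of (row, col) pixel coordinates as a blob feature:
    its mean position and 2x2 spatial covariance. Correspondences between
    such blobs across cameras drive the 3-D estimation described above."""
    pts = np.asarray(pixels, dtype=float)
    return pts.mean(axis=0), np.cov(pts, rowvar=False)

mean, cov = blob_params([[0, 0], [0, 2], [2, 0], [2, 2]])
print(mean)  # centroid of the cluster
```

Because each blob is just a mean and covariance, matching and tracking stay cheap, which is consistent with the reported 20-30 Hz rates.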


Computer Vision and Pattern Recognition | 1993

Recursive estimation of structure and motion using relative orientation constraints

Ali Azarbayejani; Bradley Horowitz; Alex Pentland

A recursive estimation technique for recovering the 3-D motion and pointwise structure of an object is presented. It is based on the use of relative orientation constraints in a local coordinate frame. By carefully formulating the problem to propagate all constraints and to use the minimal number of parameters, an estimator is obtained which is remarkably accurate, stable, and fast-converging. Numerous experiments using both real and synthetic data demonstrate structure recovery with a typical error of 1.5% and typical motion recovery errors of 1% in translation and 2 degrees in rotation.


Applied Artificial Intelligence | 1997

Perceptive spaces for performance and entertainment: untethered interaction using computer vision and audition

Christopher Richard Wren; Flavia Sparacino; Ali Azarbayejani; Trevor Darrell; Thad Starner; Akira Kotani; Chloe M. Chao; Michal Hlavac; Kenneth B. Russell; Alex Pentland

Bulky head-mounted displays, data gloves, and severely limited movement have become synonymous with virtual environments. This is unfortunate, since virtual environments have such great potential in applications such as entertainment, animation by example, design interface, information browsing, and even expressive performance. In this article, we describe an approach to unencumbered natural interfaces called Perceptive Spaces. The spaces are unencumbered because they utilize passive sensors that do not require special clothing and large format displays that do not isolate the users from their environment. The spaces are natural because the open environment facilitates active participation. Several applications illustrate the expressive power of this approach, as well as the challenges associated with designing these interfaces.


Proceedings of 1994 IEEE 2nd CAD-Based Vision Workshop | 1994

Recursive estimation for CAD model recovery

Ali Azarbayejani; Tinsley A. Galyean; Bradley Horowitz; Alex Pentland

We describe a system for semiautomatically extracting 3-D object models from raw, uncalibrated video. The system utilizes a recursive estimator to accurately recover camera motion, point-wise structure, and camera focal length. Recovered 3-D points are used to compute a piecewise-smooth surface model for the object. Recovered motion and camera geometry are then used along with the original video to texture map the surfaces. We describe extensions to our previously-reported geometry estimation formulation that incorporate focal length estimation and other improvements, so that accurate estimates of structure and camera motion can be recovered from uncalibrated video cameras. We also discuss the buildup of texture maps from sequences of images, which is important in producing realistic looking models. Examples demonstrate generation of a realistic 3-D texture mapped model from a video sequence, the post-production manipulation of video, and the combination of computer graphics models with video.


International Symposium on 3D Data Processing Visualization and Transmission | 2002

Browsing 3-D spaces with 3-D vision: body-driven navigation through the internet city

Flavia Sparacino; Christopher R. Wren; Ali Azarbayejani; Alex Pentland

This paper presents a computer vision stereo based interface to navigate inside a 3-D Internet city, using body gestures. A wide-baseline stereo pair of cameras is used to obtain 3-D body models of the user’s hands and head in a small desk-area environment. The interface feeds this information to an HMM gesture classifier to reliably recognize the user’s browsing commands. To illustrate the features of this interface we describe its application to our 3-D Internet browser which facilitates the recollection of information by organizing and embedding it inside a virtual city through which the user navigates.

Collaboration


Dive into Ali Azarbayejani's collaborations.

Top Co-Authors

Alex Pentland (Massachusetts Institute of Technology)
Trevor Darrell (University of California)
Christopher Richard Wren (Massachusetts Institute of Technology)
Thad Starner (Georgia Institute of Technology)
Bradley Horowitz (Massachusetts Institute of Technology)
Flavia Sparacino (Massachusetts Institute of Technology)
Aaron F. Bobick (Georgia Institute of Technology)
Christopher R. Wren (Mitsubishi Electric Research Laboratories)
Harold L. Alexander (Massachusetts Institute of Technology)