
Publication


Featured research published by Baback Moghaddam.


IEEE Transactions on Image Processing | 2004

Visual tracking and recognition using appearance-adaptive models in particle filters

Shaohua Kevin Zhou; Rama Chellappa; Baback Moghaddam

We present an approach that incorporates appearance-adaptive models in a particle filter to realize robust visual tracking and recognition algorithms. Tracking requires modeling interframe motion and appearance changes, whereas recognition requires modeling appearance changes between frames and gallery images. In conventional tracking algorithms, the appearance model is either fixed or rapidly changing, and the motion model is simply a random walk with fixed noise variance. Also, the number of particles is typically fixed. All these factors make the visual tracker unstable. To stabilize the tracker, we propose the following modifications: an observation model arising from an adaptive appearance model, an adaptive velocity motion model with adaptive noise variance, and an adaptive number of particles. The adaptive-velocity model is derived using a first-order linear predictor based on the appearance difference between the incoming observation and the previous particle configuration. Occlusion analysis is implemented using robust statistics. Experimental results on tracking visual objects in long outdoor and indoor video sequences demonstrate the effectiveness and robustness of our tracking algorithm. We then perform simultaneous tracking and recognition by embedding them in a particle filter. For recognition purposes, we model the appearance changes between frames and gallery images by constructing the intra- and extrapersonal spaces. Accurate recognition is achieved when confronted by pose and view variations.
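The adaptive-velocity idea can be illustrated with a minimal 1-D particle-filter sketch (a hypothetical toy setup, not the authors' implementation): the motion proposal is shifted by an externally supplied velocity estimate (the paper derives it from appearance differences), particles are weighted by an appearance likelihood, then resampled.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, observation, velocity, noise_std=0.5):
    """One predict-weight-resample step of a 1-D particle filter.

    The proposal drifts by an adaptive velocity (here simply passed in),
    mimicking the adaptive-velocity motion model from the paper.
    """
    # Predict: velocity drift plus Gaussian diffusion.
    particles = particles + velocity + rng.normal(0.0, noise_std, size=particles.shape)
    # Weight: Gaussian appearance likelihood around the observation.
    weights = np.exp(-0.5 * ((particles - observation) / noise_std) ** 2)
    weights = weights / weights.sum()
    # Resample: multinomial resampling by weight.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Track a target moving with a constant velocity of +1 per frame.
particles = rng.normal(0.0, 1.0, size=500)
true_pos = 0.0
for _ in range(20):
    true_pos += 1.0
    particles = particle_filter_step(particles, true_pos, velocity=1.0)
```

After 20 steps the particle cloud stays locked on the moving target; an adaptive number of particles (as in the paper) would additionally grow or shrink the cloud with tracking uncertainty.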


IEEE International Conference on Automatic Face and Gesture Recognition | 2002

A unified learning framework for real time face detection and classification

Gregory Shakhnarovich; Paul A. Viola; Baback Moghaddam

This paper presents progress toward an integrated, robust, real-time face detection and demographic analysis system. Faces are detected and extracted using the fast algorithm proposed by P. Viola and M.J. Jones (2001). Detected faces are passed to a demographic (gender and ethnicity) classifier which uses the same architecture as the face detector. This demographic classifier is extremely fast, and delivers error rates slightly better than the best-known classifiers. To counter the unconstrained and noisy sensing environment, demographic information is integrated across time for each individual. Therefore, the final demographic classification combines estimates from many facial detections in order to reduce the error rate. The entire system processes 10 frames per second on an 800-MHz Intel Pentium III.
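The temporal integration step can be sketched as combining independent per-frame estimates in log-odds space (an illustrative fusion rule in the same spirit as the paper's combination of many facial detections; the specific function is not from the paper):

```python
import math

def fuse_detections(probs):
    """Combine per-frame class probabilities by summing log-odds,
    treating frames as independent evidence for one individual."""
    logit = sum(math.log(p / (1.0 - p)) for p in probs)
    return 1.0 / (1.0 + math.exp(-logit))

# Three mildly confident frame-level estimates yield a confident fusion.
fused = fuse_detections([0.7, 0.6, 0.8])
```

Summing log-odds is equivalent to a naive-Bayes combination of the frames, which is why the fused error rate drops as more detections accumulate.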


Handbook of Face Recognition | 2011

Face Recognition in Subspaces

Gregory Shakhnarovich; Baback Moghaddam

Images of faces, represented as high-dimensional pixel arrays, often belong to a manifold of intrinsically low dimension. Face recognition, and computer vision research in general, has witnessed a growing interest in techniques that capitalize on this observation and apply algebraic and statistical tools for extraction and analysis of the underlying manifold. In this chapter, we describe in roughly chronological order techniques that identify, parameterize, and analyze linear and nonlinear subspaces, from the original Eigenfaces technique to the recently introduced Bayesian method for probabilistic similarity analysis. We also discuss comparative experimental evaluation of some of these techniques as well as practical issues related to the application of subspace methods for varying pose, illumination, and expression.


International Conference on Machine Learning | 2006

Generalized spectral bounds for sparse LDA

Baback Moghaddam; Yair Weiss; Shai Avidan

We present a discrete spectral framework for the sparse or cardinality-constrained solution of a generalized Rayleigh quotient. This NP-hard combinatorial optimization problem is central to supervised learning tasks such as sparse LDA, feature selection and relevance ranking for classification. We derive a new generalized form of the Inclusion Principle for variational eigenvalue bounds, leading to exact and optimal sparse linear discriminants using branch-and-bound search. An efficient greedy (approximate) technique is also presented. The generalization performance of our sparse LDA algorithms is demonstrated with real-world UCI ML benchmarks and compared to a leading SVM-based gene selection algorithm for cancer classification.
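The greedy (approximate) variant can be sketched as forward selection of the feature subset maximizing the generalized Rayleigh quotient restricted to the chosen coordinates (illustrative code on toy data; the paper's exact method is branch-and-bound with spectral bounds):

```python
import numpy as np

def max_rayleigh(A, B, subset):
    """Largest generalized eigenvalue of (A, B) restricted to `subset`."""
    idx = np.array(subset)
    A_s = A[np.ix_(idx, idx)]
    B_s = B[np.ix_(idx, idx)]
    # Whiten with the Cholesky factor of B_s, then use a symmetric solver.
    L = np.linalg.cholesky(B_s)
    Li = np.linalg.inv(L)
    return np.linalg.eigvalsh(Li @ A_s @ Li.T).max()

def greedy_sparse_lda(A, B, k):
    """Forward-select k features maximizing the generalized Rayleigh quotient."""
    selected, remaining = [], list(range(A.shape[0]))
    for _ in range(k):
        best = max(remaining, key=lambda j: max_rayleigh(A, B, selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return sorted(selected)

# Toy two-class problem: only features 0 and 1 carry class signal.
rng = np.random.default_rng(2)
n, d = 200, 6
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, 0] += 3.0 * y
X[:, 1] -= 2.0 * y
mu0, mu1 = X[y == 0].mean(0), X[y == 1].mean(0)
A = np.outer(mu1 - mu0, mu1 - mu0)    # between-class scatter
B = np.cov(X.T) + 1e-6 * np.eye(d)    # regularized total scatter

chosen = greedy_sparse_lda(A, B, k=2)
```

The greedy search recovers the two discriminative features here; the paper's branch-and-bound additionally certifies optimality via the generalized Inclusion Principle bounds.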


International Journal of Computer Vision | 2004

Visualization and User-Modeling for Browsing Personal Photo Libraries

Baback Moghaddam; Qi Tian; Chia Shen; Thomas S. Huang

We present a user-centric system for visualization and layout for content-based image retrieval. Image features (visual and/or semantic) are used to display retrievals as thumbnails in a 2-D spatial layout or “configuration” which conveys all pair-wise mutual similarities. A graphical optimization technique is used to provide maximally uncluttered and informative layouts. Moreover, a novel subspace feature weighting technique can be used to modify 2-D layouts in a variety of context-dependent ways. An efficient computational technique for subspace weighting and re-estimation leads to a simple user-modeling framework whereby the system can learn to display query results based on layout examples (or relevance feedback) provided by the user. The resulting retrieval, browsing and visualization can adapt to the user's (time-varying) notions of content, context and preferences in style and interactive navigation. Monte Carlo simulations with machine-generated layouts as well as pilot user studies have demonstrated the ability of this framework to model or “mimic” users, by automatically generating layouts according to their preferences.
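A 2-D layout that conveys pairwise similarities can be sketched with classical multidimensional scaling (a standard stand-in for illustration, not the paper's own graphical optimization):

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed points in `dim` dimensions from a squared-distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    B = -0.5 * J @ D @ J                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]    # top eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy "image features": 10 items in a 5-D feature space.
rng = np.random.default_rng(3)
feats = rng.normal(size=(10, 5))
D = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)  # squared distances

layout = classical_mds(D, dim=2)   # thumbnail positions for display
```

With `dim` equal to the true feature dimension the embedding reproduces the pairwise distances exactly; with `dim=2` it gives the best low-rank approximation, which is what a similarity-preserving thumbnail layout needs.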


Human Factors in Computing Systems | 2001

Personal digital historian: user interface design

Chia Shen; Baback Moghaddam; Paul A. Beardsley; Ryan Scott Bardsley

Desktop computers are not designed for multi-person face-to-face conversation in a social setting. We describe the design of a novel user interface for multi-user interactive informal storytelling. Our design is guided by principles of experience sharing, the disappearing computer, visual navigation, and implicit query formulation.


International Conference on Multimedia and Expo | 2003

Adaptive visual tracking and recognition using particle filters

Shaohua Kevin Zhou; Rama Chellappa; Baback Moghaddam

This paper presents an improved method for simultaneous tracking and recognition of human faces from video, where a time series model is used to resolve the uncertainties in tracking and recognition. The improvements mainly arise from three aspects: (i) modeling the inter-frame appearance changes within the video sequence using an adaptive appearance model and an adaptive-velocity motion model; (ii) modeling the appearance changes between the video frames and gallery images by constructing intra- and extra-personal spaces; and (iii) utilization of the fact that the gallery images are in frontal views. By embedding them in a particle filter, we are able to achieve a stabilized tracker and an accurate recognizer when confronted by pose and illumination variations.


Eurographics Symposium on Rendering Techniques | 2005

Estimation of 3D faces and illumination from single photographs using a bilinear illumination model

Jinho Lee; Raghu Machiraju; Baback Moghaddam; Hanspeter Pfister

3D face modeling is still one of the biggest challenges in computer graphics. In this paper we present a novel framework that acquires the 3D shape, texture, pose and illumination of a face from a single photograph. Additionally, we show how we can recreate a face under varying illumination conditions, essentially relighting it. Using a custom-built face scanning system, we have collected 3D face scans and light reflection images of a large and diverse group of human subjects. We derive a morphable face model for 3D face shapes and accompanying textures by transforming the data into a linear vector sub-space. The acquired images of faces under variable illumination are then used to derive a bilinear illumination model that spans 3D face shape and illumination variations. Using both models we, in turn, propose a novel fitting framework that estimates the parameters of the morphable model given a single photograph. Our framework can deal with complex face reflectance and lighting environments in an efficient and robust manner. In the results section of our paper, we compare our method to existing ones and demonstrate its efficacy in reconstructing 3D face models when provided with a single photograph. We also provide several examples of facial relighting (on 2D images) by performing adequate decomposition of the estimated illumination using our framework.


NATO ASI Series F: Computer and System Sciences | 1998

Beyond Linear Eigenspaces: Bayesian Matching for Face Recognition

Baback Moghaddam; Alex Pentland

We propose a novel technique for direct visual matching of images for the purposes of face recognition and database search. Specifically, we argue in favor of a probabilistic measure of similarity, in contrast to simpler methods which are based on standard Euclidean L2 norms (e.g., template matching) or subspace-restricted norms (e.g., eigenspace matching). The proposed similarity measure is based on a Bayesian analysis of image differences: we model two mutually exclusive classes of variation between two facial images: intra-personal (variations in appearance of the same individual, due to different expressions or lighting) and extra-personal (variations in appearance due to a difference in identity). The high-dimensional probability density functions for each respective class are then obtained from training data using an eigenspace density estimation technique and subsequently used to compute a similarity measure based on the a posteriori probability of membership in the intra-personal class, which is used to rank matches in the database. The performance advantage of this probabilistic matching technique over standard Euclidean nearest-neighbor eigenspace matching is demonstrated using results from ARPA’s 1996 “FERET” face recognition competition, in which this algorithm was found to be the top performer.
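The Bayesian similarity measure can be sketched in one dimension: model intra-personal differences as a narrow zero-mean Gaussian and extra-personal differences as a broad one, then apply Bayes' rule to an observed image difference (a toy stand-in for the paper's high-dimensional eigenspace density estimates; the standard deviations below are made up for illustration):

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a 1-D Gaussian."""
    z = (x - mean) / std
    return math.exp(-0.5 * z * z) / (std * math.sqrt(2.0 * math.pi))

def similarity(delta, std_intra=1.0, std_extra=5.0, prior_intra=0.5):
    """Posterior probability that an image difference is intra-personal.

    Narrow Gaussian = same identity (expression/lighting variation);
    broad Gaussian = different identity.
    """
    p_i = gaussian_pdf(delta, 0.0, std_intra) * prior_intra
    p_e = gaussian_pdf(delta, 0.0, std_extra) * (1.0 - prior_intra)
    return p_i / (p_i + p_e)

small_diff = similarity(0.5)   # judged likely "same person"
large_diff = similarity(10.0)  # judged likely "different person"
```

Ranking database matches by this posterior, rather than by raw Euclidean distance, is exactly what distinguishes the method from nearest-neighbor eigenspace matching.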


Analysis and Modeling of Faces and Gestures | 2005

A practical face relighting method for directional lighting normalization

Kuang-Chih Lee; Baback Moghaddam

We propose a simplified and practical computational technique for estimating directional lighting in uncalibrated images of faces in frontal pose. We show that this inverse problem can be solved using constrained least-squares and class-specific priors on shape and reflectance. For simplicity, the principal illuminant is modeled as a mixture of Lambertian and ambient components. By using a generic 3D face shape and an average 2D albedo we can efficiently compute the directional lighting with surprising accuracy (in real-time and with or without shadows). We then use our lighting direction estimate in a forward rendering step to “relight” arbitrarily-lit input faces to a canonical (diffuse) form as needed for illumination-invariant face verification. Experimental results with the Yale Face Database B as well as real access-control datasets illustrate the advantages over existing pre-processing techniques such as a linear ramp (facet) model commonly used for lighting normalization.
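The least-squares core of such lighting estimation can be sketched for a shadow-free Lambertian-plus-ambient model: with per-pixel albedo a_i and normals n_i, intensities I_i ≈ a_i (n_i·l) + c are linear in the light vector l and ambient term c (illustrative code on synthetic data, without the paper's class-specific priors or constraints):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic face patch: random unit normals and a mid-gray albedo map.
n_pixels = 400
normals = rng.normal(size=(n_pixels, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = 0.5 + 0.1 * rng.random(n_pixels)

# Forward render with a known directional light plus ambient term
# (no shadow clamping, so the model stays exactly linear here).
true_light = np.array([0.3, -0.2, 0.9])
true_ambient = 0.1
intensities = albedo * (normals @ true_light) + true_ambient

# Inverse problem: stack [albedo * n | 1] and solve for (l, c) by
# ordinary least squares.
design = np.column_stack([albedo[:, None] * normals, np.ones(n_pixels)])
solution, *_ = np.linalg.lstsq(design, intensities, rcond=None)
est_light, est_ambient = solution[:3], solution[3]
```

With the estimated lighting in hand, a forward rendering pass can divide it back out to produce the canonical diffuse "relit" face used for illumination-invariant verification.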

Collaboration


Dive into Baback Moghaddam's collaborations.

Top Co-Authors

Jinho Lee (Mitsubishi Electric Research Laboratories)
Qi Tian (University of Texas at San Antonio)
Gregory Shakhnarovich (Toyota Technological Institute at Chicago)
Paul A. Beardsley (Mitsubishi Electric Research Laboratories)
Yair Weiss (Mitsubishi Electric Research Laboratories)