Publication


Featured research published by Ying Shan.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation

Bogdan Matei; Ying Shan; Harpreet S. Sawhney; Yi Tan; Rakesh Kumar; Daniel Huber; Martial Hebert

We propose a new method for rapid 3D object indexing that combines feature-based methods with coarse alignment-based matching techniques. Our approach achieves a sublinear complexity on the number of models, while maintaining a high degree of performance for real 3D sensed data acquired in largely uncontrolled settings. The key component of our method is to first index surface descriptors computed at salient locations from the scene into the whole model database using locality sensitive hashing (LSH), a probabilistic approximate nearest neighbor method. Progressively complex geometric constraints are subsequently enforced to further prune the initial candidates and eliminate false correspondences due to inaccuracies in the surface descriptors and the errors of the LSH algorithm. The indexed models are selected based on the MAP rule using posterior probabilities of the models estimated in the joint 3D-signature space. Experiments with real 3D data employing a large database of vehicles, most of them very similar in shape, containing 1,000,000 features from more than 365 models demonstrate a high degree of performance in the presence of occlusion and obscuration, unmodeled vehicle interiors and part articulations, with an average processing time between 50 and 100 seconds per query.
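The indexing step above rests on locality sensitive hashing. As a rough illustration only (not the authors' implementation; class and parameter names are hypothetical), a random-hyperplane LSH index over descriptor vectors might look like this:

```python
import numpy as np

class HyperplaneLSH:
    """Minimal random-hyperplane LSH index for approximate NN lookup."""

    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}

    def _key(self, x):
        # Sign pattern of projections onto the random hyperplanes -> bucket key.
        return tuple((self.planes @ x > 0).astype(int))

    def add(self, x, label):
        self.buckets.setdefault(self._key(x), []).append((x, label))

    def query(self, x):
        # Candidates are descriptors sharing the query's bucket;
        # rank them by exact distance.
        cands = self.buckets.get(self._key(x), [])
        if not cands:
            return None
        return min(cands, key=lambda c: np.linalg.norm(c[0] - x))[1]
```

In the paper's setting the descriptors from all models are indexed jointly, and the bucket candidates returned here would then be pruned by the geometric constraints described above.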


International Conference on Computer Vision | 2005

Vehicle identification between non-overlapping cameras without direct feature matching

Ying Shan; Harpreet S. Sawhney; Rakesh Kumar

We propose a novel method for identifying road vehicles between two non-overlapping cameras. The problem is formulated as a same-different classification problem: the probability of two vehicle images from two distinct cameras being from the same vehicle or from different vehicles. The key idea is to compute the probability without matching the two vehicle images directly, a process vulnerable to drastic appearance and aspect changes. We represent each vehicle image as an embedding amongst representative exemplars of vehicles within the same camera. The embedding is computed as a vector each of whose components is a nonmetric distance from the vehicle to an exemplar. The nonmetric distances are computed using robust matching of oriented edge images. A set of truthed training examples of same-different vehicle pairings across the two cameras is used to learn a classifier that encodes the probability distributions. A pair of the embeddings representing two vehicles across two cameras is then used to compute the same-different probability. In order for the vehicle exemplars to be representative for both cameras, we also propose a method for joint selection of corresponding exemplars using the training data. Experiments on observations of over 400 vehicles under drastically different illumination and camera conditions demonstrate promising results.
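The exemplar-embedding idea can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper's method: the robust oriented-edge distance is replaced by a caller-supplied placeholder, and the learned same-different classifier is stood in for by a simple cosine similarity.

```python
import numpy as np

def embed(image_feat, exemplar_feats, dist):
    """Embed an observation as the vector of its distances to exemplars."""
    return np.array([dist(image_feat, e) for e in exemplar_feats])

def same_different_score(emb_a, emb_b):
    # Placeholder: cosine similarity of the two embeddings.
    # The paper instead learns a classifier from truthed
    # same/different pairs across the two cameras.
    na, nb = np.linalg.norm(emb_a), np.linalg.norm(emb_b)
    return float(emb_a @ emb_b / (na * nb))
```

The point of the construction is that the two images are never compared directly; only their low-dimensional embeddings (computed against exemplars within each camera) are matched.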


Computer Vision and Pattern Recognition | 2005

Unsupervised learning of discriminative edge measures for vehicle matching between non-overlapping cameras

Ying Shan; Harpreet S. Sawhney; Rakesh Kumar

This paper proposes a method for matching road vehicles between two non-overlapping cameras. The matching problem is formulated as a same-different classification problem: the probability of two observations from two distinct cameras being from the same vehicle or from different vehicles. We employ a measurement vector consisting of three independent edge-based measures and their associated robust measures computed from a pair of aligned vehicle edge maps. The weight of each match measure in the final decision is determined by an unsupervised learning process so that the same and different classes are optimally separated in the combined measurement space. The robustness of the match measures and the use of discriminant analysis in the classification ensure that the proposed method performs better than existing edge-based approaches, especially in the presence of missing/false edges caused by shadows and different illumination conditions, and systematic misalignment caused by different camera configurations. Extensive experiments based on real data of over 200 vehicles at different times of day demonstrate promising results.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Shapeme histogram projection and matching for partial object recognition

Ying Shan; Harpreet S. Sawhney; Bogdan Matei; Rakesh Kumar

Histograms of shape signature or prototypical shapes, called shapemes, have been used effectively in previous work for 2D/3D shape matching and recognition. We extend the idea of shapeme histogram to recognize partially observed query objects from a database of complete model objects. We propose representing each model object as a collection of shapeme histograms and match the query histogram to this representation in two steps: 1) compute a constrained projection of the query histogram onto the subspace spanned by all the shapeme histograms of the model and 2) compute a match measure between the query histogram and the projection. The first step is formulated as a constrained optimization problem that is solved by a sampling algorithm. The second step is formulated under a Bayesian framework, where an implicit feature selection process is conducted to improve the discrimination capability of shapeme histograms. Results of matching partially viewed range objects with a 243 model database demonstrate better performance than the original shapeme histogram matching algorithm and other approaches.
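The constrained projection in step 1 can be approximated by sampling. The sketch below, a minimal illustration and not the paper's sampling algorithm, draws candidate convex-combination weights from a Dirichlet distribution and keeps the combination of model histograms closest to the query:

```python
import numpy as np

def project_histogram(query, model_hists, n_samples=2000, seed=0):
    """Approximate the best convex combination of a model's shapeme
    histograms matching the query, via random sampling of the weights.
    (The paper's constraints and sampler may differ; this is a sketch.)"""
    rng = np.random.default_rng(seed)
    H = np.asarray(model_hists)
    best_w, best_d = None, np.inf
    for _ in range(n_samples):
        w = rng.dirichlet(np.ones(len(H)))  # weights on the simplex
        d = np.linalg.norm(w @ H - query)
        if d < best_d:
            best_w, best_d = w, d
    return best_w @ H, best_d
```

A query histogram from a partial view is then scored against its projection (step 2) rather than against any single complete-model histogram, which is what makes partial observation tolerable.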


Computer Vision and Pattern Recognition | 2004

Linear model hashing and batch RANSAC for rapid and accurate object recognition

Ying Shan; Bogdan Matei; Harpreet S. Sawhney; Rakesh Kumar; Daniel Huber; Martial Hebert

This paper proposes a joint feature-based model indexing and geometric-constraint-based alignment pipeline for efficient and accurate recognition of 3D objects from a large model database. Traditional approaches either first prune the model database using indexing without geometric alignment, or directly perform recognition-based alignment. Indexing-based pruning without geometric constraints can miss the correct models under imperfections such as noise, clutter and obscurations. Alignment-based verification methods have to linearly verify each model in the database and hence do not scale up. The proposed techniques use spin images as semi-local shape descriptors and locality-sensitive hashing (LSH) to index into a joint spin image database for all the models. The indexed models in the pruned set are further pruned using progressively complex geometric constraints. A simple geometric configuration of multiple spin images, for instance a doublet, is first used to check for geometric consistency. Subsequently, full Euclidean geometric constraints are applied using RANSAC-based techniques on the pruned spin images and the models to verify specific object identity. As a result, the combined indexing and geometric alignment pipeline is able to focus on matching the most promising models, and generates far fewer pose hypotheses while maintaining the same level of performance as sequential alignment-based recognition. Furthermore, compared to geometric indexing techniques like geometric hashing, the construction time and storage complexity of the proposed technique remain linear in the number of features rather than higher-order polynomial. Experiments on a database of 56 3D models show promising results.
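The final RANSAC verification stage can be illustrated with a minimal sketch: sample minimal sets of correspondences, fit a rigid transform (here via the standard Kabsch/SVD solution), and score models by their inlier fraction. This is a generic illustration under simplifying assumptions (clean one-to-one correspondences, hypothetical function names), not the paper's batch RANSAC.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def ransac_verify(P, Q, iters=50, tol=0.1, seed=0):
    """Fraction of correspondences consistent with the best rigid fit."""
    rng = np.random.default_rng(seed)
    best = 0
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)  # minimal sample
        R, t = kabsch(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        best = max(best, int((err < tol).sum()))
    return best / len(P)
```

In the pipeline above this expensive step only runs on the few models surviving LSH indexing and the doublet consistency check, which is what keeps the overall cost sublinear in practice.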


Computer Vision and Pattern Recognition | 2005

Vehicle fingerprinting for reacquisition & tracking in videos

Yanlin Guo; Steven C. Hsu; Ying Shan; Harpreet S. Sawhney; Rakesh Kumar

Visual recognition of objects through multiple observations is an important component of object tracking. We address the problem of vehicle matching when multiple observations of a vehicle are separated in time such that frames of observations are not contiguous, thus prohibiting the use of standard frame-to-frame data association. We employ features extracted over a sequence during one time interval as a vehicle fingerprint that is used to compute the likelihood that two or more sequence observations are from the same or different vehicles. The challenges of change in pose, aspect and appearances across two disparate observations are handled by combining feature-based quasi-rigid alignment with flexible matching between two or more sequences. The current work uses the domain of vehicle tracking from aerial platforms where typically both the imaging platform and the vehicles are moving and the number of pixels on the object are limited to fairly low resolutions. Extensive evaluation with respect to ground truth is reported in the paper.


Computer Vision and Pattern Recognition | 2007

PEET: Prototype Embedding and Embedding Transition for Matching Vehicles over Disparate Viewpoints

Yanlin Guo; Ying Shan; Harpreet S. Sawhney; Rakesh Kumar

This paper presents a novel framework, prototype embedding and embedding transition (PEET), for matching objects, especially vehicles, that undergo drastic pose, appearance, and even modality changes. The problem of matching objects seen under drastic variations is reduced to matching embeddings of object appearances instead of matching the object images directly. An object appearance is first embedded in the space of a representative set of model prototypes (prototype embedding (PE)). Objects captured at disparate temporal and spatial sites are embedded in the space of prototypes that are rendered with the pose of the cameras at the respective sites. Low dimensional embedding vectors are subsequently matched. A significant feature of our approach is that no mapping function is needed to compute the distance between embedding vectors extracted from objects viewed from disparate pose and appearance changes, instead, an embedding transition (ET) scheme is utilized to implicitly realize the complex and non-linear mapping with high accuracy. The heterogeneous nature of matching between high-resolution and low-resolution image objects in PEET is discussed, and an unsupervised learning scheme based on the exploitation of the heterogeneous nature is developed to improve the overall matching performance of mixed resolution objects. The proposed approach has been applied to vehicular object classification and query application, and the extensive experimental results demonstrate the efficacy and versatility of the PEET framework.


Computer Vision and Pattern Recognition | 2006

Learning Exemplar-Based Categorization for the Detection of Multi-View Multi-Pose Objects

Ying Shan; Feng Han; Harpreet S. Sawhney; Rakesh Kumar

This paper proposes a novel approach for multi-view multi-pose object detection using discriminative shape-based exemplars. The key idea underlying this method is motivated by numerous previous observations that manually clustering multi-view multi-pose training data into different categories and then combining the separately trained two-class classifiers greatly improved detection performance. A novel computational framework is proposed to unify the different processes of categorization, training an individual classifier for each intra-class category, and training a strong classifier combining the individual classifiers. The individual processes employ a single objective function that is optimized using two nested AdaBoost loops: the outer AdaBoost loop selects discriminative exemplars and the inner AdaBoost loop selects discriminative features on the selected exemplars. The proposed approach replaces the manual, time-consuming process of exemplar selection and addresses the problem of labeling ambiguity inherent in this process. Our approach also fully complies with the standard AdaBoost-based object detection framework in terms of real-time implementation. Experiments on multi-view multi-pose people and vehicle data demonstrate the efficacy of the proposed approach.
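The outer loop's exemplar selection can be sketched as a plain AdaBoost whose weak learners are distance-to-exemplar threshold rules. This toy version (all names hypothetical, and with the inner feature-boosting loop omitted) is only meant to show the structure, not the paper's detector:

```python
import numpy as np

def adaboost_exemplars(X, y, exemplars, rounds=5):
    """Toy outer AdaBoost loop: each round picks the exemplar whose
    distance-threshold rule best classifies the reweighted data.
    (The paper's inner loop would further boost features per exemplar.)"""
    n = len(X)
    w = np.full(n, 1.0 / n)
    picked = []
    for _ in range(rounds):
        best = None
        for j, e in enumerate(exemplars):
            d = np.linalg.norm(X - e, axis=1)
            for thr in d:                      # candidate thresholds
                for sign in (1, -1):           # rule polarity
                    pred = np.where(sign * (d - thr) <= 0, 1, -1)
                    err = w[pred != y].sum()   # weighted error
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)         # reweight hard examples
        w /= w.sum()
        picked.append((j, thr, sign, alpha))
    return picked

def predict(picked, exemplars, x):
    s = 0.0
    for j, thr, sign, alpha in picked:
        d = np.linalg.norm(x - exemplars[j])
        s += alpha * (1 if sign * (d - thr) <= 0 else -1)
    return 1 if s >= 0 else -1
```

Boosting over exemplars in this way is what lets the framework discover the intra-class categories automatically instead of relying on manual clustering of poses and views.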


European Conference on Computer Vision | 2004

Partial Object Matching with Shapeme Histograms

Ying Shan; Harpreet S. Sawhney; Bogdan Matei; Rakesh Kumar

Histograms of shape signatures, or prototypical shapes called shapemes, have been used effectively in previous work for 2D/3D shape matching and recognition. We extend the idea of shapeme histograms to recognize partially observed query objects from a database of complete model objects. We propose to represent each model object as a collection of shapeme histograms, and match the query histogram to this representation in two steps: (i) compute a constrained projection of the query histogram onto the subspace spanned by all the shapeme histograms of the model, and (ii) compute a match measure between the query histogram and the projection. The first step is formulated as a constrained optimization problem that is solved by a sampling algorithm. The second step is formulated under a Bayesian framework where an implicit feature selection process is conducted to improve the discrimination capability of shapeme histograms. Results of matching partially viewed range objects with a 243 model database demonstrate better performance than the original shapeme histogram matching algorithm and other approaches.


International Journal of Pattern Recognition and Artificial Intelligence | 2005

Clustering Multiple Image Sequences with a Sequence-to-Sequence Similarity Measure

Ying Shan; Harpreet S. Sawhney; Art Pope

We propose a novel similarity measure of two image sequences based on shapeme histograms. The idea of shapeme histogram has been used for single image/texture recognition, but is used here to solve...

Collaboration


Dive into Ying Shan's collaborations.

Top Co-Authors

Daniel Huber
Carnegie Mellon University

Martial Hebert
Carnegie Mellon University