Binglong Xie
Princeton University
Publications
Featured research published by Binglong Xie.
Image and Vision Computing | 2004
Binglong Xie; Visvanathan Ramesh; Terrance E. Boult
Effective change detection under dynamic illumination conditions is an active research topic. Most research has concentrated on adaptive statistical representations of the background scene's appearance. There is limited work that develops statistical models for background representation by taking into account an explicit model of the camera response function, the camera noise model, and illumination priors. Assuming a monotone but non-linear camera response function, a Phong shading model for the surface material, and locally constant but spatially varying illumination, we show that the sign of the difference between two pixel measurements is maintained across global illumination changes. We use this result, along with a statistical model for the camera noise, to develop a change detection algorithm that deals with sudden changes in illumination. The algorithm's performance is evaluated through simulations and on real data.
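A minimal sketch of the sign-based test described in the abstract. The noise threshold `sigma`, the choice of right-hand-neighbour pixel pairs, and the function name are assumptions for illustration, not the paper's exact algorithm.

```python
# Illustrative sign-preserving change test: under a monotone camera response
# and a global illumination change, the sign of a reliable pixel difference
# should not flip; a flip is treated as evidence of change.
import numpy as np

def sign_change_mask(bg, cur, sigma=3.0):
    """Flag pixels whose ordering with a right-hand neighbour flips between
    the background model `bg` and the current frame `cur`, ignoring
    differences within the assumed camera-noise band +/- sigma."""
    bg = bg.astype(np.float64)
    cur = cur.astype(np.float64)
    # Differences between each pixel and its right-hand neighbour.
    d_bg = bg[:, 1:] - bg[:, :-1]
    d_cur = cur[:, 1:] - cur[:, :-1]
    # Only trust differences that exceed the noise level.
    reliable = (np.abs(d_bg) > sigma) & (np.abs(d_cur) > sigma)
    flipped = np.sign(d_bg) != np.sign(d_cur)
    mask = np.zeros_like(bg, dtype=bool)
    mask[:, 1:] = reliable & flipped
    return mask
```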
Joint Pattern Recognition Symposium | 2003
Binglong Xie; Dorin Comaniciu; Visvanathan Ramesh; Markus Simon; Terrance E. Boult
Face detection using components has been shown to produce superior results due to its robustness to occlusions and to pose and illumination changes. A first level of processing is devoted to the detection of individual components, while a second level deals with the fusion of the component detectors. However, the fusion methods investigated so far neglect the uncertainties that characterize the component locations. We show that this uncertainty carries important information that, when exploited, leads to increased face localization accuracy. We discuss and compare possible solutions taking geometrical constraints into account. The efficiency and usefulness of the techniques are tested with both synthetic and real-world examples.
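A hedged sketch of uncertainty-weighted fusion of component detections, assuming each detector reports a 2-D location with a covariance matrix and a nominal offset from the face centre; the offsets and the pure covariance weighting (no geometric constraints) are illustrative simplifications.

```python
# Covariance-weighted fusion of component locations: precisely located
# components pull the face-centre estimate more strongly than uncertain ones.
import numpy as np

def fuse_components(detections):
    """detections: list of (location(2,), covariance(2,2), offset(2,)).
    Returns the fused face-centre estimate and its covariance."""
    info = np.zeros((2, 2))   # accumulated inverse covariances
    weighted = np.zeros(2)    # accumulated information-weighted votes
    for loc, cov, offset in detections:
        w = np.linalg.inv(cov)
        vote = loc - offset   # where this component says the centre is
        info += w
        weighted += w @ vote
    centre_cov = np.linalg.inv(info)
    centre = centre_cov @ weighted
    return centre, centre_cov
```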
Computational Intelligence | 2006
Binglong Xie; Terry Boult; Visvanathan Ramesh; Ying Zhu
Automatic face recognition has many application areas, and current single-camera face recognition has severe limitations when the subject is not cooperative or when there are pose changes and different illumination conditions. A face recognition system using multiple cameras overcomes these limitations. In each channel, real-time component-based face detection locates the face under moderate pose and illumination changes by fusing individual component detectors for the eyes and mouth, and the normalized face is recognized using an LDA recognizer. A reliability measure is trained using features extracted from both the face detection and recognition processes to evaluate the inherent quality of each channel's recognition. The recognition from the most reliable channel is selected as the final recognition result. The recognition rate is far better than that of any single channel, and consistently better than common classifier fusion rules.
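A minimal sketch of reliability-based channel selection as described above. The feature layout and the simple logistic reliability model (`weights`, `bias`) are assumptions, not the paper's trained measure.

```python
# Pick the camera channel whose detection/recognition features predict the
# highest reliability, and return that channel's recognition result.
import numpy as np

def select_channel(channel_features, channel_labels, weights, bias):
    """channel_features: (n_channels, n_features) features collected from
    detection and recognition in each camera channel.
    channel_labels: the identity reported by each channel."""
    scores = 1.0 / (1.0 + np.exp(-(channel_features @ weights + bias)))
    best = int(np.argmax(scores))
    return channel_labels[best], scores[best]
```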
Workshop on Applications of Computer Vision | 2007
Binglong Xie; Visvanathan Ramesh; Ying Zhu; Terrance E. Boult
Single-camera face recognition has severe limitations when the subject is not cooperative or when there are pose changes and different illumination conditions. Face recognition using multiple synchronized cameras is proposed to overcome these limitations. We introduce a reliability measure, trained from examples, to evaluate the inherent quality of each channel's recognition. The recognition from the channel predicted to be the most reliable is selected as the final recognition result. In this paper, we enhance AdaBoost to improve both the component-based face detector running in each channel and the training of the channel reliability measure. Effective features are designed to train the channel reliability measure using data from both face detection and recognition. The recognition rate is far better than that of any single channel, and consistently better than common classifier fusion rules.
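For context, a compact textbook-style discrete AdaBoost loop over decision stumps; this only illustrates the kind of boosted training the paper builds on and does not reproduce the paper's enhancements. The stump search and round count are illustrative choices.

```python
# Discrete AdaBoost with one-feature threshold stumps.
import numpy as np

def adaboost_stumps(X, y, n_rounds=50):
    """X: (n, d) features; y: (n,) labels in {-1, +1}.
    Returns a list of (feature, threshold, polarity, alpha) weak learners."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    learners = []
    for _ in range(n_rounds):
        best = None
        # Pick the stump with the lowest weighted error.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        # Re-weight: misclassified samples get heavier, correct ones lighter.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        learners.append((j, thr, pol, alpha))
    return learners
```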
Computer Vision and Pattern Recognition | 2004
Xiang Sean Zhou; Dorin Comaniciu; Binglong Xie; R. Cruceanu; Alok Gupta
Uncertainty handling plays an important role in shape tracking. We have recently shown that the fusion of measurement information with system dynamics and shape priors greatly improves tracking performance for very noisy images such as ultrasound sequences [22]. Nevertheless, that approach required user initialization of the tracking process. This paper solves the automatic initialization problem by performing boosted shape detection as a generic measurement process and integrating it into our tracking framework. We show how to propagate the local detection uncertainties of multiple shape candidates during shape alignment, fusion with the predicted shape prior, and fusion with subspace constraints. As a result, we treat all sources of information in a unified way and derive the posterior shape model as the shape with the maximum likelihood. Our framework is applied to the automatic tracking of the endocardium in ultrasound sequences of the human heart. Reliable detection and robust tracking results are achieved when compared to existing approaches and inter-expert variations.
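An illustrative sketch of fusing a detected shape with a predicted prior by covariance weighting (an information-filter style fusion), one ingredient of the unified treatment described above. The stacked-landmark representation and full-rank covariances are assumptions for the sketch.

```python
# Fuse a measured shape with a predicted prior: the result is pulled toward
# whichever source is more certain along each direction.
import numpy as np

def fuse_shape(measured, cov_meas, prior, cov_prior):
    """measured, prior: (2k,) stacked landmark vectors with covariances."""
    info_m = np.linalg.inv(cov_meas)
    info_p = np.linalg.inv(cov_prior)
    cov_fused = np.linalg.inv(info_m + info_p)
    fused = cov_fused @ (info_m @ measured + info_p @ prior)
    return fused, cov_fused
```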
Archive | 2005
Ying Zhu; Binglong Xie; Visvanathan Ramesh; Martin Pellkofer; Thorsten Kohler
Archive | 2004
Dorin Comaniciu; Thorsten Kohler; Binglong Xie; Ying Zhu
Archive | 2004
Dorin Comaniciu; Binglong Xie
Archive | 2002
Binglong Xie; Visvanathan Ramesh; Terrance E. Boult
Archive | 2004
Binglong Xie; Dorin Comaniciu; Visvanathan Ramesh; Markus Simon