Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Peter Henry Tu is active.

Publication


Featured research published by Peter Henry Tu.


International Conference on Computer Vision | 2007

Shape and Appearance Context Modeling

Xiaogang Wang; Gianfranco Doretto; Thomas B. Sebastian; Jens Rittscher; Peter Henry Tu

In this work we develop appearance models for computing the similarity between image regions containing deformable objects of a given class in real time. We introduce the concept of shape and appearance context. The main idea is to model the spatial distribution of the appearance relative to each of the object parts. Estimating the model entails computing occurrence matrices. We introduce a generalization of the integral image and integral histogram frameworks, and prove that it can be used to dramatically speed up occurrence computation. We demonstrate the ability of this framework to recognize an individual walking across a network of cameras. Finally, we show that the proposed approach outperforms several other methods.
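
The occurrence computation above hinges on an integral-histogram style representation. As a rough sketch of that general idea (not the authors' implementation; the label quantization and region layout below are assumptions), the following Python snippet precomputes an integral histogram so that the histogram of any rectangular region can be read off in constant time per bin:

```python
import numpy as np

def integral_histogram(labels, num_bins):
    """Integral histogram: ih[y, x, b] = count of label b in labels[:y+1, :x+1]."""
    h, w = labels.shape
    onehot = np.zeros((h, w, num_bins), dtype=np.int64)
    onehot[np.arange(h)[:, None], np.arange(w)[None, :], labels] = 1
    # Cumulative sums along both image axes give the integral representation.
    return onehot.cumsum(axis=0).cumsum(axis=1)

def region_histogram(ih, top, left, bottom, right):
    """Histogram of the rectangle rows [top, bottom) x cols [left, right), O(num_bins)."""
    hist = ih[bottom - 1, right - 1].copy()
    if top > 0:
        hist -= ih[top - 1, right - 1]
    if left > 0:
        hist -= ih[bottom - 1, left - 1]
    if top > 0 and left > 0:
        hist += ih[top - 1, left - 1]
    return hist

# Example: quantized appearance labels on a small grid with 3 bins.
labels = np.random.randint(0, 3, size=(4, 6))
ih = integral_histogram(labels, num_bins=3)
print(region_histogram(ih, 1, 2, 4, 6))  # same counts as np.bincount of that sub-region
```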


Ambient Intelligence | 2011

Appearance-based person reidentification in camera networks: problem overview and current approaches

Gianfranco Doretto; Thomas B. Sebastian; Peter Henry Tu; Jens Rittscher

Recent advances in visual tracking methods allow following a given object or individual in the presence of significant clutter or partial occlusions, in a single camera view or a set of overlapping views. The question of when person detections in different views or at different time instants can be linked to the same individual is of fundamental importance to video analysis in large-scale camera networks. This is the person reidentification problem. The paper focuses on algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Methods that effectively address the challenges associated with changes in illumination, pose, and clothing appearance variation are discussed. More specifically, the development of a set of models that capture the overall appearance of an individual and can effectively be used for information retrieval is reviewed. Some of them provide a holistic description of a person, while others require an intermediate step in which specific body parts are identified. Some are designed to extract appearance features over time, and others can also operate reliably on single images. The paper discusses algorithms for speeding up the computation of signatures. In particular, it describes very fast procedures for computing co-occurrence matrices by leveraging a generalization of the integral representation of images. The algorithms are deployed and tested in a camera network comprising three cameras with non-overlapping fields of view, where a multi-camera multi-target tracker links the tracks in different cameras by reidentifying the same people appearing in different views.
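
As a loose, self-contained illustration of appearance-based matching in general (not any specific model reviewed in the paper; the histogram signature and Bhattacharyya distance below are assumptions), a probe detection can be ranked against a gallery of previously seen people as follows:

```python
import numpy as np

def color_signature(image_rgb, bins=8):
    """Joint RGB histogram used as a crude whole-body appearance signature."""
    hist, _ = np.histogramdd(image_rgb.reshape(-1, 3),
                             bins=(bins, bins, bins), range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-9)

def rank_gallery(probe_sig, gallery_sigs):
    """Rank gallery identities by Bhattacharyya distance to the probe signature."""
    dists = {pid: -np.log(np.sum(np.sqrt(probe_sig * sig)) + 1e-12)
             for pid, sig in gallery_sigs.items()}
    return sorted(dists, key=dists.get)

# Random crops stand in for person detections seen in different cameras.
rng = np.random.default_rng(0)
gallery = {pid: color_signature(rng.integers(0, 256, (128, 48, 3)))
           for pid in ("A", "B", "C")}
probe = color_signature(rng.integers(0, 256, (128, 48, 3)))
print(rank_gallery(probe, gallery))  # identities ordered best match first
```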


Computer Vision and Pattern Recognition | 2005

Simultaneous estimation of segmentation and shape

Jens Rittscher; Peter Henry Tu; Nils Krahnstoever

The main focus of this work is the integration of feature grouping and model-based segmentation into one consistent framework. The algorithm is based on partitioning a given set of image features using a likelihood function that is parameterized on the shape and location of potential individuals in the scene. Using a variant of the EM formulation, maximum likelihood estimates of both the model parameters and the grouping are obtained simultaneously. The resulting algorithm performs global optimization and generates accurate results even when decisions cannot be made using local context alone. An important feature of the algorithm is that the number of people in the scene is not modeled explicitly. As a result, no prior knowledge or assumed distributions are required. The approach is shown to be robust with respect to partial occlusion, shadows, and clutter, and can operate over a large range of challenging view angles, including those that are parallel to the ground plane. Comparisons with existing crowd segmentation systems are made, and the utility of coupling crowd segmentation with a temporal tracking system is demonstrated.
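
A minimal sketch of this kind of EM alternation, under the simplifying assumption that foreground feature points are softly assigned to person hypotheses modeled as isotropic 2D Gaussians (the paper's shape model and likelihood are richer than this):

```python
import numpy as np

def em_crowd_segmentation(points, init_centers, sigma=0.5, iters=20):
    """Alternate soft assignment of feature points to person hypotheses (E-step)
    with re-estimation of each hypothesis location (M-step)."""
    centers = np.asarray(init_centers, dtype=float)
    for _ in range(iters):
        # E-step: responsibility of each hypothesis for each point.
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        resp = np.exp(-d2 / (2 * sigma ** 2))
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        # M-step: move each hypothesis to the weighted mean of its points.
        centers = (resp.T @ points) / (resp.sum(axis=0)[:, None] + 1e-12)
    return centers, resp.argmax(axis=1)

# Two overlapping clusters of foreground points, two person hypotheses.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([0, 0], 0.4, (50, 2)),
                 rng.normal([1.2, 0], 0.4, (50, 2))])
centers, labels = em_crowd_segmentation(pts, init_centers=[[-0.5, 0.0], [2.0, 0.0]])
print(np.round(centers, 2))  # converges near the two cluster centers
```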


Advanced Video and Signal Based Surveillance | 2005

Detecting and counting people in surveillance applications

Xiaoming Liu; Peter Henry Tu; Jens Rittscher; A. G. Amitha Perera; Nils Krahnstoever

A number of surveillance scenarios require the detection and tracking of people. Although person detection and counting systems are commercially available today, there is a need for further research to address the challenges of real-world scenarios. The focus of this work is the segmentation of groups of people into individuals. One relevant application of this algorithm is people counting. Experiments show that the presented approach leads to robust people counts.


International Conference on Biometrics: Theory, Applications and Systems | 2008

Stand-off Iris Recognition System

Frederick Wilson Wheeler; A. G. Amitha Perera; Gil Abramovich; Bing Yu; Peter Henry Tu

The iris is a highly accurate biometric identifier. However, widespread adoption is hindered by the difficulty of capturing high-quality iris images with minimal user cooperation. This paper describes a first-generation prototype iris identification system designed for stand-off cooperative access control. The system identifies individuals who stand in front of it and face it, taking 3.2 seconds on average. Subjects within a capture zone are imaged with a calibrated pair of wide-field-of-view surveillance cameras. A subject is located in three dimensions using face detection and triangulation. A zoomed near-infrared iris camera on a pan-tilt platform is then targeted to the subject. The focal distance of the iris camera lens is automatically adjusted based on the subject distance. Integrated with the iris camera on the pan-tilt platform is a near-infrared illuminator composed of an array of directed LEDs. Video frames from the iris camera are processed to detect and segment the iris, generate a template, and then identify the subject.
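
The face-detection-plus-triangulation localization step can be illustrated with standard linear (DLT) two-view triangulation; the projection matrices below are toy placeholders, not the prototype's calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.
    P1, P2 are 3x4 projection matrices; x1, x2 are pixel coordinates (u, v)."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean coordinates

# Toy example: two cameras with identity intrinsics, one metre apart along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 3.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))  # recovers ~ [0.2, 0.1, 3.0]
```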


International Conference on Biometrics: Theory, Applications and Systems | 2010

Face recognition at a distance system for surveillance applications

Frederick Wilson Wheeler; Richard L. Weiss; Peter Henry Tu

Face recognition at a distance is concerned with the automatic recognition of non-cooperative subjects over a wide area. This remote biometric collection and identification problem can be addressed with an active vision system in which people are detected and tracked with wide-field-of-view cameras, while near-field-of-view pan-tilt-zoom cameras are automatically controlled to collect high-resolution facial images. We have developed a prototype active-vision face recognition at a distance system that we call the Biometric Surveillance System. In this paper we review related prior work, describe the design and operation of this system, and provide experimental performance results. The system features predictive subject targeting and an adaptive target selection mechanism based on the current actions and history of each tracked subject, to help ensure that facial images are captured for all subjects in view. Experimental tests designed to simulate operation in large transportation hubs show that the system can track subjects and capture facial images at distances of 25–50 m and can recognize them using a commercial face recognition system at a distance of 15–20 m.
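
One small piece of such an active-vision pipeline is steering the zoomed camera at a tracked subject. The sketch below converts a 3D target position into pan and tilt angles, assuming the pan-tilt unit sits at a known position with Z up, pan measured about the vertical axis and tilt from the horizontal plane; this is an illustration, not the system's control code:

```python
import numpy as np

def pan_tilt_to_target(camera_pos, target_pos):
    """Pan/tilt angles (radians) that point a pan-tilt-zoom camera at a 3D target.
    Assumes Z is up, pan measured about Z from the +X axis, tilt from horizontal."""
    d = np.asarray(target_pos, dtype=float) - np.asarray(camera_pos, dtype=float)
    pan = np.arctan2(d[1], d[0])
    tilt = np.arctan2(d[2], np.hypot(d[0], d[1]))
    return pan, tilt

# Subject's head at roughly eye level, ~27 m away; camera mounted 3 m high.
pan, tilt = pan_tilt_to_target(camera_pos=(0.0, 0.0, 3.0), target_pos=(25.0, 10.0, 1.7))
print(np.degrees(pan), np.degrees(tilt))  # about 21.8 deg pan, -2.8 deg tilt
```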


European Conference on Computer Vision | 2008

Unified Crowd Segmentation

Peter Henry Tu; Thomas B. Sebastian; Gianfranco Doretto; Nils Krahnstoever; Jens Rittscher; Ting Yu

This paper presents a unified approach to crowd segmentation. A global solution is generated using an Expectation Maximization framework. Initially, a head and shoulder detector is used to nominate an exhaustive set of person locations and these form the person hypotheses. The image is then partitioned into a grid of small patches which are each assigned to one of the person hypotheses. A key idea of this paper is that while whole body monolithic person detectors can fail due to occlusion, a partial response to such a detector can be used to evaluate the likelihood of a single patch being assigned to a hypothesis. This captures local appearance information without having to learn specific appearance models. The likelihood of a pair of patches being assigned to a person hypothesis is evaluated based on low level image features such as uniform motion fields and color constancy. During the E-step, the single and pairwise likelihoods are used to compute a globally optimal set of assignments of patches to hypotheses. In the M-step, parameters which enforce global consistency of assignments are estimated. This can be viewed as a form of occlusion reasoning. The final assignment of patches to hypotheses constitutes a segmentation of the crowd. The resulting system provides a global solution that does not require background modeling and is robust with respect to clutter and partial occlusion.
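
As a loose illustration of the pairwise cue described above (not the paper's formulation; the mean-feature comparison and Gaussian bandwidths are assumptions), two neighboring patches can be scored for belonging to the same hypothesis using color constancy and motion similarity:

```python
import numpy as np

def pairwise_consistency(patch_a, patch_b, flow_a, flow_b,
                         color_bw=20.0, motion_bw=1.5):
    """Score in (0, 1]: high when two neighboring patches have similar mean color
    (color constancy) and similar mean optical flow (uniform motion)."""
    color_diff = np.linalg.norm(patch_a.reshape(-1, 3).mean(0) -
                                patch_b.reshape(-1, 3).mean(0))
    motion_diff = np.linalg.norm(flow_a.reshape(-1, 2).mean(0) -
                                 flow_b.reshape(-1, 2).mean(0))
    return float(np.exp(-0.5 * (color_diff / color_bw) ** 2) *
                 np.exp(-0.5 * (motion_diff / motion_bw) ** 2))

# Patches with similar color and motion score near 1; dissimilar ones near 0.
rng = np.random.default_rng(2)
a, b = rng.normal(120, 5, (8, 8, 3)), rng.normal(122, 5, (8, 8, 3))
fa, fb = rng.normal(1.0, 0.1, (8, 8, 2)), rng.normal(1.1, 0.1, (8, 8, 2))
print(pairwise_consistency(a, b, fa, fb))
```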


IEEE Signal Processing Magazine | 2013

Video surveillance: past, present, and now the future [DSP Forum]

Fatih Porikli; Francois Bremond; Shiloh L. Dockstader; James M. Ferryman; Anthony Hoogs; Brian C Lovell; Sharath Pankanti; Bernhard Rinner; Peter Henry Tu; Péter L. Venetianer

Video surveillance is a part of our daily life, even though we may not necessarily realize it. We might be monitored on the street, on highways, at ATMs, in public transportation vehicles, inside private and public buildings, in elevators, in front of our television screens, next to our baby's cribs, and at any spot where one can set up a camera.


International Conference on Biometrics: Theory, Applications and Systems | 2007

Multi-Frame Super-Resolution for Face Recognition

Frederick Wilson Wheeler; Xiaoming Liu; Peter Henry Tu

Face recognition at a distance is a challenging and important law-enforcement surveillance problem, with low image resolution and blur contributing to the difficulties. We present a method for combining a sequence of video frames of a subject in order to create a super-resolved image of the face with increased resolution and reduced blur. An Active Appearance Model (AAM) of face shape and appearance is fit to the face in each video frame. The AAM fit provides the registration used by a robust image super-resolution algorithm that iteratively solves for a higher resolution face image from a set of video frames. This process is tested with real-world outdoor video using a PTZ camera and a commercial face recognition engine. Both improved visual perception and automatic face recognition performance are observed in these experiments.
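
A stripped-down sketch of the iterative super-resolution step (plain iterative back-projection over already-registered low-resolution frames, assuming an integer-factor average-pooling imaging model; the AAM-based registration and robust error terms from the paper are omitted):

```python
import numpy as np

def downsample(img, f):
    """Toy imaging model: average-pooling downsample by an integer factor f."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample(img, f):
    """Nearest-neighbor upsample by an integer factor f."""
    return np.kron(img, np.ones((f, f)))

def iterative_back_projection(low_res_frames, factor, iters=30, step=0.5):
    """Estimate a high-resolution image whose simulated low-resolution projections
    match the observed frames (assumed already registered to a common grid)."""
    hr = upsample(np.mean(low_res_frames, axis=0), factor)
    for _ in range(iters):
        for lr in low_res_frames:
            error = lr - downsample(hr, factor)       # residual in low-resolution space
            hr = hr + step * upsample(error, factor)  # back-project the residual
    return hr

# Toy example: several noisy low-resolution observations of the same scene.
rng = np.random.default_rng(3)
truth = rng.random((32, 32))
frames = [downsample(truth, 2) + rng.normal(0, 0.01, (16, 16)) for _ in range(5)]
print(np.abs(iterative_back_projection(frames, 2) - truth).mean())
```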


Pattern Recognition Letters | 2013

Learning person-specific models for facial expression and action unit recognition

Jixu Chen; Xiaoming Liu; Peter Henry Tu; Amy Victoria Aragones

A key assumption of traditional machine learning approaches is that the test data are drawn from the same distribution as the training data. However, this assumption does not hold in many real-world scenarios. For example, in facial expression recognition, the appearance of an expression may vary significantly across people. As a result, previous work has shown that learning from adequate person-specific data can improve expression recognition performance over learning from generic data. However, person-specific data is typically very sparse in real-world applications due to the difficulties of data collection and labeling, and learning from sparse data may suffer from serious over-fitting. In this paper, we propose to learn a person-specific model through transfer learning. By transferring informative knowledge from other people, we can learn an accurate model for a new subject with only a small amount of person-specific data. We conduct extensive experiments to compare different person-specific models for facial expression and action unit (AU) recognition, and show that transfer learning significantly improves recognition performance with a small amount of training data.
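
One simple way to realize this kind of transfer (an assumed illustration, not the authors' method) is to fit a person-specific linear classifier while regularizing its weights toward a generic model trained on other subjects:

```python
import numpy as np

def fit_person_specific(X, y, generic_w, lam=1.0, lr=0.1, iters=500):
    """Logistic regression pulled toward a generic model: minimize the logistic
    loss plus lam * ||w - generic_w||^2, a simple transfer-learning prior."""
    w = generic_w.copy()
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y) + 2 * lam * (w - generic_w)
        w -= lr * grad
    return w

# Toy example: a few person-specific samples adapt the generic expression model.
rng = np.random.default_rng(4)
generic_w = rng.normal(size=5)
X_person = rng.normal(size=(20, 5))                          # sparse person-specific data
y_person = (X_person @ (generic_w + 0.5) > 0).astype(float)  # slightly shifted concept
w_person = fit_person_specific(X_person, y_person, generic_w)
print(np.round(w_person - generic_w, 2))  # small, data-driven adjustment toward the subject
```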

Collaboration


Dive into Peter Henry Tu's collaborations.

Top Co-Authors


Xiaoming Liu

Michigan State University
