Ting Shan
University of Queensland
Publication
Featured research published by Ting Shan.
international conference on pattern recognition | 2006
Ting Shan; Brian C. Lovell; Shaokang Chen
Most face recognition systems only work well under quite constrained environments. In particular, the illumination conditions, facial expressions and head pose must be tightly controlled for good recognition performance. In 2004, we proposed a new face recognition algorithm, adaptive principal component analysis (APCA) (Blanz and Vetter, 1999), which performs well against both lighting variation and expression change. But like other eigenface-derived face recognition algorithms, APCA only performs well with frontal face images. The work presented in this paper extends our previous work to also accommodate variations in head pose. Following the approach of Cootes et al., we develop a face model and a rotation model which can be used to interpret facial features and synthesize realistic frontal face images when given a single novel face image. We use a Viola-Jones based face detector to detect the face in real time and thus solve the initialization problem for our active appearance model search. Experiments show that our approach can achieve good recognition rates on face images across a wide range of head poses. Indeed, recognition rates are improved by up to a factor of 5 compared to standard PCA.
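As a rough sketch of the standard PCA (eigenface) baseline that APCA improves on: faces are projected into a low-dimensional subspace and a probe is matched to the nearest gallery face. The helper names and toy data below are illustrative, not the paper's implementation.

```python
import numpy as np

def eigenfaces(train, k):
    """Compute the mean face and top-k eigenfaces (principal components)
    of a gallery of vectorised face images, one image per row."""
    mean = train.mean(axis=0)
    centred = train - mean
    # SVD of the centred data yields the principal axes directly
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:k]

def project(x, mean, basis):
    """Project a face vector into the eigenface subspace."""
    return basis @ (x - mean)

def nearest(probe, gallery_coords):
    """Index of the gallery face closest to the probe in the subspace."""
    return int(np.argmin(np.linalg.norm(gallery_coords - probe, axis=1)))

# Toy gallery: three "faces" of 16 pixels each
rng = np.random.default_rng(0)
gallery = rng.normal(size=(3, 16))
mean, basis = eigenfaces(gallery, k=2)
coords = np.array([project(f, mean, basis) for f in gallery])

# A slightly perturbed copy of face 1 should still match face 1
probe = project(gallery[1] + 0.01 * rng.normal(size=16), mean, basis)
match = nearest(probe, coords)
```

APCA (and the pose extension above) modifies this pipeline rather than the matching idea itself, e.g. by reweighting the subspace against lighting and expression variation.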
advanced video and signal based surveillance | 2007
Ting Shan; Shaokang Chen; Conrad Sanderson; Brian C. Lovell
In recent years, the use of Intelligent Closed-Circuit Television (ICCTV) for crime prevention and detection has attracted significant attention. Existing face recognition systems require passport-quality photos to achieve good performance. However, use of CCTV images is much more problematic due to large variations in illumination, facial expressions and pose angle. In this paper we propose a pose variability compensation technique, which synthesizes realistic frontal face images from non-frontal views. It is based on modelling the face via Active Appearance Models and detecting the pose through a correlation model. The proposed technique is coupled with adaptive principal component analysis (APCA), which was previously shown to perform well in the presence of both lighting and expression variations. Experiments on the FERET dataset show up to six-fold performance improvements. Finally, in addition to implementation and scalability challenges, we discuss issues related to ongoing real-life trials in public spaces using existing surveillance hardware.
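The "correlation model" for pose is described only at a high level in the abstract. A minimal stand-in, assuming a linear relation between appearance-model parameters and pose angle fitted by least squares (synthetic data, illustrative names only):

```python
import numpy as np

# Synthetic stand-in for AAM appearance parameters and their pose angles
rng = np.random.default_rng(2)
true_w = rng.normal(size=5)            # hidden linear relation
params = rng.normal(size=(50, 5))      # 50 training faces, 5 parameters each
angles = params @ true_w               # pose angle for each training face

# Fit the linear "correlation model" by least squares
w, *_ = np.linalg.lstsq(params, angles, rcond=None)

# Estimate the pose of a face from its model parameters alone
est = float(params[0] @ w)
```

Once the pose is estimated this way, a rotation model can synthesize the corresponding frontal view before recognition is attempted.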
digital image computing: techniques and applications | 2007
Yasir Mohd Mustafah; Ting Shan; Amelia Wong Azman; Abbas Bigdeli; Brian C. Lovell
Smart cameras are becoming more popular in the intelligent surveillance systems area. Recognizing faces in a crowd in real time is a key feature which would significantly enhance intelligent surveillance systems. Using a high-resolution smart camera as a tool to extract faces that are suitable for face recognition would greatly reduce the computational load on the main processing unit. This processing unit would not be overloaded by the demands of the high data rates required for high-resolution video and could be designed solely for face recognition. In this paper we report on a multiple-stage face detection and tracking system that is designed for implementation on the NICTA high-resolution (5 MP) smart camera.
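The multiple-stage idea, cheap filtering on the camera before any expensive face detector runs, can be sketched as follows. The stage functions, block size and thresholds are hypothetical, not the NICTA camera's actual pipeline.

```python
import numpy as np

def stage1_motion(frame, background, thresh=25):
    """Cheap first stage: flag pixels that differ from the background."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def stage2_candidates(mask, block=4, min_pixels=4):
    """Second stage: keep only coarse blocks with enough changed pixels,
    so the expensive face detector runs on a few regions at most."""
    h, w = mask.shape
    hits = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            if mask[y:y + block, x:x + block].sum() >= min_pixels:
                hits.append((y, x))
    return hits

# Toy 8x8 frame with one bright moving region in the lower-right corner
background = np.zeros((8, 8), dtype=np.uint8)
frame = background.copy()
frame[4:8, 4:8] = 100
candidates = stage2_candidates(stage1_motion(frame, background))
```

Only the surviving candidate blocks would be passed downstream, which is what keeps the main processing unit free for recognition.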
International Journal of Pattern Recognition and Artificial Intelligence | 2009
Shaokang Chen; Brian C. Lovell; Ting Shan
Recognizing faces with uncontrolled pose, illumination, and expression is a challenging task due to the fact that features insensitive to one variation may be highly sensitive to the other variation…
digital image computing: techniques and applications | 2005
Shaokang Chen; Brian C. Lovell; Ting Shan
Face recognition is a very complex classification problem and most existing methods fall into two categories: generative classifiers and discriminative classifiers. Generative classifiers are optimized for description and representation, which is not optimal for classification. Discriminative classifiers may achieve lower asymptotic error but are inefficient to train and may overfit the training data. In this paper, we present a hybrid learning algorithm that combines generative and discriminative learning to find a trade-off between the two approaches. Experiments on the Asian Face Database show a reduction in classification error rate for our hybrid learning method.
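A minimal sketch of the trade-off the abstract describes, assuming a Gaussian generative score blended with a linear discriminative score via a mixing weight `alpha` (toy data, illustrative names; not the paper's actual algorithm):

```python
import numpy as np

def generative_score(x, means, var=1.0):
    """Per-class log-likelihood under an isotropic Gaussian class model
    (a simple generative classifier)."""
    return -np.sum((x - means) ** 2, axis=1) / (2 * var)

def discriminative_score(x, weights, biases):
    """Per-class scores from a linear discriminative classifier."""
    return weights @ x + biases

def hybrid_predict(x, means, weights, biases, alpha=0.5):
    """Blend the scores: alpha=1 is purely generative, alpha=0 purely
    discriminative; intermediate values trade off the two."""
    scores = (alpha * generative_score(x, means)
              + (1 - alpha) * discriminative_score(x, weights, biases))
    return int(np.argmax(scores))

# Two toy classes with means at the origin and at (3, 3)
means = np.array([[0.0, 0.0], [3.0, 3.0]])
weights = means                              # linear boundary from the means
biases = -0.5 * np.sum(means ** 2, axis=1)
label = hybrid_predict(np.array([2.5, 2.8]), means, weights, biases)
```

Tuning `alpha` on held-out data is one simple way to realise the generative/discriminative trade-off the paper argues for.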
digital image computing: techniques and applications | 2007
Shaokang Chen; Ting Shan; Brian C. Lovell
Face recognition is a very complex classification problem due to nuisance variations in different conditions. Normally no single classifier can discriminate patterns well when unpredictable variations and a huge number of classes are involved. Combining multiple classifiers can improve discriminability over the best single classifier. In this paper, we present a way to combine classifiers for the face recognition problem based on APCA classifiers. The proposed combinator generates multiple classifiers by rotating various face spaces and fuses them by applying a weighted distance measure. The combined classifier is tested on the Asian Face Database with 856 images. Experiments show a 30% reduction in the classification error rate for our combined classifier and illustrate that combining classifiers from different face spaces may perform better than classifiers based on a single face space.
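A hedged sketch of the fusion step: distances to each gallery face are computed in several subspaces (random orthonormal bases below stand in for the rotated face spaces) and combined with a weighted sum. The data, weights and function names are illustrative.

```python
import numpy as np

def fused_distance(probe, gallery, bases, weights):
    """Distance from a probe to each gallery face, fused across several
    subspaces with a weighted sum of per-subspace distances."""
    total = np.zeros(len(gallery))
    for basis, w in zip(bases, weights):
        p = basis @ probe                    # probe in this subspace
        g = gallery @ basis.T                # gallery in this subspace
        total += w * np.linalg.norm(g - p, axis=1)
    return total

rng = np.random.default_rng(1)
gallery = rng.normal(size=(4, 8))            # four enrolled faces
# Three "rotated" 4-dimensional subspaces (orthonormal rows via QR)
bases = [np.linalg.qr(rng.normal(size=(8, 8)))[0][:4] for _ in range(3)]
weights = [0.5, 0.3, 0.2]

probe = gallery[2] + 0.01 * rng.normal(size=8)   # noisy copy of face 2
best = int(np.argmin(fused_distance(probe, gallery, bases, weights)))
```

A classifier that is weak in one subspace can thus be compensated by the others, which is the intuition behind the reported error reduction.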
Archive | 2007
Ting Shan; Brian C. Lovell; Shaokang Chen
CCTV (closed-circuit TV) systems cover cities, public transport, and motorways, and the coverage is quite haphazard. It was public demand for security in public places that led to this pervasiveness. Moreover, the adoption of centralised digital video databases, largely to reduce management and monitoring costs, has also resulted in an extraordinary co-ordination of CCTV resources. It is therefore natural to consider the power and usefulness of a distributed CCTV system, which could be extended not only to cover a city, but also to include virtually all video and still cameras on the planet. Such a system should not only include public CCTV systems in rail stations and city streets, but should also have the potential to include private CCTV systems in shopping malls and office buildings. With the advent of third-generation (3G) wireless technology, there is no reason, in principle, that we could not include security camera feeds from moving public spaces such as taxis, buses, and trains. There should also be the possibility of including the largest and cheapest potential source of image and video feeds: those available from private mobile phone handsets with cameras. Many newer 3G handsets have both location services (GPS) and video capability, so the location of a phone could be determined and the video and image stream could be integrated into the views provided by the rest of the fixed sensor network. Another reason to investigate the ad-hoc integration of video and images from the mobile phone network into a planetary sensor network comes from a current project of the authors to use mobile smart phones as a low-cost secure medical triage system in the event of natural disasters.
In 2005, a phone-based medical triage system being developed jointly by a commercial partner and the University of Queensland was used by medical officers in major natural disaster areas (ABC News 2005) in the aftermath of 1) the tsunami in Banda Aceh, Indonesia, 2) Hurricane Katrina in the USA, and 3) the earthquake in Kashmir, Pakistan. During these trials the need for the delivery of person location services based on robust face recognition through the mobile phone network became apparent. For example, such a service could have proved invaluable to quickly reunite families and help determine the identities of missing persons. In major natural disasters, millions of people may be displaced and housed in temporary shelters, as was indeed the case after Hurricane Katrina devastated New Orleans. In such extreme disasters it is extremely difficult to rapidly determine who has survived and where they are physically located.
image and vision computing new zealand | 2007
Shaokang Chen; Ting Shan; Brian C. Lovell
Signal Processing Applications for Public Security and Forensics, 2007. SAFE '07. IEEE Workshop on | 2007
Abbas Bigdeli; Brian C. Lovell; Conrad Sanderson; Ting Shan; Shaokang Chen
Electronic Letters on Computer Vision and Image Analysis | 2007
Conrad Sanderson; Abbas Bigdeli; Ting Shan; Shaokang Chen; Erik Berglund; Brian C. Lovell