
Publication


Featured research published by Philip Ogunbona.


IEEE Transactions on Image Processing | 2001

Signal analysis using a multiresolution form of the singular value decomposition

Ramakrishna Kakarala; Philip Ogunbona

This paper proposes a multiresolution form of the singular value decomposition (SVD) and shows how it may be used for signal analysis and approximation. It is well known that the SVD has optimal decorrelation and subrank approximation properties. The multiresolution form of the SVD proposed here retains those properties and, moreover, has linear computational complexity. By using the multiresolution SVD, the following important characteristics of a signal may be measured at each of several levels of resolution: isotropy, sphericity of principal components, self-similarity under scaling, and resolution of mean-squared error into meaningful components. Theoretical calculations are provided for simple statistical models to show what might be expected. Results are provided with real images to show the usefulness of the multiresolution SVD.
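
A minimal sketch of the level-by-level idea, for a 1-D signal: at each level the samples are blocked into a 2 x (N/2) matrix, an SVD is applied, and the dominant component is passed on as the coarse approximation while the remainder is kept as detail. The blocking scheme, the function name msvd_1d and the bookkeeping are assumptions made for illustration; this is not the authors' exact construction.

    import numpy as np

    def msvd_1d(signal, levels):
        """Multiresolution-SVD sketch for a 1-D signal (illustrative only)."""
        approx = np.asarray(signal, dtype=float)
        details = []
        for _ in range(levels):
            n = len(approx) - (len(approx) % 2)   # drop a trailing odd sample
            blocks = approx[:n].reshape(-1, 2).T  # 2 x (n/2) data matrix
            u, s, vt = np.linalg.svd(blocks, full_matrices=False)
            coeffs = np.diag(s) @ vt              # decorrelated components
            approx = coeffs[0]                    # dominant component feeds the next level
            details.append((u, coeffs[1]))        # basis + detail component
        return approx, details

    # Each level halves the data, so the total work is linear in the signal length.
    x = np.linspace(0, 1, 64) + 0.05 * np.random.randn(64)
    coarse, det = msvd_1d(x, levels=3)
    print(coarse.shape, [d[1].shape for d in det])   # (8,) [(32,), (16,), (8,)]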


acm workshop on multimedia and security | 2001

On multiple watermarking

Nicholas Paul Sheppard; Reihaneh Safavi-Naini; Philip Ogunbona

Mintzer and Braudaway [6] once asked: If one watermark is good, are more better? In this paper, we discuss some techniques for embedding multiple watermarks into a single multimedia object and report some observations on implementations of these techniques.
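
As a hedged illustration of one of the simplest multiple-watermarking strategies (re-watermarking by repeated additive embedding), the sketch below sums independent pseudo-random sequences into a host signal and detects each one by correlation. The keys, the strength alpha and the detection threshold are arbitrary choices for the example, not parameters from the paper.

    import numpy as np

    def embed_watermarks(host, keys, alpha=0.1):
        """Add one pseudo-random spread-spectrum watermark per key."""
        marked = host.astype(float).copy()
        for key in keys:
            rng = np.random.default_rng(key)
            marked += alpha * rng.standard_normal(host.shape)
        return marked

    def detect_watermark(signal, key, alpha=0.1, threshold=0.5):
        """Normalised correlation detector; score ~1 if the mark is present, ~0 otherwise."""
        w = np.random.default_rng(key).standard_normal(signal.shape)
        score = float(signal @ w) / (alpha * w.size)
        return score > threshold, score

    host = np.random.default_rng(0).standard_normal(4096)
    marked = embed_watermarks(host, keys=[11, 22, 33])
    for k in (11, 22, 44):                       # key 44 was never embedded
        print(k, detect_watermark(marked, k))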


IEEE Transactions on Human-Machine Systems | 2016

Action Recognition From Depth Maps Using Deep Convolutional Neural Networks

Pichao Wang; Wanqing Li; Zhimin Gao; Jing Zhang; Chang Tang; Philip Ogunbona

This paper proposes a new method, i.e., weighted hierarchical depth motion maps (WHDMM) + three-channel deep convolutional neural networks (3ConvNets), for human action recognition from depth maps on small training datasets. Three strategies are developed to leverage the capability of ConvNets in mining discriminative features for recognition. First, different viewpoints are mimicked by rotating the 3-D points of the captured depth maps. This not only synthesizes more data, but also makes the trained ConvNets view-tolerant. Second, WHDMMs at several temporal scales are constructed to encode the spatiotemporal motion patterns of actions into 2-D spatial structures. The 2-D spatial structures are further enhanced for recognition by converting the WHDMMs into pseudocolor images. Finally, the three ConvNets are initialized with the models obtained from ImageNet and fine-tuned independently on the color-coded WHDMMs constructed in three orthogonal planes. The proposed algorithm was evaluated on the MSRAction3D, MSRAction3DExt, UTKinect-Action, and MSRDailyActivity3D datasets using cross-subject protocols. In addition, the method was evaluated on the large dataset constructed from the above datasets. The proposed method achieved 2-9% better results on most of the individual datasets. Furthermore, the proposed method maintained its performance on the large dataset, whereas the performance of existing methods decreased with the increased number of actions.
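
The core data structure, a depth motion map accumulated over time and then pseudo-colored into a three-channel image, can be sketched as follows. The decay weighting, the color ramp and the parameter names are illustrative assumptions; the published WHDMM and its color coding differ in the details.

    import numpy as np

    def depth_motion_map(frames, temporal_scale=1, decay=0.98):
        """Accumulate |frame-to-frame| depth differences at one temporal scale."""
        seq = frames[::temporal_scale].astype(float)
        dmm = np.zeros(seq.shape[1:])
        for t in range(1, len(seq)):
            dmm = decay * dmm + np.abs(seq[t] - seq[t - 1])  # older motion is down-weighted
        return dmm

    def pseudocolor(dmm):
        """Map a normalised motion map to a 3-channel pseudo-RGB image."""
        norm = (dmm - dmm.min()) / (dmm.max() - dmm.min() + 1e-8)
        return np.stack([norm, np.sin(np.pi * norm), 1.0 - norm], axis=-1)

    frames = np.random.rand(30, 240, 320)            # stand-in for a depth-map clip
    maps = [pseudocolor(depth_motion_map(frames, s)) for s in (1, 2, 4)]
    print([m.shape for m in maps])                   # three temporal scales, each (240, 320, 3)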


Pattern Recognition | 2016

RGB-D-based action recognition datasets

Jing Zhang; Wanqing Li; Philip Ogunbona; Pichao Wang; Chang Tang

Human action recognition from RGB-D (Red, Green, Blue and Depth) data has attracted increasing attention since the first work reported in 2010. Over this period, many benchmark datasets have been created to facilitate the development and evaluation of new algorithms. This raises the question of which dataset to select and how to use it in providing a fair and objective comparative evaluation against state-of-the-art methods. To address this issue, this paper provides a comprehensive review of the most commonly used action recognition related RGB-D video datasets, including 27 single-view datasets, 10 multi-view datasets, and 7 multi-person datasets. The detailed information and analysis of these datasets is a useful resource in guiding insightful selection of datasets for future research. In addition, the issues with current algorithm evaluation vis-a-vis limitations of the available datasets and evaluation protocols are also highlighted, resulting in a number of recommendations for collection of new datasets and use of evaluation protocols.

Highlights:
- A detailed review and in-depth analysis of 44 publicly available RGB-D-based action datasets.
- Recommendations on the selection of datasets and evaluation protocols for use in future research.
- Identification of some limitations of these datasets and evaluation protocols.
- Recommendations on future creation of datasets and use of evaluation protocols.


Pattern Recognition Letters | 2008

An efficient iterative algorithm for image thresholding

Liju Dong; Ge Yu; Philip Ogunbona; Wanqing Li

Thresholding is a commonly used technique for image segmentation. This paper presents an efficient iterative algorithm for finding optimal thresholds that minimize a weighted sum-of-squared-error objective function. We have proven that the proposed algorithm is mathematically equivalent to the well-known Otsu method, but requires much less computation. The computational complexity of the proposed algorithm is linear with respect to the number of thresholds to be calculated, as against the exponential complexity of Otsu's algorithm. Experimental results have verified the theoretical analysis and the efficiency of the proposed algorithm.
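
The criterion being minimised (a weighted within-class sum of squared errors over the grey-level histogram) can be illustrated with a simple Lloyd-style iteration: assign grey levels to the nearest class mean, then move each threshold to the midpoint between consecutive means. This is only a sketch of the general idea, not the authors' recursion, and the helper name iterative_thresholds is hypothetical.

    import numpy as np

    def iterative_thresholds(image, k=2, iters=100):
        """Lloyd-style multi-level thresholding on the grey-level histogram."""
        hist, _ = np.histogram(image, bins=256, range=(0, 256))
        levels = np.arange(256)
        thr = np.linspace(0, 256, k + 1)[1:-1]        # evenly spaced initial thresholds
        for _ in range(iters):
            edges = np.concatenate(([0.0], thr, [256.0]))
            means = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                sel = (levels >= lo) & (levels < hi)
                w = hist[sel]
                means.append((levels[sel] * w).sum() / max(w.sum(), 1))
            new_thr = np.array([(a + b) / 2 for a, b in zip(means[:-1], means[1:])])
            if np.allclose(new_thr, thr):
                break
            thr = new_thr
        return thr

    # Three-mode test image: class means near 60, 130 and 200.
    rng = np.random.default_rng(0)
    img = np.clip(rng.normal([[60], [130], [200]], 15, size=(3, 10000)).ravel(), 0, 255)
    print(iterative_thresholds(img, k=3))             # roughly [95, 165]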


Pattern Recognition | 2013

A novel shape-based non-redundant local binary pattern descriptor for object detection

Duc Thanh Nguyen; Philip Ogunbona; Wanqing Li

Motivated by the discriminative ability of shape information and local patterns in object recognition, this paper proposes a window-based object descriptor that integrates both cues. In particular, contour templates representing object shape are used to derive a set of so-called key points at which local appearance features are extracted. These key points are located using an improved template matching method that utilises both spatial and orientation information in a simple and effective way. At each of the extracted key points, a new local appearance feature, namely non-redundant local binary pattern (NR-LBP), is computed. An object descriptor is formed by concatenating the NR-LBP features from all key points to encode the shape as well as the appearance of the object. The proposed descriptor was extensively tested in the task of detecting humans from static images on the commonly used MIT and INRIA datasets. The experimental results have shown that the proposed descriptor can effectively describe non-rigid objects with high articulation and improve the detection rate compared to other state-of-the-art object descriptors.


international conference on image processing | 2010

Object detection using Non-Redundant Local Binary Patterns

Duc Thanh Nguyen; Zhimin Zong; Philip Ogunbona; Wanqing Li

The Local Binary Pattern (LBP) descriptor has been successfully used in various object recognition tasks because of its discriminative property and computational simplicity. In this paper a variant of the LBP, referred to as the Non-Redundant Local Binary Pattern (NRLBP), is introduced and its application to object detection is demonstrated. Compared with the original LBP descriptor, the NRLBP has the advantage of providing a more compact description of an object's appearance. Furthermore, the NRLBP is more discriminative since it reflects the relative contrast between the background and foreground. The proposed descriptor is employed to encode human appearance in a human detection task. Experimental results show that the NRLBP is robust and adaptive to changes in the background and foreground, and also outperforms the original LBP in the detection task.
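
The non-redundant idea is simply that a pattern and its bitwise complement are mapped to the same label, halving the code space and making the descriptor insensitive to which side of an edge is brighter. Below is a minimal sketch under assumed conventions (8 neighbours, strict '>' comparison, float-valued images); the detector built on top of it in the paper is of course more than this.

    import numpy as np

    def lbp_codes(gray):
        """8-neighbour LBP codes for the interior pixels of a grey image."""
        c = gray[1:-1, 1:-1]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros(c.shape, dtype=np.int32)
        for bit, (dy, dx) in enumerate(offsets):
            nbr = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
            code |= (nbr > c).astype(np.int32) << bit
        return code

    def nrlbp_histogram(gray, bins=128):
        """Non-redundant LBP: a code and its complement share one of 128 labels."""
        codes = lbp_codes(gray)
        nr = np.minimum(codes, 255 - codes)
        hist, _ = np.histogram(nr, bins=bins, range=(0, bins))
        return hist / max(hist.sum(), 1)

    patch = np.random.rand(64, 64)
    inverted = 1.0 - patch                        # swap foreground/background contrast
    print(np.allclose(nrlbp_histogram(patch), nrlbp_histogram(inverted)))   # True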


IEEE Transactions on Image Processing | 1997

On the computational complexity of the LBG and PNN algorithms

Jamshid Shanbehzadeh; Philip Ogunbona

This correspondence compares the computational complexity of the pair-wise nearest neighbor (PNN) and Linde-Buzo-Gray (LBG) algorithms by deriving analytical expressions for their computational times. It is shown that for a practical codebook size and training vector sequence, the LBG algorithm is indeed more computationally efficient than the PNN algorithm.
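
To make the complexity comparison concrete, here is the textbook LBG (generalised Lloyd) loop: each pass assigns every training vector to its nearest codeword and re-centres the codewords, so the per-iteration cost is O(N*k*d) for N training vectors of dimension d. The stopping rule and initialisation below are arbitrary choices for the sketch; no attempt is made to reproduce the correspondence's analytical expressions.

    import numpy as np

    def lbg_codebook(data, k=16, iters=20, eps=1e-4):
        """Sketch of the LBG (generalised Lloyd) codebook design loop."""
        rng = np.random.default_rng(0)
        codebook = data[rng.choice(len(data), size=k, replace=False)]
        prev = np.inf
        for _ in range(iters):
            # (N, k) distance matrix: the dominant per-iteration cost.
            d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            nearest = d2.argmin(axis=1)
            distortion = d2[np.arange(len(data)), nearest].mean()
            for j in range(k):
                members = data[nearest == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
            if prev - distortion < eps * max(prev, 1e-12):
                break
            prev = distortion
        return codebook

    train = np.random.default_rng(1).random((2000, 8))   # 2000 8-D training vectors
    print(lbg_codebook(train, k=16).shape)               # (16, 8)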


acm multimedia | 2015

ConvNets-Based Action Recognition from Depth Maps through Virtual Cameras and Pseudocoloring

Pichao Wang; Wanqing Li; Zhimin Gao; Chang Tang; Jing Zhang; Philip Ogunbona

In this paper, we propose to adopt ConvNets to recognize human actions from depth maps on relatively small datasets based on Depth Motion Maps (DMMs). In particular, three strategies are developed to effectively leverage the capability of ConvNets in mining discriminative features for recognition. Firstly, different viewpoints are mimicked by rotating virtual cameras around the subject represented by the 3D points of the captured depth maps. This not only synthesizes more data from the captured ones, but also makes the trained ConvNets view-tolerant. Secondly, DMMs are constructed and further enhanced for recognition by encoding them into pseudo-RGB images, turning the spatio-temporal motion patterns into textures and edges. Lastly, through transfer learning from models originally trained on ImageNet for image classification, the three ConvNets are trained independently on the color-coded DMMs constructed in three orthogonal planes. The proposed algorithm was extensively evaluated on the MSRAction3D, MSRAction3DExt and UTKinect-Action datasets and achieved state-of-the-art results on these datasets.
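
The first strategy, synthesising new viewpoints, amounts to back-projecting each depth map to a 3-D point cloud and rotating it about the subject before re-projection. A small sketch of that step is below; the intrinsics (fx, fy), the choice of the vertical axis, and the omission of the final re-projection are all simplifying assumptions, not the paper's camera model.

    import numpy as np

    def depth_to_points(depth, fx=285.0, fy=285.0):
        """Back-project a depth map (metres) to camera-frame 3-D points."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - w / 2) * depth / fx
        y = (v - h / 2) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    def rotate_viewpoint(points, angle_deg):
        """Rotate the cloud about the vertical axis, mimicking a moved virtual camera."""
        a = np.deg2rad(angle_deg)
        rot_y = np.array([[np.cos(a), 0.0, np.sin(a)],
                          [0.0,       1.0, 0.0      ],
                          [-np.sin(a), 0.0, np.cos(a)]])
        centre = points.mean(axis=0)                  # rotate about the subject centre
        return (points - centre) @ rot_y.T + centre

    depth = 2.0 + 0.1 * np.random.rand(240, 320)      # stand-in depth frame
    cloud = depth_to_points(depth)
    synthetic_views = [rotate_viewpoint(cloud, a) for a in (-30, -15, 15, 30)]
    print(len(synthetic_views), synthetic_views[0].shape)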


Pattern Recognition | 2016

Human detection from images and videos

Duc Thanh Nguyen; Wanqing Li; Philip Ogunbona

The problem of human detection is to automatically locate people in an image or video sequence; it has been actively researched in the past decade. This paper aims to provide a comprehensive survey of the recent developments and challenges in human detection. Different from previous surveys, this survey is organised around the thread of human object descriptors. This approach has advantages in providing a thorough analysis of the state-of-the-art human detection methods and a guide to the selection of appropriate methods in practical applications. In addition, challenges such as occlusion and real-time human detection are analysed. The commonly used evaluation resources for human detection methods, such as datasets, tools, and performance measures, are presented, and future research directions are highlighted.

Highlights:
- A review of the state of the art in human detection.
- The review is organised around the thread of human object descriptors.
- Challenges such as occlusion and real-time human detection are analysed.
- The commonly used datasets, tools, and performance measures are presented.
- Open issues and future research directions are highlighted.
- A guide to the selection of detection methods for applications is provided.

Collaboration


Dive into Philip Ogunbona's collaborations.

Top Co-Authors

Wanqing Li (University of Wollongong)
Ce Zhan (University of Wollongong)
Pichao Wang (University of Wollongong)
Jing Zhang (University of Wollongong)
Golshah Naghdy (University of Wollongong)
Lei Wang (Information Technology University)
Chang Tang (China University of Geosciences)