Christopher Hollitt
Victoria University of Wellington
Publications
Featured research published by Christopher Hollitt.
Publications of the Astronomical Society of Australia | 2012
Christopher Hollitt; M. Johnston-Hollitt
While automatic detection of point sources in astronomical images has experienced a great degree of success, less effort has been directed towards the detection of extended and low-surface-brightness features. At present, existing telescopes still rely on human expertise to reduce the raw data to usable images and then to analyse the images for non-pointlike objects. However, the next generation of radio telescopes will generate unprecedented volumes of data making manual data reduction and object extraction infeasible. Without developing new methods of automatic detection for extended and diffuse objects such as supernova remnants, bent-tailed galaxies, radio relics and halos, a wealth of scientifically important results will not be uncovered. In this paper we explore the response of the Circle Hough Transform to a representative sample of different extended circular or arc-like astronomical objects. We also examine the response of the Circle Hough Transform to input images containing noise alone and inputs including point sources.
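The accumulation step at the heart of the Circle Hough Transform studied in this paper can be sketched as follows. This is a minimal, fixed-radius voting variant in Python for illustration only; the function name and the single-radius simplification are assumptions, and the paper itself considers the transform's response over extended and arc-like structures rather than this exact implementation.

```python
import numpy as np

def circle_hough(edge_image, radius):
    """Vote for circle centres of one known radius.

    Each edge pixel votes for every centre position that would place it
    on a circle of the given radius; peaks in the accumulator mark
    likely circle centres.
    """
    rows, cols = edge_image.shape
    accumulator = np.zeros_like(edge_image, dtype=float)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    ys, xs = np.nonzero(edge_image)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < rows) & (cx >= 0) & (cx < cols)
        # np.add.at accumulates correctly even when indices repeat.
        np.add.at(accumulator, (cy[ok], cx[ok]), 1.0)
    return accumulator
```

In practice the radius is unknown, so the vote is repeated over a range of radii, producing the three-dimensional accumulator whose behaviour under noise and point sources the paper examines.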
Image and Vision Computing New Zealand | 2009
Christopher Hollitt
The Hough transform finds wide application in machine and robot vision. The family of Hough transforms allows a variety of geometric objects to be located and described in an image. However, the classical Hough transform is computationally complex when targeting complex objects. This renders the Hough transform unsuitable for many real-time applications. We present a new algorithm for calculating the circle Hough transform by recasting it as a convolution. This new approach allows the transform to be calculated using the Fast Fourier transform, yielding an algorithm with lower computational complexity than the traditional approach. Although the convolution approach is applicable to the same range of different targets as the traditional Hough transform, we limit ourselves to a consideration of the circle Hough transform in this treatment.
Machine Vision and Applications | 2013
Christopher Hollitt
The Hough transform is a well-established family of algorithms for locating and describing geometric figures in an image. However, the computational complexity of the algorithm used to calculate the transform is high when used to target complex objects. As a result, the use of the Hough transform to find objects more complex than lines is uncommon in real-time applications. We describe a convolution method for calculating the Hough transform for finding circles of arbitrary radius. The algorithm operates by performing a three-dimensional convolution of the input image with an appropriate Hough kernel. The use of the fast Fourier transform to calculate the convolution results in a Hough transform algorithm with reduced computational complexity and thus increased speed. Edge detection and other convolution-based image processing operations can be incorporated as part of the transform, which removes the need to perform them with a separate pre-processing or post-processing step. As the Discrete Fourier Transform implements circular convolution rather than linear convolution, consideration must be given to padding the input image before forming the Hough transform.
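The convolution formulation described above, including the zero padding needed because the DFT implements circular rather than linear convolution, can be sketched for a single radius as follows. This is an illustrative reconstruction, not the paper's code: the paper stacks radii into a three-dimensional convolution and folds edge detection into the kernel, while this sketch keeps only the one-radius core.

```python
import numpy as np

def fft_circle_hough(edge_image, radius):
    """Circle Hough accumulator for one radius, computed via the FFT.

    For a fixed radius, the accumulator equals the edge image convolved
    with an annular (ring) kernel, so the FFT can evaluate it quickly.
    """
    rows, cols = edge_image.shape
    k = 2 * radius + 1
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    ring = (np.abs(np.hypot(yy, xx) - radius) < 0.5).astype(float)

    # Zero-pad so the DFT's circular convolution reproduces the
    # linear convolution without wrap-around artefacts.
    pr, pc = rows + k - 1, cols + k - 1
    F = np.fft.rfft2(edge_image, s=(pr, pc))
    H = np.fft.rfft2(ring, s=(pr, pc))
    full = np.fft.irfft2(F * H, s=(pr, pc))

    # Crop the "same"-sized accumulator from the full convolution.
    return full[radius:radius + rows, radius:radius + cols]
```

Because the ring kernel is symmetric, convolution and correlation coincide, and the accumulator peaks where the image contains circles of the chosen radius.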
Pattern Recognition | 2016
Syed Saud Naqvi; Will N. Browne; Christopher Hollitt
A number of superpixel-based saliency models have recently been proposed, which segment the image into small perceptually homogeneous regions before saliency computation. Such approaches ignore important object properties, resulting in inappropriate object annotations and considerably different saliency assignment to the various regions of an object. Although previous techniques employ multi-scale saliency maps in an attempt to rectify this problem, it becomes difficult to retain the characteristics of proto-objects after the first stage of processing. We introduce matting components based saliency to address the problems of inappropriate object annotations and inappropriate saliency assignment to object regions. The matting components account for proto-object properties by employing object-aware spectral segmentation. To complement the matting component based saliency, we also employ the smallest eigenvectors of a matting Laplacian matrix. Color spatial distribution features are employed to capture global relationships at the pixel level and assist the process of matting components based saliency computation. A novel joint optimization framework is introduced to fuse the features and learn important associated parameters. The contributions of the proposed approach are two-fold. The first contribution is the introduction of proto-object-aware spectral segmentation to obtain an accurate foreground saliency. The second contribution is the joint optimization of important parameters in conjunction with learning feature importance. In contrast to superpixel-based approaches, the proposed model is able to completely annotate salient objects and assign similar saliency to various regions of the salient object. Moreover, the proposed approach shows robust and efficient performance across five challenging benchmark datasets when compared with 10 recently proposed state-of-the-art saliency detection models.
Highlights:
- A novel method for visual saliency using matting components of the matting Laplacian.
- A new approach for joint optimization of bottom-up parameters and feature importance.
- The proposed model is able to accurately annotate salient objects.
Pattern Recognition | 2016
Muhammad Iqbal; Syed Saud Naqvi; Will N. Browne; Christopher Hollitt; Mengjie Zhang
Salient object detection is the task of automatically localizing objects of interests in a scene by suppressing the background information, which facilitates various machine vision applications such as object segmentation, recognition and tracking. Combining features from different feature-modalities has been demonstrated to enhance the performance of saliency prediction algorithms and different feature combinations are often suited to different types of images. However, existing saliency learning techniques attempt to apply a single feature combination across all image types and thus lose generalization in the test phase when considering unseen images. Learning classifier systems (LCSs) are an evolutionary machine learning technique that evolve a set of rules, based on a niched genetic reproduction, which collectively solve the problem. It is hypothesized that the LCS technique has the ability to autonomously learn different feature combinations for different image types. Hence, this paper further investigates the application of LCS for learning image dependent feature fusion strategies for the task of salient object detection. The obtained results show that the proposed method outperforms, through evolving generalized rules to compute saliency maps, the individual feature based methods and seven combinatorial techniques in detecting salient objects from three well known benchmark datasets of various types and difficulty levels.
Highlights:
- Incorporated a novel input instance matching scheme in a learning classifier system.
- Effectively learned different feature combinations for various types of images.
- Outperformed nine individual feature based methods and seven combinatorial methods.
- The new method preserves more details of objects than the state-of-the-art methods.
IEEE Transactions on Image Processing | 2016
Syed Saud Naqvi; Will N. Browne; Christopher Hollitt
Salient object detection is typically accomplished by combining the outputs of multiple primitive feature detectors (that output feature maps or features). The diversity of images means that different basic features are useful in different contexts, which motivates the use of complementary feature detectors in a general setting. However, naive inclusion of features that are not useful for a particular image leads to a reduction in performance. In this paper, we introduce four novel measures of feature quality and then use those measures to dynamically select useful features for the combination process. The resulting saliency is thereby individually tailored to each image. Using benchmark data sets, we demonstrate the efficacy of our dynamic feature selection system by measuring the performance enhancement over the state-of-the-art models for complementary feature selection and saliency aggregation tasks. We show that a salient object detection technique using our approach outperforms competitive models on the PASCAL VOC 2012 dataset. We find that the most pronounced performance improvements occur in challenging images with cluttered backgrounds, or containing multiple salient objects.
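The gate-then-weight structure of dynamic feature selection can be sketched as below. The quality score used here (peak-to-mean ratio of a feature map) is a stand-in of my own, since the abstract does not reproduce the paper's four quality measures; only the overall shape of the combination process is being illustrated.

```python
import numpy as np

def fuse_feature_maps(maps, quality_threshold):
    """Fuse saliency feature maps, gating out low-quality ones.

    Each map gets a quality score; maps below the threshold are
    dropped, and the survivors are combined with quality-proportional
    weights, tailoring the fusion to the image at hand.
    """
    scores = []
    for m in maps:
        mean = m.mean()
        scores.append(m.max() / mean if mean > 0 else 0.0)
    scores = np.asarray(scores)
    keep = scores > quality_threshold
    if not keep.any():
        # Fall back to uniform averaging if every map is gated out.
        keep = np.ones_like(keep)
        scores = np.ones_like(scores)
    w = scores * keep
    w = w / w.sum()
    fused = sum(wi * m for wi, m in zip(w, maps))
    return fused / (fused.max() + 1e-12)  # normalise to [0, 1]
```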
Machine Vision and Applications | 2016
Ibrahim M. H. Rahman; Christopher Hollitt; Mengjie Zhang
Target detection using attention models has recently become a major research topic in active vision. One of the major problems in this area of research is how to appropriately weight low-level features to get high quality top-down saliency maps that highlight target objects. Learning of such weights has previously been done using example images having similar feature distributions without considering contextual information. In this paper, we propose a model that we refer to as the top-down contextual weighting (TDCoW) that incorporates high-level knowledge of the gist context of images to apply appropriate weights to the features. The proposed model is tested on four challenging datasets, two for cricket balls, one for bikes and one for person detection. The obtained results show the effectiveness of contextual information for modelling the TD saliency by producing better feature weights than those produced without contextual information.
Image and Vision Computing New Zealand | 2012
Christopher Hollitt; Ahmed Sheikh Deeb
Many machine vision tasks require that there be an adequate estimate of the vertical direction. Many environments have a relative richness of vertical features, so extracting the favoured direction allows the vertical direction to be estimated. In this paper two different techniques are used to extract the orientation of image features, one based on Fourier transforms, the other on Hough transforms. The two techniques in combination are shown to provide promising estimates of the vertical in a variety of indoor environments.
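The idea of extracting a favoured feature direction can be illustrated with a simple gradient-based estimator. This is not either of the paper's two techniques (Fourier- and Hough-based), just a compact sketch of the underlying principle; note the standard axial-data trick of doubling angles so that opposite gradient directions reinforce rather than cancel.

```python
import numpy as np

def dominant_orientation(image):
    """Estimate the dominant gradient orientation of an image.

    Returns an angle in radians in (-pi/2, pi/2]; for vertically
    oriented features the gradients are horizontal, so the result
    is near zero.
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)
    # Magnitude-weighted mean of doubled angles (axial statistics),
    # then halve to map back to an orientation.
    c = np.sum(mag * np.cos(2 * theta))
    s = np.sum(mag * np.sin(2 * theta))
    return 0.5 * np.arctan2(s, c)
```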
Medical Engineering & Physics | 2012
Asim Masood; Paul D. Teal; Christopher Hollitt
The cochlear microphonic (CM) is one of the electrical potentials generated by the ear in response to audible stimuli. It is very difficult to measure the CM non-invasively because its magnitude is very small (less than 1 μV). A biomedical amplifier system with a high common mode rejection ratio (CMRR) and very large bandwidth (5 Hz-20 kHz) is presented to measure the signal. The system also uses a driven right leg circuit to increase the CMRR.
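The scale of the rejection problem can be made concrete with the standard CMRR formula. The numbers below are illustrative only, not taken from the paper: they show why a sub-microvolt signal riding on millivolt-level common-mode interference demands a very high CMRR.

```python
import math

def cmrr_db(differential_gain, common_mode_gain):
    """Common mode rejection ratio in decibels."""
    return 20.0 * math.log10(differential_gain / common_mode_gain)

# Illustrative figures: a 1 uV differential signal in the presence of
# 10 mV of common-mode mains pickup needs about 80 dB of rejection
# just to bring the interference down to the signal level.
required_db = 20.0 * math.log10(10e-3 / 1e-6)
```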
Image and Vision Computing New Zealand | 2015
A. Roberts; Will N. Browne; Christopher Hollitt
Measuring distances accurately can be costly. To overcome the complexity and cost of conventional distance measurement equipment, a modified monocular optical system is proposed that uses only commonly available hardware. A pattern is printed on the marker to improve distance measurement accuracy. Simulations of the proposed system show a possible improvement of two orders of magnitude in measurement precision.
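The baseline monocular measurement that such a system refines is the pinhole-camera relation between a marker's known physical size and its apparent size in the image. This is a generic sketch under that standard model, not the paper's method; the printed pattern in the paper serves to localise the marker more precisely, which tightens the apparent-size measurement feeding this formula.

```python
def marker_distance(focal_length_px, marker_width_m, image_width_px):
    """Pinhole estimate of the distance to a marker of known size.

    distance = f * W / w, with focal length f in pixels, physical
    marker width W in metres, and apparent width w in pixels.
    """
    return focal_length_px * marker_width_m / image_width_px
```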