Publications


Featured research published by David Connah.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2005

Characterization of trichromatic color cameras by using a new multispectral imaging technique.

Vien Cheung; Jon Yngve Hardeberg; David Connah; Stephen Westland

We investigate methods for the recovery of reflectance spectra from the responses of trichromatic camera systems and the application of these methods to the problem of camera characterization. The recovery of reflectance from colorimetric data is an ill-posed problem, and a unique solution requires additional constraints. We introduce a novel method for reflectance recovery that finds the smoothest spectrum consistent with both the colorimetric data and a linear model of reflectance. Four multispectral methods were tested using data from a real trichromatic camera system. The new method gave the lowest maximum colorimetric error in terms of camera characterization with test data that were independent of the training data. However, the average colorimetric performances of the four multispectral methods were statistically indistinguishable from each other but were significantly worse than conventional methods for camera characterization such as polynomial transforms.
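
As an illustration of the core idea, here is a minimal numpy sketch of one way to pose smoothest-spectrum recovery: minimise a second-difference smoothness penalty over linear-model weights, subject to the colorimetric constraint. The operator choice, variable names, and the KKT solve are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def smoothest_reflectance(c, S, B):
    """Recover the smoothest reflectance r = B @ w consistent with the
    camera responses c = S @ r (a sketch of the idea, not the paper's
    exact method).

    c : (k,)    camera responses
    S : (k, n)  channel sensitivities over n wavelengths
    B : (n, m)  linear reflectance basis, with m > k
    """
    n = B.shape[0]
    D = np.diff(np.eye(n), n=2, axis=0)  # second-difference smoothness operator
    A = D @ B                            # smoothness acting on the weights
    C = S @ B                            # colorimetric constraint on the weights
    k, m = C.shape
    # KKT system for: minimise ||A w||^2 subject to C w = c
    K = np.block([[2 * A.T @ A, C.T],
                  [C, np.zeros((k, k))]])
    rhs = np.concatenate([np.zeros(m), c])
    w = np.linalg.lstsq(K, rhs, rcond=None)[0][:m]
    return B @ w
```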


Color Imaging Conference | 2005

Spectral recovery using polynomial models

David Connah; Jon Yngve Hardeberg

In this paper we apply polynomial models to the problem of reflectance recovery for both three-channel and multispectral imaging systems. The results suggest that the technique is superior in accuracy to a standard linear transform, and that its generalisation performance is equivalent provided that some regularisation is employed. The experiments with the multispectral system suggest that this advantage is reduced when the number of sensors is increased.
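
For concreteness, a hedged sketch of polynomial spectral recovery with ridge regularisation; the feature expansion, names, and regulariser strength are illustrative assumptions:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_expand(R, degree=2):
    # Expand camera responses R (N x k) into polynomial terms up to
    # 'degree', including a constant term.
    cols = [np.ones(len(R))]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(R.shape[1]), d):
            cols.append(np.prod(R[:, idx], axis=1))
    return np.stack(cols, axis=1)

def fit_poly_recovery(R_train, refl_train, degree=2, lam=1e-3):
    # Ridge-regularised least squares from polynomial features to
    # reflectance; the regularisation is what protects generalisation
    # performance, as the abstract notes.
    X = poly_expand(R_train, degree)
    M = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]),
                        X.T @ refl_train)
    return lambda R: poly_expand(R, degree) @ M
```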


European Conference on Computer Vision | 2014

Spectral Edge Image Fusion: Theory and Applications

David Connah; Mark S. Drew; Graham D. Finlayson

This paper describes a novel approach to the fusion of multidimensional images for colour displays. The goal of the method is to generate an output image whose gradient matches that of the input as closely as possible. It achieves this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is subsequently reintegrated to generate an output. Constraints on the output colours are provided by an initial RGB rendering to produce ‘naturalistic’ colours: we provide a theorem for projecting higher-D contrast onto the initial colour gradients such that they remain close to the original gradients whilst maintaining exact high-D contrast. The solution to this constrained optimisation is closed-form, allowing for a very simple and hence fast and efficient algorithm. Our approach is generic in that it can map any N-D image data to any M-D output, and can be used in a variety of applications using the same basic algorithm. In this paper we focus on the problem of mapping N-D inputs to 3-D colour outputs. We present results in three applications: hyperspectral remote sensing, fusion of colour and near-infrared images, and colour visualisation of MRI Diffusion-Tensor imaging.
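
The contrast-mapping step can be illustrated per pixel. The sketch below takes a Procrustes-style reading of the projection: among all output Jacobians whose structure tensor exactly matches the high-dimensional target, pick the one closest to the initial RGB Jacobian. The paper's actual closed form may differ in detail; treat this as an assumption-labelled illustration.

```python
import numpy as np

def match_contrast(J_hi, J_rgb):
    """Per-pixel sketch: J_hi is the N x 2 Jacobian of the high-D input,
    J_rgb the 3 x 2 Jacobian of an initial RGB rendering. Returns a
    3 x 2 Jacobian with exactly the target structure tensor that stays
    as close as possible (Frobenius norm) to J_rgb."""
    Z = J_hi.T @ J_hi                       # target 2x2 structure tensor
    evals, evecs = np.linalg.eigh(Z)        # symmetric square root of Z
    Z_half = evecs @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ evecs.T
    # closest J with J.T @ J == Z: an orthogonal-Procrustes solution
    U, _, Vt = np.linalg.svd(J_rgb @ Z_half, full_matrices=False)
    return U @ Vt @ Z_half
```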


International Conference on Image Processing | 2004

Comparison of linear spectral reconstruction methods for multispectral imaging

David Connah; Jon Yngve Hardeberg; Stephen Westland

This paper compares the performance of a number of linear methods for reflectance estimation from digital camera responses. Methods based upon smoothness maximisation, linear models of reflectance and least-squares fitting are compared using two simulated 6-channel camera systems. In our experiments, the smoothness methods were generally found to deliver the best performance on test data. Furthermore, they deliver equivalent performance on training data, even compared to those methods that make explicit use of a priori knowledge of the training data.
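
The least-squares baseline among the compared methods reduces to a pseudoinverse fit; a minimal sketch (names are illustrative):

```python
import numpy as np

def fit_linear_recovery(resp_train, refl_train):
    # Least-squares linear map from camera responses (N x k) to
    # reflectance spectra (N x n); this is the variant that makes
    # explicit use of a priori training data.
    M, *_ = np.linalg.lstsq(resp_train, refl_train, rcond=None)
    return lambda R: R @ M
```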


IEEE Transactions on Image Processing | 2011

Lookup-Table-Based Gradient Field Reconstruction

Graham D. Finlayson; David Connah; Mark S. Drew

In computer vision there are many applications where it is advantageous to process an image in the gradient domain and then reintegrate the gradient field: important examples include shadow removal, lightness calculation, and data fusion. A serious problem with this approach is that the reconstruction step often introduces artefacts (commonly, smoothed and smeared edges) to the recovered image. This is a result of the inherent ill-posedness of reintegrating a nonintegrable field. Artefacts can be diminished, but not removed, by using complex to highly complex reintegration techniques. Here, we present a remarkably simple (and on the face of it naive) algorithm for reconstructing gradient fields. Suppose we start with a multichannel original and from it derive a (possibly one of many) 1-D gradient field; for many applications, the derived gradient field will be nonintegrable. We propose a lookup-table-based map relating the multichannel original to a reconstructed scalar output image whose gradient best matches the target gradient field. The idea, at base, is that if we learn how to map the gradients of the multichannel original onto the desired output gradient, then, using the lookup-table (LUT) constraint, we effectively derive the mapping from the multichannel input to the desired, reintegrated, image output. While this map could take a variety of forms, here we derive the best map from the multichannel gradient as a (nonlinear) function of the input to each of the target scalar gradients. In this framework, reconstruction is a simple equation-solving exercise of low dimensionality. One obvious application of our method is the image-fusion problem, e.g., converting a color or higher-D image into grayscale. We show, through extensive experiments and complementary theoretical arguments, that our straightforward method preserves the target contrast as well as complex previous reintegration methods do, but without artefacts and at a substantially cheaper computational cost. Finally, we demonstrate the generality of the method by applying it to gradient field reconstruction in an additional area, the shading recovery problem.
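
A toy version of the LUT idea fits in a few lines: quantise the multichannel input into a colour table and solve a sparse least-squares problem so that finite differences of the table-mapped output match the target gradients. This is an illustrative simplification, not the paper's exact formulation (which derives the map as a nonlinear function of the input per target gradient).

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def lut_reconstruct(img, gx, gy, bins=16):
    # img: H x W x 3 in [0, 1]; gx, gy: target scalar gradient fields.
    h, w, _ = img.shape
    q = np.floor(img * (bins - 1e-6)).astype(int)
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]  # LUT index per pixel
    eqs, rhs = [], []
    for i in range(h):                    # x-gradient constraints
        for j in range(w - 1):
            eqs.append((idx[i, j + 1], idx[i, j])); rhs.append(gx[i, j])
    for i in range(h - 1):                # y-gradient constraints
        for j in range(w):
            eqs.append((idx[i + 1, j], idx[i, j])); rhs.append(gy[i, j])
    A = lil_matrix((len(eqs), bins ** 3))
    for r, (a, b) in enumerate(eqs):
        A[r, a] += 1.0
        A[r, b] -= 1.0
    lut = lsqr(A.tocsr(), np.asarray(rhs))[0]   # minimum-norm least squares
    return lut[idx]                             # reconstructed scalar image
```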


Electronic Imaging | 2009

Improved colour to greyscale via integrability correction

Mark S. Drew; David Connah; Graham D. Finlayson; Marina Bloj

The classical approach to converting colour to greyscale is to code the luminance signal as a grey value image. However, the problem with this approach is that the detail at equiluminant edges vanishes, and in the worst case the greyscale reproduction of an equiluminant image is a single uniform grey value. The solution to this problem, adopted by all algorithms in the field, is to try to code colour difference (or contrast) in the greyscale image. In this paper we reconsider the Socolinsky and Wolff algorithm for colour to greyscale conversion. This algorithm, which is the most mathematically elegant, often scores well in preference experiments but can introduce artefacts which spoil the appearance of the final image. These artefacts are intrinsic to the method and stem from the underlying approach which computes a greyscale image by a) calculating approximate luminance-type derivatives for the colour image and b) re-integrating these to obtain a greyscale image. Unfortunately, the sign of the derivative vector is sometimes unknown on an equiluminant edge and, in the current theory, is set arbitrarily. However, choosing the wrong sign can lead to unnatural contrast gradients (not apparent in the colour original). Our contribution is to show how this sign problem can be ameliorated using a generalised definition of luminance and a Markov relaxation.
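
The derivative step being corrected can be sketched directly: the leading eigenpair of the colour structure tensor gives the gradient magnitude and orientation, and a luminance projection supplies the sign, which is exactly where the ambiguity arises at equiluminant edges. The weights and names below are illustrative assumptions; the paper replaces this naive sign choice with a generalised luminance and Markov relaxation.

```python
import numpy as np

def sw_equivalent_gradient(rgb, lum_w=(0.2126, 0.7152, 0.0722)):
    # rgb: H x W x 3 float image.
    gx = np.gradient(rgb, axis=1)
    gy = np.gradient(rgb, axis=0)
    # entries of the 2x2 colour structure tensor at each pixel
    a = (gx * gx).sum(-1); b = (gx * gy).sum(-1); c = (gy * gy).sum(-1)
    tr, det = a + c, a * c - b * b
    lam = 0.5 * (tr + np.sqrt(np.maximum(tr * tr - 4 * det, 0.0)))
    vx, vy = lam - c, b                  # leading eigenvector (unnormalised)
    nrm = np.hypot(vx, vy) + 1e-12
    vx, vy = vx / nrm, vy / nrm
    mag = np.sqrt(lam)
    # naive sign from the luminance gradient: ambiguous (zero) on
    # equiluminant edges, which is the failure mode the paper targets
    lx, ly = gx @ np.asarray(lum_w), gy @ np.asarray(lum_w)
    sign = np.sign(lx * vx + ly * vy)
    return sign * mag * vx, sign * mag * vy
```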


Optics Express | 2016

Improved method for skin reflectance reconstruction from camera images

Kaida Xiao; Yuteng Zhu; David Connah; Julian M. Yates; Sophie M. Wuerger

An improved spectral reflectance reconstruction method is developed to transform camera RGB to spectral reflectance for skin images. Rather than using conventional direct or two-step processes, we transform camera RGB to skin reflectance directly using a principal component analysis (PCA) approach. The novelty in our direct method (RGB to spectra) is the use of a skin-specific colour characterisation chart, with spectra closer to human skin spectra, and a new database of skin reflectances to derive the PCA bases. The experimental results using the facial images of 17 subjects demonstrate that our new direct method gives significantly better performance than conventional two-step methods and than direct methods with traditional characterisation charts. This new spectral reconstruction algorithm is sufficiently precise to reconstruct spectral properties relating to chromophores, and its performance is within the acceptable range for maxillofacial soft tissue prostheses (error < 3 ΔE*ab units).
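
A minimal sketch of the direct route, assuming a plain linear regression from RGB (plus a bias) to PCA weights; the skin-specific chart and reflectance database, which carry the paper's actual contribution, are simply inputs here:

```python
import numpy as np

def fit_skin_recovery(rgb_train, refl_train, n_pc=3):
    # Derive a PCA basis from the skin reflectance database, then
    # regress camera RGB onto the basis weights (direct RGB-to-spectra).
    mean = refl_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(refl_train - mean, full_matrices=False)
    B = Vt[:n_pc]                                   # PCA basis vectors
    W = (refl_train - mean) @ B.T                   # training weights
    X = np.hstack([rgb_train, np.ones((len(rgb_train), 1))])
    M, *_ = np.linalg.lstsq(X, W, rcond=None)

    def predict(rgb):
        x = np.append(np.asarray(rgb, float), 1.0)
        return mean + (x @ M) @ B
    return predict
```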


Journal of Electronic Imaging | 2016

Automatic age and gender classification using supervised appearance model

Ali Maina Bukar; Hassan Ugail; David Connah

Age and gender classification are two important problems that recently gained popularity in the research community, due to their wide range of applications. Research has shown that both age and gender information are encoded in the face shape and texture, hence the active appearance model (AAM), a statistical model that captures shape and texture variations, has been one of the most widely used feature extraction techniques for the aforementioned problems. However, AAM suffers from some drawbacks, especially when used for classification. This is primarily because principal component analysis (PCA), which is at the core of the model, works in an unsupervised manner, i.e., PCA dimensionality reduction does not take into account how the predictor variables relate to the response (class labels). Rather, it explores only the underlying structure of the predictor variables, so it is no surprise if PCA discards valuable parts of the data that represent discriminatory features. To this end, we propose a supervised appearance model (sAM) that improves on AAM by replacing PCA with partial least-squares regression. This feature extraction technique is then used for the problems of age and gender classification. Our experiments show that sAM has better predictive power than the conventional AAM.
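
The PCA-to-PLS swap at the heart of sAM is easy to illustrate with scikit-learn; the feature matrix below is a random stand-in for concatenated AAM shape and texture parameters:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
features = rng.random((200, 500))              # stand-in appearance vectors
ages = rng.integers(18, 70, size=200)          # stand-in labels

# Partial least squares picks components that covary with the labels,
# unlike PCA, which ignores them entirely.
pls = PLSRegression(n_components=10)
pls.fit(features, ages)
latent = pls.transform(features)               # supervised low-D features
```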


International Conference on Image Processing | 2015

Individualised model of facial age synthesis based on constrained regression

Ali Maina Bukar; Hassan Ugail; David Connah

Faces convey much information, and humans have a remarkable ability to identify, extract, and interpret it. Recently, automatic facial ageing (AFA) has gained popularity due to its numerous applications, which include the search for missing people, biometrics, and multimedia. The problem of AFA faces various challenges, including incomplete training datasets, unconstrained environments, and ethnic and gender variations, to mention but a few. This work presents a new approach to automatic facial ageing that develops a person-specific facial ageing system. A colour-based Active Appearance Model (AAM) is used to extract facial features; regression is then used to model an age estimator. Age synthesis is achieved by using constrained regression to compute a solution that minimises the distance from the original face. The model is tested on a challenging database with a single image per person. Initial results suggest that plausible images can be re-rendered at different ages automatically using the AAM representation, and the constrained regressor guarantees that the estimated age of a synthesised face matches the target age exactly.
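
Under the simplest reading, a linear age estimator age(x) = w·x + b, the constrained step has a closed form: project the original appearance vector onto the hyperplane where the estimator returns the target age. A sketch, with the linearity assumption clearly labelled; the paper's estimator and constraints may be richer:

```python
import numpy as np

def constrained_synthesis(x0, w, b, target_age):
    # Euclidean projection of the AAM vector x0 onto the constraint
    # w @ x + b == target_age: the closest face to the original whose
    # estimated age is exactly the target (linear-estimator assumption).
    w = np.asarray(w, float)
    step = (target_age - b - w @ x0) / (w @ w)
    return x0 + step * w
```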


Human Vision and Electronic Imaging Conference | 2005

Can gamut mapping quality be predicted by colour image difference formulae?

Eriko Bando; Jon Yngve Hardeberg; David Connah

We carried out a CRT-monitor-based psychophysical experiment to investigate the quality of three colour image difference metrics: the CIE ΔE*ab equation, iCAM, and S-CIELAB. Six original images were reproduced through six gamut mapping algorithms for the observer experiment. The results indicate that the colour image difference calculated by each metric does not directly relate to perceived image difference.
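
Of the three metrics, the CIE ΔE*ab equation is the simplest to state; a minimal sketch of an image-level score (per-pixel Euclidean distance in CIELAB, averaged):

```python
import numpy as np

def mean_delta_e_ab(lab_ref, lab_test):
    # lab_ref, lab_test: H x W x 3 CIELAB images. iCAM and S-CIELAB
    # additionally model spatial and appearance effects before the
    # colour-difference step.
    return np.linalg.norm(lab_ref - lab_test, axis=-1).mean()
```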

Collaboration


Dive into David Connah's collaboration.

Top Co-Authors

Marina Bloj
University of Bradford

Mark S. Drew
Simon Fraser University