Publication


Featured research published by Timothy D. Dixon.


ACM Transactions on Applied Perception | 2006

Methods for the assessment of fused images

Timothy D. Dixon; Eduardo Fernández Canga; Jan Noyes; Tom Troscianko; Stavri G. Nikolov; David R. Bull; Cedric Nishan Canagarajah

The prevalence of image fusion (the combination of images of different modalities, such as visible and infrared radiation) has increased the demand for accurate methods of image-quality assessment. The current study used a signal-detection paradigm, identifying the presence or absence of a target in briefly presented images followed by an energy mask, which was compared with computational metric and subjective quality assessment results. In Study 1, 18 participants were presented with fused infrared-visible light images, with a soldier either present or not. Two independent variables, image-fusion method (averaging, contrast pyramid, dual-tree complex wavelet transform) and JPEG compression (no compression, low and high compression), were used in a repeated-measures design. Participants were presented with images and asked to state whether or not they detected the target. In addition, subjective ratings and metric results were obtained. This process was repeated in Study 2, using JPEG2000 compression. The results showed a significant effect for fusion but not compression in JPEG2000 images, while JPEG images showed significant effects for both fusion and compression. Subjective ratings differed, especially for JPEG2000 images, while metric results for both JPEG and JPEG2000 showed similar trends. These results indicate that objective and subjective ratings can differ significantly, and subjective ratings should, therefore, be used with care.
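
As background to the signal-detection paradigm used here, sensitivity in a target present/absent task is conventionally summarised with d′, computed from hit and false-alarm rates. A minimal sketch follows; the log-linear correction and the example counts are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a signal-detection sensitivity (d') calculation,
# as commonly used in target present/absent paradigms like this one.
# The log-linear correction is an assumption, not a detail from the paper.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one participant in one fusion/compression condition.
print(d_prime(hits=40, misses=8, false_alarms=5, correct_rejections=43))
```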


International Conference on Information Fusion | 2006

Scanpath Analysis of Fused Multi-Sensor Images with Luminance Change: A Pilot Study

Timothy D. Dixon; Jian Li; Jan Noyes; Tom Troscianko; Stavri G. Nikolov; John J. Lewis; Eduardo Fernández Canga; David R. Bull; Cedric Nishan Canagarajah

Image fusion is the process of combining images of differing modalities, such as visible and infrared (IR) images. Significant work has recently been carried out comparing methods of fused image assessment, with findings strongly suggesting that a task-centred approach would be beneficial to the assessment process. The current paper reports a pilot study analysing eye movements of participants involved in four tasks. The first and second tasks involved tracking a human figure wearing camouflage clothing walking through thick undergrowth at light and dark luminance levels, whilst the third and fourth tasks required tracking an individual in a crowd, again at two luminance levels. Participants were shown the original visible and IR images individually, as well as pixel-averaged, contrast pyramid, and dual-tree complex wavelet fused video sequences. They viewed each display and sequence three times to allow comparison of inter-subject scanpath variability. This paper describes the initial analysis of the eye-tracking data gathered from the pilot study. These data were also compared with computational metric assessment of the image sequences.
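
Inter-subject scanpath variability of the kind analysed here can be quantified in several ways; one simple, illustrative measure (not necessarily the paper's method) is the mean point-to-point distance between frame-aligned gaze sequences:

```python
# Illustrative sketch: mean point-to-point distance between two gaze
# sequences sampled at the same frame times. This is one simple measure
# of scanpath similarity; the paper's actual analysis may differ.
import numpy as np

def scanpath_distance(gaze_a, gaze_b):
    """gaze_a, gaze_b: (n_frames, 2) arrays of (x, y) fixation positions."""
    a, b = np.asarray(gaze_a, float), np.asarray(gaze_b, float)
    n = min(len(a), len(b))  # truncate to the shorter recording
    return np.linalg.norm(a[:n] - b[:n], axis=1).mean()

# Lower values -> the two participants looked at similar places over time.
```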


ACM Transactions on Applied Perception | 2010

Quantifying fidelity for virtual environment simulations employing memory schema assumptions

Nicholaos Mourkoussis; Fiona M. Rivera; Tom Troscianko; Timothy D. Dixon; Rycharde Jeffery Hawkes; Katerina Mania

In a virtual environment (VE), efficient techniques are often needed to economize on rendering computation without compromising the information transmitted. The reported experiments devise a functional fidelity metric by exploiting research on memory schemata. According to the proposed measure, similar information would be transmitted across synthetic and real-world scenes depicting a specific schema. This would ultimately indicate which areas in a VE could be rendered in lower quality without affecting information uptake. We examine whether computationally more expensive scenes of greater visual fidelity affect memory performance after exposure to immersive VEs, or whether they are merely more aesthetically pleasing than their diminished visual quality counterparts. Results indicate that memory schemata function in VEs similarly to real-world environments. “High-level” visual cognition related to late visual processing is unaffected by ubiquitous graphics manipulations such as polygon count and depth of shadow rendering; “normal” cognition operates as long as the scenes look acceptably realistic. However, when the overall realism of the scene is greatly reduced, such as in wireframe, then visual cognition becomes abnormal. Effects that distinguish schema-consistent from schema-inconsistent objects change because the whole scene now looks incongruent. We have shown that this effect is not due to a failure of basic recognition.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2007

Selection of image fusion quality measures: objective, subjective, and metric assessment

Timothy D. Dixon; Eduardo Fernández Canga; Stavri G. Nikolov; Tom Troscianko; Jan Noyes; Cedric Nishan Canagarajah; David R. Bull

Accurate quality assessment of fused images, such as combined visible and infrared radiation images, has become increasingly important with the rise in the use of image fusion systems. We bring together three approaches, applying two objective tasks (local target analysis and global target location) to two scenarios, together with subjective quality ratings and three computational metrics. Contrast pyramid, shift-invariant discrete wavelet transform, and dual-tree complex wavelet transform fusion are applied, as well as levels of JPEG2000 compression. The differing tasks are shown to be more or less appropriate for differentiating among fusion methods, and future directions pertaining to the creation of task-specific metrics are explored.
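
The abstract does not name the three computational metrics; as an illustration of the genre, a widely used mutual-information metric in the style of Qu et al. scores a fused image by the information it shares with each input. A rough sketch, with the histogram bin count as an assumed parameter:

```python
# Sketch of a mutual-information (MI) fusion metric in the style of
# Qu et al.: Q = MI(fused, input_a) + MI(fused, input_b).
# The 64-bin histogram is an assumed parameter, not from the paper.
import numpy as np

def mutual_information(x, y, bins=64):
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0  # avoid log(0)
    return (pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum()

def fusion_mi_score(fused, input_a, input_b):
    # Higher scores: the fused image preserves more of both inputs.
    return mutual_information(fused, input_a) + mutual_information(fused, input_b)
```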


International Conference on Information Fusion | 2005

Characterisation of image fusion quality metrics for surveillance applications over bandlimited channels

Eduardo Fernández Canga; Stavri G. Nikolov; Cedric Nishan Canagarajah; David R. Bull; Timothy D. Dixon; Jan Noyes; Tom Troscianko

Image fusion is finding increasing application in areas such as medical imaging, remote sensing, and military surveillance using sensor networks. Many of these applications demand highly compressed data combined with error-resilient coding due to the characteristics of the communication channel. In this respect, JPEG2000 has many advantages over previous image coding standards. This paper evaluates and compares quality metrics for lossy compression using JPEG2000. Three representative image fusion algorithms have been considered: simple averaging, contrast pyramid, and dual-tree complex wavelet transform based fusion. Numerous infrared and visible test images have been used. We compare these results with a psychophysical study in which participants were asked to perform specific tasks and assess image fusion quality. The results show that there is a correlation between most of the metrics and the psychophysical evaluation. They also indicate that selection of the correct fusion method has more impact on performance than the presence of compression.
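
The correlation between metric scores and psychophysical results can be quantified with a rank correlation, since the two are on different scales; a minimal sketch (Spearman's rho is an assumed choice, and the data below are hypothetical):

```python
# Minimal sketch: rank correlation between a quality metric's scores and
# psychophysical results across test images. Spearman's rho is an assumed
# choice; the paper does not state which correlation was used.
from scipy.stats import spearmanr

metric_scores = [0.71, 0.64, 0.82, 0.55, 0.78]   # hypothetical metric values
task_accuracy = [0.90, 0.85, 0.95, 0.70, 0.92]   # hypothetical task results

rho, p_value = spearmanr(metric_scores, task_accuracy)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```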


Information Fusion | 2010

Task-based scanpath assessment of multi-sensor video fusion in complex scenarios

Timothy D. Dixon; Stavri G. Nikolov; John J. Lewis; Jian Li; Eduardo Fernández Canga; Jan Noyes; Tom Troscianko; David R. Bull; Cedric Nishan Canagarajah

The combining of visible light and infrared visual representations occurs naturally in some creatures, including the rattlesnake. This process, and the widespread use of multi-spectral multi-sensor systems, has influenced research into image fusion methods. Recent advances in image fusion techniques have necessitated the creation of novel ways of assessing fused images, which have previously focused on the use of subjective quality ratings combined with computational metric assessment. Previous work has shown the need to apply a task to the assessment process; the current work continues this approach by extending the novel use of scanpath analysis. In our experiments, participants were shown two video sequences, one in high luminance (HL) and one in low luminance (LL), both featuring a group of people walking around a clearing of trees. Each participant was shown the visible and infrared (IR) inputs alone; side-by-side (SBS); and in average (AVE), discrete wavelet transform (DWT), and dual-tree complex wavelet transform (DT-CWT) fused displays. Participants were asked to track one individual in each video sequence and to respond by key press when other individuals carried out secondary actions. Results showed the SBS display to lead to much poorer accuracy than the other displays, while reaction times in carrying out the secondary task favoured AVE in the HL sequence and DWT in the LL sequence. Results are discussed in relation to previous findings regarding item saliency and task demands, and the potential for comparative experiments evaluating human performance when viewing fused sequences against naturally occurring fusion processes such as that of the rattlesnake is highlighted.
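
Tracking accuracy in a task like this one can be operationalised as the proportion of gaze samples falling within some radius of the target's known position; a hedged sketch, where the 50-pixel radius is an assumption rather than the paper's criterion:

```python
# Sketch: per-sequence tracking accuracy as the fraction of gaze samples
# within a fixed radius of the target's ground-truth position per frame.
# The 50-pixel radius is an illustrative assumption.
import numpy as np

def tracking_accuracy(gaze_xy, target_xy, radius=50.0):
    """gaze_xy, target_xy: (n_frames, 2) arrays of pixel coordinates."""
    dists = np.linalg.norm(np.asarray(gaze_xy, float) - np.asarray(target_xy, float), axis=1)
    return (dists <= radius).mean()
```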


Spatial Vision | 2007

Assessment of fused videos using scanpaths: a comparison of data analysis methods.

Timothy D. Dixon; Stavri G. Nikolov; John J. Lewis; Jian Li; Eduardo Fernández Canga; Jan Noyes; Tom Troscianko; David R. Bull; Cedric Nishan Canagarajah

The increased interest in image fusion (combining images of two or more modalities, such as infrared and visible light radiation) has led to a need for accurate and reliable image assessment methods. Previous work has often relied upon subjective quality ratings combined with some form of computational metric analysis. However, we have shown in previous work that such methods do not correlate well with how people perform in actual tasks utilising fused images. The current study presents the novel use of an eye-tracking paradigm to record how accurately participants could track an individual in various fused video displays. Participants were asked to track a man in a camouflage outfit in various input videos (visible and infrared originals, a fused average of the inputs, and two different wavelet-based fused videos) whilst also carrying out a secondary button-press task. The results were analysed in two ways: first by calculating accuracy across the whole video, and then by dividing the video into three time sections based on video content. Although the pattern of results depends on the analysis, the accuracy for the inputs was generally found to be significantly worse than that for the fused displays. In conclusion, both approaches have good potential as new fused video assessment methods, depending on what task is carried out.
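
The two analyses described (whole-video accuracy and accuracy within three time sections) can both be derived from a per-frame on-target record; a brief sketch, with hypothetical section boundaries standing in for the paper's content-based ones:

```python
# Sketch: accuracy computed over the whole video and within three time
# sections. The paper's sections were based on video content; the equal
# thirds used here are a hypothetical stand-in.
import numpy as np

def sectioned_accuracy(on_target, boundaries=(0.33, 0.66)):
    """on_target: boolean array, one entry per frame (gaze within target)."""
    on_target = np.asarray(on_target, bool)
    n = len(on_target)
    i, j = int(boundaries[0] * n), int(boundaries[1] * n)
    return {
        "overall": on_target.mean(),
        "section 1": on_target[:i].mean(),
        "section 2": on_target[i:j].mean(),
        "section 3": on_target[j:].mean(),
    }
```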


International Conference on Information Fusion | 2007

Scanpath assessment of visible and infrared side-by-side and fused video displays

Timothy D. Dixon; Jian Li; Jan Noyes; Tom Troscianko; Stavri G. Nikolov; John J. Lewis; Eduardo Fernández Canga; David R. Bull; Cedric Nishan Canagarajah

Advances in fusion of multi-sensor inputs have necessitated the creation of more sophisticated fused image assessment techniques. The current work extends previous studies investigating participant accuracy in tracking individuals in a video sequence. Participants were shown visible and IR videos individually and the two video inputs side-by-side, as well as averaged, discrete wavelet transform, and dual-tree complex wavelet transform fused videos. Two scenarios were shown to participants: one featured a camouflaged man walking down a pathway through foliage and across a clearing; the other featured several individuals moving around the clearing. The side-by-side scanpath data were analysed by studying how often participants looked at the visible and infrared sides and how accurately participants tracked the given target; these results were then compared with previously analysed data. The results of this study are discussed in the context of wider applications to image assessment, and the potential for modelling human scanpath performance.
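
How often participants looked at each side of the side-by-side display can be estimated by classifying each gaze sample by screen half; a minimal sketch assuming, purely for illustration, the visible input on the left and the IR input on the right:

```python
# Sketch: proportion of gaze time on each half of a side-by-side display.
# Assumes the visible video occupies the left half and IR the right half,
# which is an assumed layout, not one stated in the abstract.
import numpy as np

def side_dwell_proportions(gaze_x, screen_width):
    gaze_x = np.asarray(gaze_x, float)
    on_left = (gaze_x < screen_width / 2).mean()
    return {"visible (left)": on_left, "infrared (right)": 1.0 - on_left}

print(side_dwell_proportions([210, 300, 900, 1100, 450], screen_width=1280))
```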


Journal of the Society for Information Display | 2006

Quality assessment of false-colored fused displays

Timothy D. Dixon; Eduardo Fernández Canga; Stavri G. Nikolov; Tom Troscianko; Jan Noyes; David R. Bull; Cedric Nishan Canagarajah

The problem of assessing the quality of fused images (composites created from inputs of differing modalities, such as infrared and visible light radiation) is an important and growing area of research. Recent work has shown that the process of assessing fused images should not rely entirely on subjective quality methods, with objective tasks and computational metrics having important contributions to the assessment procedure. The current paper extends previous findings, applying a psychophysical selection task, metric evaluation, and subjective quality judgments to a range of fused surveillance images. Fusion schemes included the contrast pyramid and shift-invariant discrete wavelet transform (Experiment 1), the complex wavelet transform (Experiments 1 and 2), and two false-coloring methods (Experiment 2). In addition, JPEG2000 compression was applied at two levels, as well as an uncompressed control. Reaction time results showed the contrast pyramid to lead to the slowest performance in the objective task, whilst the presence of color greatly reduced reaction times. These results differed from both the subjective and metric results. The findings support the view that subjective quality ratings should be used with caution, especially if not accompanied by some task.
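
As an illustration of what a false-coloring scheme can look like (the channel mapping below is an assumption, not one of the paper's two methods), the IR and visible inputs can be mapped to different channels of an RGB composite:

```python
# Illustrative false-color fusion: put the IR image in the red channel,
# the visible image in the green channel, and their mean in blue. This
# channel mapping is an assumption, not one of the paper's two methods.
import numpy as np

def false_color_fuse(visible, infrared):
    """visible, infrared: 2-D float arrays scaled to [0, 1], same shape."""
    v = np.clip(np.asarray(visible, float), 0.0, 1.0)
    ir = np.clip(np.asarray(infrared, float), 0.0, 1.0)
    return np.dstack([ir, v, (ir + v) / 2.0])  # (H, W, 3) RGB composite
```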


Information Fusion | 2010

Towards cognitive image fusion

Alexander Toet; Maarten A. Hogervorst; Stavri G. Nikolov; John J. Lewis; Timothy D. Dixon; David R. Bull; Cedric Nishan Canagarajah

Collaboration


Dive into Timothy D. Dixon's collaboration.

Top Co-Authors

Jan Noyes, University of Bristol
David R. Bull, University of Bristol
Jian Li, University of Bristol