Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where John J. Lewis is active.

Publication


Featured research published by John J. Lewis.


Information Fusion | 2007

Pixel- and region-based image fusion with complex wavelets

John J. Lewis; Robert J. O'Callaghan; Stavri G. Nikolov; David R. Bull; Nishan Canagarajah

A number of pixel-based image fusion algorithms (using averaging, contrast pyramids, the discrete wavelet transform and the dual-tree complex wavelet transform (DT-CWT) to perform fusion) are reviewed and compared with a novel region-based image fusion method which facilitates increased flexibility in the definition of a variety of fusion rules. A DT-CWT is used to segment the features of the input images, either jointly or separately, to produce a region map. Characteristics of each region are calculated and a region-based approach is used to fuse the images, region by region, in the wavelet domain. This method gives results comparable to the pixel-based fusion methods, as shown using a number of metrics. Despite an increase in complexity, region-based methods have a number of advantages over pixel-based methods, including the ability to use more intelligent, semantic fusion rules, and to attenuate or accentuate regions with certain properties.
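As a rough illustration of the pixel-based baseline reviewed above, the sketch below fuses two registered grayscale images by averaging the coarse approximation coefficients and keeping the larger-magnitude detail coefficient at each position. It uses a plain DWT via PyWavelets as a stand-in for the paper's DT-CWT and omits the region map entirely; the function name and parameter values are illustrative, not the paper's implementation.

```python
import numpy as np
import pywt

def fuse_dwt_maxabs(img_a, img_b, wavelet="db2", levels=3):
    """Fuse two registered grayscale images in the wavelet domain:
    average the coarse approximation, keep the larger-magnitude
    detail coefficient at each position."""
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]              # approximation: average
    for da, db in zip(ca[1:], cb[1:]):           # per-level detail triples
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```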


International Conference on Information Fusion | 2006

Scanpath Analysis of Fused Multi-Sensor Images with Luminance Change: A Pilot Study

Timothy D. Dixon; Jian Li; Jan Noyes; Tom Troscianko; Stavri G. Nikolov; John J. Lewis; Eduardo Fernández Canga; David R. Bull; Cedric Nishan Canagarajah

Image fusion is the process of combining images of differing modalities, such as visible and infrared (IR) images. Significant work has recently been carried out comparing methods of fused image assessment, with findings strongly suggesting that a task-centred approach would be beneficial to the assessment process. The current paper reports a pilot study analysing the eye movements of participants involved in four tasks. The first and second tasks involved tracking a human figure wearing camouflage clothing walking through thick undergrowth at light and dark luminance levels, whilst the third and fourth tasks required tracking an individual in a crowd, again at two luminance levels. Participants were shown the original visible and IR images individually, as well as pixel-averaged, contrast pyramid, and dual-tree complex wavelet fused video sequences. They viewed each display and sequence three times to allow comparison of inter-subject scanpath variability. This paper describes the initial analysis of the eye-tracking data gathered from the pilot study. These data were also compared with computational metric assessment of the image sequences.
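One minimal way to quantify the inter-subject scanpath variability mentioned above (an assumption for illustration, not necessarily the study's actual method) is the mean pointwise distance between two gaze trajectories sampled at the same timestamps:

```python
import numpy as np

def scanpath_distance(gaze_a, gaze_b):
    """Mean pointwise Euclidean distance between two gaze trajectories,
    each an (n, 2) array of (x, y) samples taken at the same timestamps."""
    a, b = np.asarray(gaze_a, float), np.asarray(gaze_b, float)
    n = min(len(a), len(b))                  # trim to the common length
    return float(np.linalg.norm(a[:n] - b[:n], axis=1).mean())
```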


Information Fusion | 2010

Task-based scanpath assessment of multi-sensor video fusion in complex scenarios

Timothy D. Dixon; Stavri G. Nikolov; John J. Lewis; Jian Li; Eduardo Fernández Canga; Jan Noyes; Tom Troscianko; D.R. Bull; C. Nishan Canagarajah

The combining of visible light and infrared visual representations occurs naturally in some creatures, including the rattlesnake. This process, and the widespread use of multi-spectral multi-sensor systems, has influenced research into image fusion methods. Recent advances in image fusion techniques have necessitated the creation of novel ways of assessing fused images, which have previously focused on the use of subjective quality ratings combined with computational metric assessment. Previous work has shown the need to apply a task to the assessment process; the current work continues this approach by extending the novel use of scanpath analysis. In our experiments, participants were shown two video sequences, one in high luminance (HL) and one in low luminance (LL), both featuring a group of people walking around a clearing of trees. Each participant was shown the visible and infrared (IR) inputs alone and side-by-side (SBS), as well as in average (AVE), discrete wavelet transform (DWT), and dual-tree complex wavelet transform (DT-CWT) fused displays. Participants were asked to track one individual in each video sequence, and to respond by key press when other individuals carried out secondary actions. Results showed the SBS display to lead to much poorer accuracy than the other displays, while reaction times in carrying out the secondary task favoured AVE in the HL sequence and DWT in the LL sequence. Results are discussed in relation to previous findings regarding item saliency and task demands, and the potential for comparative experiments evaluating human performance when viewing fused sequences against naturally occurring fusion processes such as that of the rattlesnake is highlighted.
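Tracking accuracy in this kind of task is commonly scored as the proportion of gaze samples falling within a tolerance radius of the annotated target position. A minimal sketch, assuming synchronised per-frame gaze and target coordinates; the radius value is a hypothetical parameter, not one taken from the paper:

```python
import numpy as np

def tracking_accuracy(gaze_xy, target_xy, radius=50.0):
    """Proportion of frames in which the gaze sample falls within
    `radius` pixels of the annotated target position."""
    d = np.linalg.norm(np.asarray(gaze_xy, float) -
                       np.asarray(target_xy, float), axis=1)
    return float((d <= radius).mean())
```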


Spatial Vision | 2007

Assessment of fused videos using scanpaths: a comparison of data analysis methods.

Timothy D. Dixon; Stavri G. Nikolov; John J. Lewis; Jian Li; Eduardo Fernández Canga; Jan Noyes; Tom Troscianko; D.R. Bull; Cedric Nishan Canagarajah

The increased interest in image fusion (combining images of two or more modalities, such as infrared and visible light radiation) has led to a need for accurate and reliable image assessment methods. Previous work has often relied upon subjective quality ratings combined with some form of computational metric analysis. However, we have shown in previous work that such methods do not correlate well with how people perform in actual tasks utilising fused images. The current study presents the novel use of an eye-tracking paradigm to record how accurately participants could track an individual in various fused video displays. Participants were asked to track a man in a camouflage outfit in various input videos (the visible and infrared originals, a fused average of the inputs, and two different wavelet-based fused videos) whilst also carrying out a secondary button-press task. The results were analysed in two ways: first by calculating accuracy across the whole video, and second by dividing the video into three time sections based on video content. Although the pattern of results depends on the analysis, accuracy for the inputs was generally found to be significantly worse than that for the fused displays. In conclusion, both approaches have good potential as new fused video assessment methods, depending on what task is carried out.
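The two analyses described above (whole-video versus per-section accuracy) can be sketched by splitting a per-frame hit vector at content-based section boundaries; the boundary frame indices and radius below are hypothetical values for illustration:

```python
import numpy as np

def sectioned_accuracy(gaze_xy, target_xy, boundaries, radius=50.0):
    """Per-section tracking accuracy: split the per-frame hit vector at
    content-based frame indices, e.g. boundaries=[900, 1800]."""
    d = np.linalg.norm(np.asarray(gaze_xy, float) -
                       np.asarray(target_xy, float), axis=1)
    hits = d <= radius
    return [float(sec.mean()) for sec in np.split(hits, boundaries)]
```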


International Conference on Information Fusion | 2007

Scanpath assessment of visible and infrared side-by-side and fused video displays

Timothy D. Dixon; Jian Li; Jan Noyes; Tom Troscianko; Stavri G. Nikolov; John J. Lewis; Eduardo Fernández Canga; David R. Bull; Cedric Nishan Canagarajah

Advances in the fusion of multi-sensor inputs have necessitated the creation of more sophisticated fused image assessment techniques. The current work extends previous studies investigating participant accuracy in tracking individuals in a video sequence. Participants were shown visible and IR videos individually and the two video inputs side-by-side, as well as averaged, discrete wavelet transform, and dual-tree complex wavelet transform fused videos. Two scenarios were shown to participants: one featured a camouflaged man walking down a pathway through foliage and across a clearing; the other featured several individuals moving around the clearing. The side-by-side scanpath data were analysed by studying how often participants looked at the visible and infrared sides and how accurately participants tracked the given target, and the results were compared with previously analysed data. The results of this study are discussed in the context of wider applications to image assessment and the potential for modelling human scanpath performance.
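For the side-by-side condition, the proportion of time spent on each input can be estimated from the horizontal gaze coordinate alone. The sketch below assumes the visible input occupies the left half of the display and the IR input the right half, which is an assumption about the layout rather than a detail given in the abstract:

```python
import numpy as np

def side_preference(gaze_x, frame_width):
    """Share of gaze samples on each half of a side-by-side display,
    assuming the visible input is on the left and the IR on the right."""
    x = np.asarray(gaze_x, float)
    left = float((x < frame_width / 2.0).mean())
    return {"visible_left": left, "ir_right": 1.0 - left}
```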


Information Fusion | 2004

Region-based image fusion using complex wavelets

John J. Lewis; R. J. O’Callaghan; Stavri G. Nikolov; D.R. Bull; Cedric Nishan Canagarajah


Information Fusion | 2010

Towards cognitive image fusion

Alexander Toet; Maarten A. Hogervorst; Stavri G. Nikolov; John J. Lewis; Timothy D. Dixon; David R. Bull; Cedric Nishan Canagarajah


Archive | 2006

The Eden Project multi-sensor data set

John J. Lewis; Stavri G. Nikolov; Artur Loza; Eduardo Fernández Canga; Nedeljko Cvejic; Jonathan Qiang Li; Alessandro Cardinali; Cedric Nishan Canagarajah; David R. Bull; Tom A. D. Riley; David Hickman; Michael H. Smith


International Conference on Information Fusion | 2006

Uni-Modal Versus Joint Segmentation for Region-Based Image Fusion

John J. Lewis; Stavri G. Nikolov; Cedric Nishan Canagarajah; David R. Bull; Alexander Toet


GI Jahrestagung (1) | 2006

The influence of multi-sensor video fusion on object tracking using a particle filter

Lyudmila Mihaylova; Artur Loza; Stavri G. Nikolov; John J. Lewis; Eduardo Fernández Canga; Jian Li; Timothy D. Dixon; Cedric Nishan Canagarajah; David R. Bull

Collaboration


Dive into John J. Lewis's collaborations.

Top Co-Authors

Jian Li, University of Bristol
Jan Noyes, University of Bristol
D.R. Bull, University of Bristol