
Publications


Featured research published by Stavri G. Nikolov.


Information Fusion | 2007

Guest editorial: Image fusion: Advances in the state of the art

A. Ardeshir Goshtasby; Stavri G. Nikolov

Image fusion is the process of combining information from two or more images of a scene into a single composite image that is more informative and is more suitable for visual perception or computer processing. The objective in image fusion is to reduce uncertainty and minimize redundancy in the output while maximizing relevant information particular to an application or task. Given the same set of input images, different fused images may be created depending on the specific application and what is considered relevant information. There are several benefits in using image fusion: wider spatial and temporal coverage, decreased uncertainty, improved reliability, and increased robustness of system performance. Often a single sensor cannot produce a complete representation of a scene. Visible images provide spectral and spatial details, and if a target has the same color and spatial characteristics as its background, it cannot be distinguished from the background. If visible images are fused with thermal images, a target that is warmer or colder than its background can be easily identified, even when its color and spatial details are similar to those of its background. Fused images can provide information that sometimes cannot be observed in the individual input images. Successful image fusion significantly reduces the amount of data to be viewed or processed without significantly reducing the amount of relevant information.
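As a minimal illustration of the pixel-level end of this spectrum (not a method from the special issue itself), the sketch below fuses two co-registered grayscale images by weighted averaging; the image names and weights are hypothetical stand-ins.

```python
import numpy as np

def fuse_average(img_a: np.ndarray, img_b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Fuse two co-registered, equally sized grayscale images by weighted averaging.

    Averaging is the simplest pixel-level fusion rule: every output pixel is a
    convex combination of the corresponding input pixels.
    """
    if img_a.shape != img_b.shape:
        raise ValueError("input images must be co-registered and equally sized")
    return w * img_a.astype(np.float64) + (1.0 - w) * img_b.astype(np.float64)

# Synthetic stand-ins for a visible and a thermal image of the same scene.
visible = np.random.rand(64, 64)   # hypothetical visible-band image
thermal = np.random.rand(64, 64)   # hypothetical infrared image
fused = fuse_average(visible, thermal)
```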


Information Fusion | 2007

Pixel- and region-based image fusion with complex wavelets

John J. Lewis; Robert J. O'Callaghan; Stavri G. Nikolov; David R. Bull; Nishan Canagarajah

A number of pixel-based image fusion algorithms (using averaging, contrast pyramids, the discrete wavelet transform and the dual-tree complex wavelet transform (DT-CWT) to perform fusion) are reviewed and compared with a novel region-based image fusion method which offers increased flexibility in the definition of a variety of fusion rules. A DT-CWT is used to segment the features of the input images, either jointly or separately, to produce a region map. Characteristics of each region are calculated and a region-based approach is used to fuse the images, region-by-region, in the wavelet domain. This method gives results comparable to the pixel-based fusion methods, as shown using a number of metrics. Despite an increase in complexity, region-based methods have a number of advantages over pixel-based methods, including the ability to apply more intelligent, semantic fusion rules and to attenuate or accentuate regions with particular properties.
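The sketch below illustrates the pixel-based, wavelet-domain side of this comparison under some simplifying assumptions: it uses an ordinary discrete wavelet transform via PyWavelets as a stand-in for the paper's DT-CWT, with the common choose-max-magnitude rule for detail coefficients.

```python
import numpy as np
import pywt

def fuse_dwt_max(img_a, img_b, wavelet="db2", level=3):
    """Pixel-based fusion in the wavelet domain: decompose both images,
    keep the coefficient with the larger magnitude at every position,
    then invert the transform."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    fused = [0.5 * (ca[0] + cb[0])]          # average the coarse approximation
    for da, db in zip(ca[1:], cb[1:]):       # detail bands: (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

fused = fuse_dwt_max(np.random.rand(128, 128), np.random.rand(128, 128))
```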


Proceedings of the Royal Society of London B: Biological Sciences | 2007

Turning the other cheek: the viewpoint dependence of facial expression after-effects

Christopher P. Benton; Peter J. Etchells; Gillian Porter; Andrew P. Clark; Ian S. Penton-Voak; Stavri G. Nikolov

How do we visually encode facial expressions? Is this done by viewpoint-dependent mechanisms representing facial expressions as two-dimensional templates or do we build more complex viewpoint independent three-dimensional representations? Recent facial adaptation techniques offer a powerful way to address these questions. Prolonged viewing of a stimulus (adaptation) changes the perception of subsequently viewed stimuli (an after-effect). Adaptation to a particular attribute is believed to target those neural mechanisms encoding that attribute. We gathered images of facial expressions taken simultaneously from five different viewpoints evenly spread from the three-quarter leftward to the three-quarter rightward facing view. We measured the strength of expression after-effects as a function of the difference between adaptation and test viewpoints. Our data show that, although there is a decrease in after-effect over test viewpoint, there remains a substantial after-effect when adapt and test are at differing three-quarter views. We take these results to indicate that neural systems encoding facial expressions contain a mixture of viewpoint-dependent and viewpoint-independent elements. This accords with evidence from single cell recording studies in macaque and is consonant with a view in which viewpoint-independent expression encoding arises from a combination of view-dependent expression-sensitive responses.


Computer Vision and Pattern Recognition | 2007

The Effect of Pixel-Level Fusion on Object Tracking in Multi-Sensor Surveillance Video

Nedeljko Cvejic; Stavri G. Nikolov; Henry D. Knowles; Artur Loza; Alin Achim; David R. Bull; Cedric Nishan Canagarajah

This paper investigates the impact of pixel-level fusion of videos from visible (VIZ) and infrared (IR) surveillance cameras on object tracking performance, as compared to tracking in single-modality videos. Tracking is accomplished by means of a particle filter that fuses a colour cue and the structural similarity measure (SSIM). The highest tracking accuracy was obtained in the IR sequences, whereas the VIZ video showed the worst tracking performance due to higher levels of clutter. However, metrics for fusion assessment clearly point towards the superiority of the multiresolution methods, especially the dual-tree complex wavelet transform (DT-CWT) method. Thus, a new, tracking-oriented metric is needed that can accurately assess how fusion affects the performance of the tracker.
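As a rough sketch of the SSIM cue inside such a tracker (the paper's actual particle filter also fuses a colour cue, which would enter as a second multiplicative likelihood), the weight-update step might look like the following; all function and parameter names are illustrative.

```python
import numpy as np
from skimage.metrics import structural_similarity

def update_weights(frame, particles, template, sigma_ssim=0.1):
    """One particle-filter weight update using an SSIM observation cue.

    `particles` holds (x, y) top-left corners of candidate boxes; each weight
    is proportional to exp(-(1 - SSIM)^2 / (2 * sigma^2)) between the target
    template and the patch under the particle.  A colour-histogram likelihood
    (the paper's other cue) would multiply into each weight in the same way.
    """
    h, w = template.shape
    weights = np.zeros(len(particles))
    for i, (x, y) in enumerate(particles):
        patch = frame[y:y + h, x:x + w]
        if patch.shape != template.shape:
            continue  # particle fell off the frame; its weight stays zero
        s = structural_similarity(template, patch, data_range=1.0)
        weights[i] = np.exp(-((1.0 - s) ** 2) / (2.0 * sigma_ssim ** 2))
    total = weights.sum()
    return weights / total if total > 0 else np.full(len(particles), 1.0 / len(particles))
```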


ACM Transactions on Applied Perception | 2006

Methods for the assessment of fused images

Timothy D. Dixon; Eduardo Fernández Canga; Jan Noyes; Tom Troscianko; Stavri G. Nikolov; David R. Bull; Cedric Nishan Canagarajah

The prevalence of image fusion---the combination of images of different modalities, such as visible and infrared radiation---has increased the demand for accurate methods of image-quality assessment. The current study used a signal-detection paradigm, identifying the presence or absence of a target in briefly presented images followed by an energy mask, which was compared with computational metric and subjective quality assessment results. In Study 1, 18 participants were presented with fused infrared-visible light images, with a soldier either present or not. Two independent variables, image-fusion method (averaging, contrast pyramid, dual-tree complex wavelet transform) and JPEG compression (no compression, low and high compression), were used in a repeated-measures design. Participants were presented with images and asked to state whether or not they detected the target. In addition, subjective ratings and metric results were obtained. This process was repeated in Study 2, using JPEG2000 compression. The results showed a significant effect for fusion but not compression in JPEG2000 images, while JPEG images showed significant effects for both fusion and compression. Subjective ratings differed, especially for JPEG2000 images, while metric results for both JPEG and JPEG2000 showed similar trends. These results indicate that objective and subjective ratings can differ significantly, and subjective ratings should, therefore, be used with care.
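In a signal-detection analysis of this kind of yes/no detection task, the standard summary of sensitivity is d' = z(hit rate) - z(false-alarm rate); the snippet below computes it, though the paper's exact analysis is not specified in this abstract.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from a yes/no target-detection experiment.

    A log-linear correction (add 0.5 to each cell) keeps the z-transform
    finite when a rate would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g. 40 hits, 10 misses, 12 false alarms, 38 correct rejections
print(d_prime(40, 10, 12, 38))  # about 1.5: the observer discriminates the target well
```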


International Conference on Information Fusion | 2006

Scanpath Analysis of Fused Multi-Sensor Images with Luminance Change: A Pilot Study

Timothy D. Dixon; Jian Li; Jan Noyes; Tom Troscianko; Stavri G. Nikolov; John J. Lewis; Eduardo Fernández Canga; David R. Bull; Cedric Nishan Canagarajah

Image fusion is the process of combining images of differing modalities, such as visible and infrared (IR) images. Significant work has recently been carried out comparing methods of fused image assessment, with findings strongly suggesting that a task-centred approach would be beneficial to the assessment process. The current paper reports a pilot study analysing the eye movements of participants engaged in four tasks. The first and second tasks involved tracking a human figure wearing camouflage clothing walking through thick undergrowth at light and dark luminance levels, whilst the third and fourth tasks required tracking an individual in a crowd, again at two luminance levels. Participants were shown the original visible and IR sequences individually, as well as pixel-averaged, contrast pyramid, and dual-tree complex wavelet fused video sequences. They viewed each display and sequence three times to allow comparison of inter-subject scanpath variability. This paper describes the initial analysis of the eye-tracking data gathered from the pilot study; these data were also compared with computational metric assessments of the image sequences.
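The abstract does not say which scanpath measures were used; as one illustrative way to quantify inter-subject scanpath variability, the sketch below resamples fixation sequences to a common length and averages pairwise point-wise distances. Both functions are assumptions for illustration, not the paper's analysis.

```python
import numpy as np

def resample_scanpath(fixations, n=50):
    """Linearly resample an ordered (x, y) fixation sequence to n points so
    that scanpaths of different lengths can be compared point-by-point."""
    fixations = np.asarray(fixations, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(fixations))
    t_new = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(t_new, t_old, fixations[:, k]) for k in (0, 1)])

def mean_pairwise_distance(scanpaths):
    """Average point-wise Euclidean distance over all pairs of resampled
    scanpaths: a crude index of inter-subject scanpath variability."""
    paths = [resample_scanpath(p) for p in scanpaths]
    dists = [np.linalg.norm(paths[i] - paths[j], axis=1).mean()
             for i in range(len(paths)) for j in range(i + 1, len(paths))]
    return float(np.mean(dists))
```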


Eye Tracking Research & Applications | 2004

Gaze-contingent display using texture mapping and OpenGL: system and applications

Stavri G. Nikolov; Timothy D. Newman; D.R. Bull; Nishan Canagarajah; M. G. Jones; Iain D. Gilchrist

This paper describes a novel gaze-contingent display (GCD) using texture mapping and OpenGL. This new system has a number of key features: (a) it is platform independent, i.e. it runs on different computers and under different operating systems; (b) it is eye-tracker independent, since it provides an interactive focus+context display that can be easily integrated with any eye-tracker that provides real-time 2-D gaze estimation; (c) it is flexible in that it provides for straightforward modification of the main GCD parameters, including the size and shape of the window and its border; and (d) through the use of OpenGL extensions it can perform local real-time image analysis within the GCD window. The new GCD system implementation is described in detail and some performance figures are given. Several applications of this system are studied, including gaze-contingent multi-resolution displays, gaze-contingent multi-modality displays, and gaze-contingent image analysis.
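The paper's implementation uses OpenGL texture mapping; as a minimal stand-in for the underlying focus+context composite, the NumPy sketch below blends a full-resolution image into a blurred background through a Gaussian window centred on the gaze point. Window size and blur level are illustrative parameters, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gcd_composite(high_res, gaze_xy, window_sigma=40.0, blur_sigma=4.0):
    """Focus+context display expressed as a single alpha blend.

    A Gaussian mask centred on the gaze point selects the full-resolution
    image inside the window and a blurred (low-detail) version outside it.
    An OpenGL implementation would perform the same blend per fragment
    between two textures, updated in real time from the eye tracker.
    """
    context = gaussian_filter(high_res, blur_sigma)      # degraded background
    ys, xs = np.mgrid[0:high_res.shape[0], 0:high_res.shape[1]]
    gx, gy = gaze_xy
    mask = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2.0 * window_sigma ** 2))
    return mask * high_res + (1.0 - mask) * context

frame = np.random.rand(240, 320)          # hypothetical video frame
display = gcd_composite(frame, gaze_xy=(160, 120))
```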


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2007

Selection of image fusion quality measures: objective, subjective, and metric assessment

Timothy D. Dixon; Eduardo Fernández Canga; Stavri G. Nikolov; Tom Troscianko; Jan Noyes; C. Nishan Canagarajah; D.R. Bull

Accurate quality assessment of fused images, such as combined visible and infrared radiation images, has become increasingly important with the rise in the use of image fusion systems. We bring together three approaches, applying two objective tasks (local target analysis and global target location) to two scenarios, together with subjective quality ratings and three computational metrics. Contrast pyramid, shift-invariant discrete wavelet transform, and dual-tree complex wavelet transform fusion are applied, as well as levels of JPEG2000 compression. The differing tasks are shown to be more or less appropriate for differentiating among fusion methods, and future directions pertaining to the creation of task-specific metrics are explored.
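The three computational metrics are not named in this abstract; as an example of the genre, the sketch below implements a simple mutual-information-based fusion score of the kind often used in this literature (an assumption for illustration, not a metric taken from the paper).

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two images, estimated from their joint
    grey-level histogram: MI = sum p(a,b) * log(p(a,b) / (p(a) * p(b)))."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)    # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)    # marginal of image B
    nz = p_ab > 0
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())

def fusion_mi(img_a, img_b, fused):
    """A simple MI-based fusion score: how much information the fused image
    carries about each input, summed over the two inputs."""
    return mutual_information(img_a, fused) + mutual_information(img_b, fused)
```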


International Conference on Pattern Recognition | 2000

Fusion of 2-D images using their multiscale edges

Stavri G. Nikolov; David R. Bull; Cedric Nishan Canagarajah; Michael Halliwell; Peter N. T. Wells

A new framework for fusion of 2D images based on their multiscale edges is described in this paper. The new method uses the multiscale edge representation of images proposed by Mallat and Hwang (1992). The input images are fused using their multiscale edges only. Two different algorithms for fusing the point representations and the chain representations of the multiscale edges (wavelet transform modulus maxima) are given. The chain representation has been found to provide numerous new alternatives for image fusion, since edge graph fusion techniques can be employed to combine the images. The new framework encompasses different levels, i.e. pixel and feature levels, of image fusion in the wavelet domain.
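As a loose illustration of fusing on edge strength across scales (a crude stand-in for the paper's wavelet transform modulus maxima, which it represents as points and chains), the sketch below selects, per pixel, the input with the stronger Gaussian-derivative edge response at a majority of scales.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def multiscale_edge_select(img_a, img_b, scales=(1.0, 2.0, 4.0)):
    """Crude multiscale-edge fusion rule: at each scale, mark the pixels
    where image A has the stronger edge response; a pixel is assigned to A
    if it wins at the majority of scales.  Returns a fused image built by
    per-pixel selection rather than by fusing modulus-maxima chains."""
    votes = np.zeros(img_a.shape)
    for s in scales:
        ga = gaussian_gradient_magnitude(img_a, s)
        gb = gaussian_gradient_magnitude(img_b, s)
        votes += (ga >= gb)
    pick_a = votes > len(scales) / 2.0
    return np.where(pick_a, img_a, img_b)
```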


International Conference on Information Fusion | 2000

2-D image fusion by multiscale edge graph combination

Stavri G. Nikolov; D.R. Bull; Cedric Nishan Canagarajah; M. Halliwell; P.N.T. Wells

A new method for fusion of 2-D images based on combination of graphs formed by their multiscale edges is presented in this paper. The new method uses the multiscale edge representation of images proposed by Mallat and Hwang. The input images are fused using their multiscale edges only. Two different algorithms for fusing the point representations and the chain representations of the multiscale edges (wavelet transform modulus maxima) are described. The chain representation has been found to provide numerous options for image fusion as edge graph fusion techniques can be employed to combine the images. The new fusion scheme is compared to other fusion techniques when applied to out-of-focus images. Two other applications of the new method to remote sensing images and medical (CT and MR) images are also given in the paper.
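The chain representation suggests a graph-style fusion; as a toy sketch (not the paper's algorithm), the code below represents each edge chain as a list of (x, y, modulus) samples and merges two chain sets by keeping, where chains overlap spatially, the chain with the larger mean modulus. All types and thresholds are hypothetical.

```python
import numpy as np

# An edge chain: an ordered list of (x, y, modulus) samples along one
# multiscale edge (hypothetical representation, for illustration only).
Chain = list[tuple[int, int, float]]

def chains_overlap(a: Chain, b: Chain, tol: float = 2.0) -> bool:
    """Two chains overlap if any pair of their points is within tol pixels."""
    pa = np.array([(x, y) for x, y, _ in a], dtype=float)
    pb = np.array([(x, y) for x, y, _ in b], dtype=float)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return bool((d < tol).any())

def fuse_chain_sets(set_a: list[Chain], set_b: list[Chain]) -> list[Chain]:
    """Keep every chain from A; add a chain from B unless a spatially
    overlapping chain with a larger mean modulus already came from A."""
    fused = list(set_a)
    for cb in set_b:
        strength_b = np.mean([m for _, _, m in cb])
        dominated = any(chains_overlap(ca, cb) and
                        np.mean([m for _, _, m in ca]) >= strength_b
                        for ca in set_a)
        if not dominated:
            fused.append(cb)
    return fused
```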

Collaboration


Dive into Stavri G. Nikolov's collaboration.

Top Co-Authors

Jan Noyes

University of Bristol


D.R. Bull

University of Bristol


Jian Li

University of Bristol
