Publication


Featured research published by D.R. Bull.


International Symposium on Circuits and Systems | 2004

Multiple description video coding based on zero padding

D. Wang; Nishan Canagarajah; David W. Redmill; D.R. Bull

This paper proposes a simple multiple description video coding approach based on zero padding. It is based entirely on pre- and post-processing and requires no modifications to the source codec. Redundancy is added by padding zeros in the DCT domain, which interpolates the original frame and increases the correlation between pixels. Methods based on the 1D and 2D DCT are presented. We also investigate two sub-sampling methods, interleaved and quincunx, for generating the multiple descriptions. Results are presented for both zero padding approaches using H.264, showing that the 1D approach performs much better than the 2D padding techniques, at much lower computational complexity. For 1D zero padding, results show that interleaved sub-sampling outperforms quincunx.
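For illustration, here is a minimal NumPy/SciPy sketch of the 1D variant described above; the function names and the sqrt(2) scaling convention are assumptions, not the paper's. Each row is interpolated 2x by zero padding its DCT coefficients, and interleaved sub-sampling of columns then yields two full-size descriptions.

import numpy as np
from scipy.fft import dct, idct

def interpolate_rows(frame):
    # 2x horizontal interpolation by zero padding in the 1-D DCT domain
    coeffs = dct(frame, type=2, norm='ortho', axis=1)
    padded = np.concatenate([coeffs, np.zeros_like(coeffs)], axis=1)
    # sqrt(2) compensates for the doubled length under the orthonormal DCT
    return idct(padded, type=2, norm='ortho', axis=1) * np.sqrt(2)

def split_interleaved(interp):
    # interleaved sub-sampling: alternate columns form the two descriptions
    return interp[:, 0::2], interp[:, 1::2]

frame = np.random.rand(16, 16)           # stand-in for a video frame
d0, d1 = split_interleaved(interpolate_rows(frame))
# d0 and d1 are full-size frames; each can be fed to an unmodified
# encoder such as H.264, so the codec itself needs no changes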


International Conference on Electronics, Circuits and Systems | 2003

Perceptually optimised sign language video coding

Dimitris Agrafiotis; Nishan Canagarajah; D.R. Bull

Mobile video telephony will enable deaf people to communicate in their own language, sign language. At low bit rates, coding of sign language video is challenging due to the high levels of motion and the need to maintain good image quality to aid understanding. This paper presents perceptually optimised coding of sign language video at low bit rates. The proposed optimisations are based on an eye-tracking study that we conducted with the aim of characterising the visual attention of sign language viewers. Analysis and results of this study are presented, along with two coding methods, one using MPEG-4 video objects and the other using foveation filtering. Results with foveation filtering are promising, offering a considerable decrease in bit rate in a manner compatible with the visual attention patterns of deaf people as recorded in the eye-tracking study.
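As a rough grayscale sketch of the idea behind foveation filtering (not the paper's implementation): blur strength grows with distance from the fixation point, here assumed to be the signer's face. The parameter values and the piecewise-linear blur profile are arbitrary; a real filter would match the falloff of human contrast sensitivity.

import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(frame, fixation, fovea_radius=40.0, max_sigma=6.0):
    # eccentricity of each pixel from the fixation point
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(ys - fixation[0], xs - fixation[1])
    # sharp inside the fovea, blur ramps up linearly outside it
    sigma_map = np.clip((ecc - fovea_radius) / fovea_radius, 0, 1) * max_sigma
    # blend a small stack of pre-blurred copies rather than applying
    # a per-pixel variable kernel
    sigmas = np.linspace(0, max_sigma, 5)
    stack = np.stack([gaussian_filter(frame.astype(float), s) for s in sigmas])
    idx = np.clip(np.searchsorted(sigmas, sigma_map), 0, len(sigmas) - 1)
    return stack[idx, ys, xs]

# usage: foveate(luma_frame, fixation=(row, col) of the signer's face)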


International Symposium on Circuits and Systems | 2006

Mode refinement algorithm for H.264 intra frame requantization

Damien Lefol; D.R. Bull; Nishan Canagarajah

The latest video coding standard, H.264, has recently been approved and has already been adopted for numerous applications, including HD-DVD and satellite broadcast. To allow interconnectivity between different applications using H.264, transcoding will be a key factor. When requantizing a bitstream, the incoming coding decisions are usually kept unchanged to reduce complexity, but this can have a major impact on coding efficiency. This paper proposes a novel algorithm for mode refinement of inter prediction in the requantization of H.264 bitstreams. The proposed approach achieves quality comparable to a full search at a fraction of its complexity by exploiting the statistical properties of the mode distribution together with motion vector refinement.
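The core idea, restricting the search to the incoming mode plus its statistically likely alternatives rather than re-evaluating every mode, could be sketched as follows. The mode names, candidate table, and rd_cost hook are placeholders, not the paper's measured statistics.

# Hypothetical sketch of candidate-restricted mode refinement in a
# requantizing transcoder. The candidate table is a stub; the paper
# derives the likely alternatives from measured mode statistics.
CANDIDATES = {
    'P16x16': ['P16x8', 'P8x16', 'SKIP'],
    'P8x8':   ['P16x8', 'P8x16'],
    # ...one entry per incoming mode
}

def refine_mode(block, incoming_mode, rd_cost):
    # test only the incoming mode plus its statistically likely
    # alternatives, instead of an exhaustive search over all modes
    candidates = [incoming_mode] + CANDIDATES.get(incoming_mode, [])
    return min(candidates, key=lambda mode: rd_cost(block, mode))

# usage: best = refine_mode(block, 'P16x16', rd_cost=my_cost_fn)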


International Conference on Image Processing | 2005

Error concealment for slice group based multiple description video coding

D. Wang; Nishan Canagarajah; Dimitris Agrafiotis; D.R. Bull

This paper develops error concealment methods for multiple description video coding (MDC) to adapt to error-prone packet networks. The three-loop slice group MDC approach of D. Wang et al. (2005) is used. MDC is well suited to multiple-channel environments and, in particular, can maintain acceptable quality without any drift problem when some of these channels fail completely, i.e. in an on-off MDC environment. Our MDC scheme, coupled with the proposed concealment approaches, proves suitable not only for the on-off case (data from one channel fully lost) but also for the case where only some packets are lost from one or both channels. Copying video and reusing motion vectors from correctly received descriptions are combined for concealment before traditional methods are applied. Results are compared to the traditional error concealment method in the H.264 reference software, showing significant improvements for both the balanced and unbalanced channel cases.
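A minimal sketch of the copy step only, using boolean masks to mark which pixels of each description arrived intact; this is an assumption for illustration, since the actual scheme operates on slice groups, and the motion-vector reuse and reference-software fallback are only indicated in comments.

import numpy as np

def conceal_from_other_description(lost, lost_mask, other, other_mask):
    # first concealment step: fill pixels lost in one description with
    # the co-located pixels of the other description, where those were
    # received correctly
    out = lost.copy()
    usable = lost_mask & other_mask        # lost here, received there
    out[usable] = other[usable]
    still_missing = lost_mask & ~other_mask
    # pixels in still_missing fall back to motion-vector reuse from the
    # other description, then to standard spatial/temporal concealment
    # (as in the H.264 reference software); both elided here
    return out, still_missing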


Eye Tracking Research & Applications | 2004

Gaze-contingent display using texture mapping and OpenGL: system and applications

Stavri G. Nikolov; Timothy D. Newman; D.R. Bull; Nishan Canagarajah; M.G. Jones; Iain D. Gilchrist

This paper describes a novel gaze-contingent display (GCD) using texture mapping and OpenGL. This new system has a number of key features: (a) it is platform independent, i.e. it runs on different computers and under different operating systems; (b) it is eye-tracker independent, since it provides an interactive focus+context display that can be easily integrated with any eye-tracker that provides real-time 2-D gaze estimation; (c) it is flexible in that it provides for straightforward modification of the main GCD parameters, including size and shape of the window and its border; and (d) through the use of OpenGL extensions it can perform local real-time image analysis within the GCD window. The new GCD system implementation is described in detail and some performance figures are given. Several applications of this system are studied, including gaze-contingent multi-resolution displays, gaze-contingent multi-modality displays, and gaze-contingent image analysis.
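A CPU-side NumPy sketch of the windowing logic alone (a hard-edged circular window; the names and radius are illustrative): the actual system renders this per frame on the GPU via OpenGL texture mapping and supports configurable window shapes and blended borders.

import numpy as np

def gcd_frame(hi_res, lo_res, gaze, radius=64):
    # focus+context composite: the high-resolution image inside a
    # circular window centred on the current gaze estimate, the
    # low-resolution context everywhere else
    h, w = hi_res.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    inside = (ys - gaze[0]) ** 2 + (xs - gaze[1]) ** 2 <= radius ** 2
    out = lo_res.copy()
    out[inside] = hi_res[inside]
    return out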


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2007

Selection of image fusion quality measures: objective, subjective, and metric assessment

Timothy D. Dixon; Eduardo Fernández Canga; Stavri G. Nikolov; Tom Troscianko; Jan Noyes; C. Nishan Canagarajah; D.R. Bull

Accurate quality assessment of fused images, such as combined visible and infrared radiation images, has become increasingly important with the rise in the use of image fusion systems. We bring together three assessment approaches: two objective tasks (local target analysis and global target location) applied to two scenarios, subjective quality ratings, and three computational metrics. Contrast pyramid, shift-invariant discrete wavelet transform, and dual-tree complex wavelet transform fusion are applied, as well as several levels of JPEG2000 compression. The tasks are shown to differ in how well they discriminate among fusion methods, and future directions for the creation of task-specific metrics are explored.
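Computational metrics of this kind score a fused image against its inputs without a ground-truth reference. As one concrete and widely used example, not necessarily one of the three metrics used in the paper, a mutual-information fusion measure can be computed as follows.

import numpy as np

def mutual_info(x, y, bins=64):
    # histogram-based mutual information between two images (in bits)
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

def fusion_mi(fused, a, b):
    # how much information the fused image retains from each input
    return mutual_info(fused, a) + mutual_info(fused, b)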


International Conference on Information Fusion | 2000

2-D image fusion by multiscale edge graph combination

Stavri G. Nikolov; D.R. Bull; Cedric Nishan Canagarajah; M. Halliwell; P.N.T. Wells

A new method for fusing 2-D images, based on combining graphs formed from their multiscale edges, is presented in this paper. The method uses the multiscale edge representation of images proposed by Mallat and Hwang: the input images are fused using their multiscale edges only. Two algorithms are described for fusing the point representations and the chain representations of the multiscale edges (wavelet transform modulus maxima). The chain representation offers numerous options for image fusion, since edge graph fusion techniques can be employed to combine the images. The new fusion scheme is compared with other fusion techniques applied to out-of-focus images. Two further applications of the method, to remote sensing images and to medical (CT and MR) images, are also given.
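A heavily simplified stand-in for the point-representation fusion: Gaussian gradient magnitudes substitute for the Mallat-Hwang wavelet transform modulus maxima, and per-pixel winner-take-all selection substitutes for proper maxima matching. The paper's chain (graph) fusion is considerably more elaborate.

import numpy as np
from scipy import ndimage

def fuse_by_edge_strength(a, b, sigmas=(1, 2, 4)):
    # at each pixel, keep the input whose multiscale edge response
    # (gradient magnitude summed over scales) is stronger
    def edge_energy(img):
        img = img.astype(float)
        return sum(ndimage.gaussian_gradient_magnitude(img, s) for s in sigmas)
    mask = edge_energy(a) >= edge_energy(b)
    return np.where(mask, a, b)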


Information Fusion | 2010

Task-based scanpath assessment of multi-sensor video fusion in complex scenarios

Timothy D. Dixon; Stavri G. Nikolov; John J. Lewis; Jian Li; Eduardo Fernández Canga; Jan Noyes; Tom Troscianko; D.R. Bull; C. Nishan Canagarajah

The combining of visible light and infrared visual representations occurs naturally in some creatures, including the rattlesnake. This process, and the widespread use of multi-spectral multi-sensor systems, has influenced research into image fusion methods. Recent advances in image fusion techniques have necessitated novel ways of assessing fused images, which have previously focused on subjective quality ratings combined with computational metric assessment. Previous work has shown the need to apply a task to the assessment process; the current work continues this approach by extending the novel use of scanpath analysis. In our experiments, participants were shown two video sequences, one in high luminance (HL) and one in low luminance (LL), both featuring a group of people walking around a clearing of trees. Each participant viewed the visible and infrared (IR) inputs alone, the two inputs side-by-side (SBS), and three fused displays: an average (AVE) fusion, a discrete wavelet transform (DWT) fusion, and a dual-tree complex wavelet transform (DT-CWT) fusion. Participants were asked to track one individual in each video sequence and to respond by key press when other individuals carried out secondary actions. Results showed that the SBS display led to much poorer accuracy than the other displays, while reaction times on the secondary task favoured AVE in the HL sequence and DWT in the LL sequence. Results are discussed in relation to previous findings regarding item saliency and task demands, and the potential is highlighted for comparative experiments evaluating human performance when viewing fused sequences against naturally occurring fusion processes such as that of the rattlesnake.
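The tracking-accuracy scoring could look like the following sketch; the 50-pixel threshold and the per-frame hit criterion are assumptions for illustration, not the paper's exact measure.

import numpy as np

def tracking_accuracy(gaze_xy, target_xy, thresh_px=50.0):
    # fraction of frames in which the gaze sample falls within
    # thresh_px pixels of the tracked individual's true position;
    # gaze_xy and target_xy are N x 2 arrays of per-frame coordinates
    d = np.linalg.norm(np.asarray(gaze_xy) - np.asarray(target_xy), axis=1)
    return float((d <= thresh_px).mean())

# per-display comparison across conditions:
# accuracy = {cond: tracking_accuracy(gaze[cond], target) for cond in gaze}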


Spatial Vision | 2007

Assessment of fused videos using scanpaths: a comparison of data analysis methods.

Timothy D. Dixon; Stavri G. Nikolov; John J. Lewis; Jian Li; Eduardo Fernández Canga; Jan Noyes; Tom Troscianko; D.R. Bull; Cedric Nishan Canagarajah

The increased interest in image fusion (combining images of two or more modalities, such as infrared and visible light radiation) has led to a need for accurate and reliable image assessment methods. Previous work has often relied upon subjective quality ratings combined with some form of computational metric analysis. However, we have shown in previous work that such methods do not correlate well with how people perform in actual tasks that use fused images. The current study presents the novel use of an eye-tracking paradigm to record how accurately participants could track an individual in various fused video displays. Participants were asked to track a man in a camouflage outfit in various input videos (the visible and infrared originals, a fused average of the inputs, and two different wavelet-based fused videos) while also carrying out a secondary button-press task. The results were analysed in two ways: first by calculating accuracy across the whole video, and then by dividing the video into three time sections based on video content. Although the pattern of results depends on the analysis, accuracy for the inputs was generally significantly worse than for the fused displays. In conclusion, both approaches have good potential as new fused video assessment methods, depending on the task being carried out.
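The whole-video and sectioned analyses differ only in how the per-frame samples are grouped; a sketch of the sectioned version, under the same assumed hit criterion as the sketch above, with the boundary frame indices supplied by the analyst.

import numpy as np

def sectioned_accuracy(gaze_xy, target_xy, boundaries, thresh_px=50.0):
    # accuracy computed separately for time sections delimited by
    # content-based frame indices in `boundaries`
    d = np.linalg.norm(np.asarray(gaze_xy) - np.asarray(target_xy), axis=1)
    hits = d <= thresh_px
    edges = [0] + list(boundaries) + [len(hits)]
    return [float(hits[s:e].mean()) for s, e in zip(edges[:-1], edges[1:])]

# three sections, e.g.: sectioned_accuracy(gaze, target, boundaries=(300, 700))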


International Conference on Consumer Electronics | 2012

Gaze location prediction in broadcast football video

Qin Cheng; Dimitris Agrafiotis; Alin Achim; D.R. Bull

In this work we investigate gaze location prediction in the context of broadcast football video. We use Bayesian integration to combine bottom-up features (color, intensity, orientation, motion, novelty) with a top-down feature (the ball location). A success rate of 70% is obtained on CIF-resolution sequences.
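One common way such Bayesian integration is realised is as a normalised product of feature maps treated as independent likelihoods; a sketch under that assumption (the paper's exact weighting of the terms may differ).

import numpy as np

def combine_maps(bottom_up_maps, ball_map, eps=1e-6):
    # naive-Bayes style combination: multiply the normalised feature
    # maps, with the ball-location map acting as the top-down term
    post = np.ones_like(ball_map, dtype=float)
    for m in list(bottom_up_maps) + [ball_map]:
        m = m.astype(float)
        m = (m - m.min()) / (m.max() - m.min() + eps)  # normalise to [0, 1]
        post *= m + eps                                # keep the product nonzero
    return post / post.sum()

def predicted_gaze(posterior):
    # predicted gaze location = mode of the posterior map
    return np.unravel_index(np.argmax(posterior), posterior.shape)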

Collaboration

Top co-authors of D.R. Bull include Jan Noyes (University of Bristol).