
Publication


Featured research published by Dimitris Agrafiotis.


International Conference on Electronics, Circuits and Systems | 2003

Perceptually optimised sign language video coding

Dimitris Agrafiotis; Nishan Canagarajah; D.R. Bull

Mobile video telephony will enable deaf people to communicate in their own language, sign language. At low bit rates, coding of sign language video is challenging due to the high levels of motion and the need to maintain good image quality to aid understanding. This paper presents perceptually optimised coding of sign language video at low bit rates. The proposed optimisations are based on an eye-tracking study that we conducted with the aim of characterising the visual attention of sign language viewers. Analysis and results of this study, together with two coding methods, one using MPEG-4 video objects and the second using foveation filtering, are presented. Results with foveation filtering are promising, offering a considerable decrease in bit rate in a manner compatible with the visual attention patterns of deaf people, as recorded in the eye-tracking study.
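The foveation filtering idea described above can be illustrated with a minimal sketch: blur strength grows with distance from the viewer's fixation point, discarding high-frequency detail (and hence bits) in regions the eye-tracking study showed viewers do not attend to. This is a hypothetical toy implementation, not the authors' coder; the `radius_per_pixel` parameter and box-blur kernel are assumptions for illustration.

```python
def foveate(image, fix_row, fix_col, radius_per_pixel=0.1):
    """Apply a box blur whose radius grows with distance from fixation.

    image: 2-D list of pixel intensities.
    (fix_row, fix_col): fixation point, e.g. the signer's face.
    """
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Eccentricity: Euclidean distance from the fixation point.
            dist = ((r - fix_row) ** 2 + (c - fix_col) ** 2) ** 0.5
            k = int(dist * radius_per_pixel)  # blur radius ~ eccentricity
            total, count = 0.0, 0
            for rr in range(max(0, r - k), min(rows, r + k + 1)):
                for cc in range(max(0, c - k), min(cols, c + k + 1)):
                    total += image[rr][cc]
                    count += 1
            out[r][c] = total / count
    return out
```

At the fixation point the radius is zero, so the attended region passes through unchanged, while the periphery is smoothed and therefore compresses to fewer bits.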


3rd International Symposium on Image and Signal Processing and Analysis (ISPA 2003) | 2003

ROI coding of volumetric medical images with application to visualisation

Dimitris Agrafiotis; David R. Bull; Nishan Canagarajah

This paper presents region of interest (ROI) coding of volumetric medical images with the region itself being three dimensional. An extension to 3D-SPIHT which allows 3D ROI coding is proposed. ROI coding enables faster reconstruction of diagnostically useful regions in volumetric datasets by assigning higher priority to them in the bitstream. It also introduces the possibility for increased compression performance, by allowing certain parts of the volume to be coded in a lossy manner while others are coded losslessly. Results presented highlight the benefits of the ROI extension. Additionally, a visualisation specific ROI coding case is examined. Results show the advantages of ROI coding in terms of the quality of the visualised decoded volume.
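The bitstream-prioritisation idea behind ROI coding can be sketched in a few lines. This is not 3D-SPIHT itself, only a hypothetical illustration of the ordering principle: coded blocks inside the region of interest are emitted first, so a decoder that stops reading the stream early still reconstructs the diagnostically useful region at full quality.

```python
def prioritise(blocks, roi):
    """Order coded blocks so ROI blocks precede background blocks.

    blocks: dict mapping block index -> coded payload (e.g. bytes)
    roi:    set of block indices inside the 3-D region of interest
    Returns a list of (index, payload) pairs, ROI first.
    """
    roi_first = [i for i in sorted(blocks) if i in roi]
    background = [i for i in sorted(blocks) if i not in roi]
    return [(i, blocks[i]) for i in roi_first + background]
```

Truncating the returned stream after the ROI entries also gives the lossy-background / lossless-ROI trade-off mentioned in the abstract.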


International Conference on Image Processing | 2006

Multiple Priority Region of Interest Coding with H.264

Dimitris Agrafiotis; David R. Bull; Nishan Canagarajah; Nawat Kamnoonwatana

This paper describes a modified rate control algorithm for H.264 that can accommodate multiple priority levels given a region of interest (ROI). The modified method allows better control of the quality of the ROI and gradual variation of the quality in the rest of the video frame through a bit redistribution process that is based on a number of parameters, including characteristics of the ROI, user input and perceptual factors.
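A minimal sketch of the bit-redistribution idea, assuming a priority that decays with distance from the ROI (the `base` and `falloff` parameters are invented for illustration; the paper's actual rate control operates within the H.264 encoder):

```python
def region_priority(dist_to_roi, base=4.0, falloff=0.5):
    """Priority decays smoothly with a region's distance from the ROI,
    giving the gradual quality variation described in the abstract."""
    return max(1.0, base - falloff * dist_to_roi)


def allocate_bits(frame_budget, priorities):
    """Redistribute a frame's bit budget across regions in proportion
    to their priority levels. priorities: region name -> priority."""
    total = sum(priorities.values())
    return {region: frame_budget * p / total
            for region, p in priorities.items()}
```

The budget is conserved exactly, so improving ROI quality necessarily comes at the cost of the lowest-priority background regions.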


Digital Television Conference | 2007

A Novel H.264/AVC Based Multi-View Video Coding Scheme

Akbar Sheikh Akbari; Nishan Canagarajah; David W. Redmill; Dimitris Agrafiotis

This paper investigates extensions of H.264/AVC for compressing multi-view video sequences. The proposed technique re-sorts frames of sequences captured by multiple cameras viewing a person in a scene from different angles and generates a single video sequence. The multi-frame referencing property of H.264/AVC, which enables exploitation of the spatial and temporal redundancy contained in the multi-view sequences, is employed to implement several modes of operation in the proposed coding algorithm. To evaluate the performance of the proposed coding technique in its different modes of operation, five multi-view video sequences at different frame rates were coded using the proposed scheme and the simulcast H.264/AVC coding scheme. Experiments show the superior performance of the proposed coding scheme when coding the multi-view sequences at low frame rates, up to half of the original.
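The frame re-sorting step can be sketched as a simple interleave: frame t from every camera is placed consecutively in one sequence, so H.264/AVC's multi-frame referencing can find both temporal matches (same camera, previous instant) and inter-view matches (neighbouring camera, same instant). This is a hypothetical illustration of the ordering only, not the authors' exact scan.

```python
def interleave_views(views):
    """Merge per-camera frame lists into one sequence.

    views: list of per-camera frame lists, all the same length.
    Returns frames ordered as: all cameras at t=0, all cameras at t=1, ...
    """
    n_frames = len(views[0])
    return [frame
            for t in range(n_frames)          # time instants
            for frame in (v[t] for v in views)]  # all views at instant t
```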


ACM Transactions on Multimedia Computing, Communications, and Applications | 2007

Towards efficient context-specific video coding based on gaze-tracking analysis

Dimitris Agrafiotis; Sam J. C. Davies; Nishan Canagarajah; David R. Bull

This article discusses a framework for model-based, context-dependent video coding based on exploitation of characteristics of the human visual system. The system utilizes variable-quality coding based on priority maps which are created using mostly context-dependent rules. The technique is demonstrated through two case studies of specific video context, namely open signed content and football sequences. Eye-tracking analysis is employed for identifying the characteristics of each context, which are subsequently exploited for coding purposes, either directly or through a gaze prediction model. The framework is shown to achieve a considerable improvement in coding efficiency.


International Conference on Image Processing | 2008

A gaze prediction technique for open signed video content using a track before detect algorithm

Sam J. C. Davies; Dimitris Agrafiotis; Cedric Nishan Canagarajah; David R. Bull

This paper proposes a gaze prediction model for open signed video content. A face detection algorithm is used to locate faces across each frame in both profile and frontal orientations. A grid-based likelihood ratio track-before-detect routine is used to predict the orientation of the signer's head, which allows the gaze location to be localised to either the signer or the inset. The face detections are then used to narrow down the gaze prediction further. The gaze predictor is able to predict the results of an eye-tracking study with up to 95% accuracy, and an average accuracy of over 80%.


IEEE Transactions on Multimedia | 2009

A Multicue Bayesian State Estimator for Gaze Prediction in Open Signed Video

Sam J. C. Davies; Dimitris Agrafiotis; C. Nishan Canagarajah; David R. Bull

We propose a multicue gaze prediction framework for open signed video content, the benefits of which include coding gains without loss of perceived quality. We investigate which cues are relevant for gaze prediction and find that shot changes, facial orientation of the signer and face locations are the most useful. We then design a face orientation tracker based upon grid-based likelihood ratio trackers, using profile and frontal face detections. These cues are combined using a grid-based Bayesian state estimation algorithm to form a probability surface for each frame. We find that this gaze predictor outperforms a static gaze prediction and one based on face locations within the frame.
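The grid-based Bayesian state estimation underlying this framework can be sketched as a discrete predict/update cycle: the belief over grid cells is diffused (prediction), then multiplied by each cue's likelihood surface and renormalised (update). The code below is a generic 1-D toy of that recursion, with an invented diffusion `spread` parameter; the paper's estimator operates on a 2-D frame grid with face-detection and orientation cues.

```python
def bayes_update(prior, likelihood):
    """One grid-based Bayesian update: weight the prior belief by a cue's
    likelihood per cell, then renormalise to a probability surface."""
    posterior = [p * l for p, l in zip(prior, likelihood)]
    z = sum(posterior)
    return [p / z for p in posterior]


def predict(belief, spread=0.1):
    """Prediction step: diffuse probability to neighbouring grid cells to
    model gaze motion between frames, then renormalise."""
    n = len(belief)
    out = []
    for i in range(n):
        left = belief[i - 1] if i > 0 else belief[i]
        right = belief[i + 1] if i < n - 1 else belief[i]
        out.append((1 - 2 * spread) * belief[i] + spread * (left + right))
    z = sum(out)
    return [p / z for p in out]
```

Several cues are combined simply by applying `bayes_update` once per cue likelihood before the next prediction step.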


International Conference on Image Processing | 2004

A video coding system for sign language communication at low bit rates

Dimitris Agrafiotis; Nishan Canagarajah; David R. Bull; Jim Kyle; Helen E. Seers; Matthew W.G. Dye

The ability to communicate remotely through the use of video, as promised by wireless networks and already practised over fixed networks, is for deaf people as important as voice telephony is for hearing people. Sign languages are visual-spatial languages and as such demand good image quality for interaction and understanding. In this paper, based on analysis of viewers' perceptual behaviour and the video content involved, we propose a sign language video coding system using foveated processing, which can lead to bit-rate savings without compromising comprehension of the coded sequence. We support this claim with the results of an initial comprehension assessment trial of such coded sequences by deaf users.


International Conference on Image Processing | 2001

Virtual liver biopsy: image processing and 3D visualization

Dimitris Agrafiotis; M.G. Jones; Stavri G. Nikolov; M. Halliwell; D.R. Bull; Nishan Canagarajah

This paper presents results on the image processing and visualization aspects of a virtual liver biopsy system (a system for simulating the medical procedure of liver biopsy). The creation of 3D models from 2D images of the organs involved is described, and the segmentation requirements of this process are discussed. Endoscopic images of the liver that simulate the needle's point of view are created by means of combined volume and surface rendering. For this purpose, ray casting is used with the ray start and end points constrained within a surface-rendered environment. Visualization of the needle insertion process from an exterior point of view is presented. A real-time sectional imaging component is also used, in which the displayed 2D image section of the 3D volume tracks the tip of the needle.


International Conference on Consumer Electronics | 2006

Optimized temporal error concealment through evaluation of multiple concealment features

Dimitris Agrafiotis; D.R. Bull; Nishan Canagarajah

In this paper we formulate an optimized temporal error concealment approach for H.264 based on a study of the performance of several temporal concealment features that apply to the different steps of the proposed method. Specifically, we study how concealment performance is affected by matching error measures, motion vector candidates and estimation enhancements. We show that the resulting method performs significantly better than other state-of-the-art methods. The proposed approach can prove very valuable for mitigating errors typically encountered in video transmission over wireless networks.
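One common family of matching error measures for temporal concealment is boundary matching: each candidate replacement block (fetched, say, with a neighbouring macroblock's motion vector) is scored by how well its edge pixels agree with the correctly received pixels surrounding the lost block. The sketch below illustrates that selection step only, using a top-row SAD as a stand-in measure; the paper evaluates several such measures and candidate sets.

```python
def boundary_match_error(block, top_neighbour_row):
    """Sum of absolute differences between a candidate block's top row
    and the received row of pixels just above the lost block."""
    return sum(abs(a - b) for a, b in zip(block[0], top_neighbour_row))


def conceal(candidates, top_neighbour_row):
    """Pick the candidate block whose boundary best matches the
    surviving neighbouring pixels (lowest matching error)."""
    return min(candidates,
               key=lambda blk: boundary_match_error(blk, top_neighbour_row))
```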

Collaboration


Dive into Dimitris Agrafiotis's collaborations.

Top Co-Authors

D.R. Bull

University of Bristol


Jim Kyle

University of Bristol
