Sophie Triantaphillidou
University of Westminster
Publications
Featured research published by Sophie Triantaphillidou.
Electronic Imaging | 2008
Maria Orfanidou; Sophie Triantaphillidou; Elizabeth Allen
The paper focuses on the implementation of a modular color image difference model, as described in [1], with the aim of predicting the visual magnitude of differences between pairs of uncompressed images and images compressed using lossy JPEG and JPEG 2000. The work involved programming each pre-processing step, processing each image file and deriving the error map, which was further reduced to a single metric. Three contrast sensitivity function implementations were tested; a Laplacian filter was implemented for spatial localization, and the contrast-masking-based local contrast enhancement method suggested by Moroney was used for local contrast detection. The error map was derived using the CIEDE2000 color difference formula on a pixel-by-pixel basis, and a final single value was obtained by calculating the median of the error map. This metric was then tested against relative quality differences between original and compressed images, derived from psychophysical investigations on the same dataset. The outcomes revealed a grouping of images, attributed to correlations between the busyness of the test scenes (defined as an image property indicating the presence or absence of high frequencies) and the different clustered results. In conclusion, a method for accounting for the amount of detail in the test scenes is required for a more accurate prediction of image quality.
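As a rough illustration of the final reduction step described above (not the authors' implementation, and omitting the CSF filtering and other spatial pre-processing stages), the sketch below computes a per-pixel CIEDE2000 error map between an original and a compressed image and reduces it to a single value by taking the median. It assumes scikit-image is available; the file names are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): per-pixel CIEDE2000 error map
# between an original and a compressed image, reduced to one value by
# taking the median. CSF filtering and spatial pre-processing are omitted.
import numpy as np
from skimage import io
from skimage.color import rgb2lab, deltaE_ciede2000

def ciede2000_median(original_path, compressed_path):
    """Return the median CIEDE2000 difference between two same-sized images."""
    lab_ref = rgb2lab(io.imread(original_path)[..., :3])
    lab_cmp = rgb2lab(io.imread(compressed_path)[..., :3])
    error_map = deltaE_ciede2000(lab_ref, lab_cmp)   # one value per pixel
    return float(np.median(error_map))

# Example (hypothetical file names):
# print(ciede2000_median("scene01.png", "scene01_jpeg2000.png"))
```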
Proceedings of SPIE | 2013
Anastasia Tsifouti; Sophie Triantaphillidou; Efthimia Bilissi; Mohamed-Chaker Larabi
The objective of this investigation is to produce recommendations for acceptable bit-rates of CCTV footage of people onboard London buses. The majority of CCTV recorders on buses use a proprietary format based on the H.264/AVC video coding standard, exploiting both spatial and temporal redundancy. Low bit-rates are favored in the CCTV industry but they compromise the usefulness of the recorded imagery. In this context, usefulness is defined by the presence of enough facial information remaining in the compressed image to allow a specialist to identify a person. The investigation includes four steps: 1) collection of representative video footage; 2) grouping of video scenes based on content attributes; 3) psychophysical investigations to identify key scenes, which are most affected by compression; 4) testing of recording systems using the key scenes and further psychophysical investigations. The results are highly dependent upon scene content. For example, very dark and very bright scenes were the most challenging to compress, requiring higher bit-rates to maintain useful information. The acceptable bit-rates were also found to depend upon the specific CCTV system used to compress the footage, presenting challenges in drawing conclusions about universal ‘average’ bit-rates.
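For illustration only, the following sketch re-encodes a test clip at several candidate H.264 bit-rates, the kind of variant series an observer study like the one above needs. It assumes ffmpeg with libx264 is installed, uses hypothetical file names and bit-rate values, and does not reproduce the proprietary on-bus recording formats.

```python
# Minimal sketch (assumptions: ffmpeg with libx264 on the PATH; file names
# and bit-rate values are hypothetical). Re-encodes a test clip at several
# candidate bit-rates so the resulting footage can be shown to observers.
import subprocess

CANDIDATE_BITRATES = ["128k", "256k", "512k", "1024k"]

def encode_variants(source_clip):
    for bitrate in CANDIDATE_BITRATES:
        output = f"{source_clip.rsplit('.', 1)[0]}_{bitrate}.mp4"
        subprocess.run(
            ["ffmpeg", "-y", "-i", source_clip,
             "-c:v", "libx264", "-b:v", bitrate, "-an", output],
            check=True,
        )

# encode_variants("bus_scene_dark.mp4")
```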
Electronic Imaging | 2008
R. A. Smith; K. MacLennan-Brown; J. F. Tighe; N. Cohen; Sophie Triantaphillidou; L. W. MacDonald
Colour information is not faithfully maintained by a CCTV imaging chain. Since colour can play an important role in identifying objects, it is beneficial to be able to account accurately for changes to colour introduced by components in the chain. With this information it will be possible for law enforcement agencies and others to work back along the imaging chain to extract accurate colour information from CCTV recordings. A typical CCTV system has an imaging chain that may consist of scene, camera, compression, recording media and display. The response of each of these stages to colour scene information was characterised by measuring its response to a known input. The main variables that affect colour within a scene are illumination and the colour, orientation and texture of objects. The effects of illumination on the apparent colour of a variety of test targets were tested using laboratory-based lighting, street lighting, car headlights and artificial daylight. A range of typical cameras used in CCTV applications, common compression schemes and representative displays were also characterised.
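One common way to characterise the colour response of a single stage in such a chain, given measurements of a known test target, is to fit a linear correction matrix by least squares. The sketch below is illustrative only and is not the authors' characterisation procedure; the array names are hypothetical.

```python
# Illustrative sketch only (not the authors' procedure): characterising one
# stage of the chain by fitting a 3x3 matrix that maps the RGB values it
# records for a known test target onto the target's reference values.
import numpy as np

def fit_colour_matrix(measured_rgb, reference_rgb):
    """Least-squares 3x3 matrix M such that measured_rgb @ M ~= reference_rgb.

    Both arguments are (N, 3) arrays, one row per test-target patch.
    """
    M, *_ = np.linalg.lstsq(measured_rgb, reference_rgb, rcond=None)
    return M

# Hypothetical example: a 24-patch target captured through a CCTV camera.
# M = fit_colour_matrix(camera_patches, reference_patches)
# corrected = camera_rgb.reshape(-1, 3) @ M
```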
Proceedings of SPIE | 2014
Sophie Triantaphillidou; John R. Jarvis; Gaurav Gupta
This paper describes continuing research concerned with the measurement and modeling of human spatial contrast sensitivity and discrimination functions, using complex pictorial stimuli. The relevance of such functions in image quality modeling is also reviewed. Previously [1,2] we presented the choice of suitable contrast metrics, apparatus and laboratory set-up, the stimuli acquisition and manipulation, the methodology employed in the subjective tests and initial findings. Here we present our experimental paradigm and the measurement and modeling of the following visual response functions: i) Isolated Contrast Sensitivity Function (iCSF); ii) Contextual Contrast Sensitivity Function (cCSF); iii) Isolated Visual Perception Function (iVPF); and iv) Contextual Visual Perception Function (cVPF). Results indicate that the measured cCSFs are lower in magnitude than the iCSFs and flatter in profile. Measured iVPFs, cVPFs and cCSFs are shown to have similar profiles. Barten’s contrast detection model [3] was shown to successfully predict the iCSF. For a given frequency band, the reduction, or masking, of cCSF sensitivity compared with iCSF sensitivity is predicted by the linear amplification model (LAM) [4]. We also show that our extension of Barten’s contrast discrimination model [1,5] is capable of describing iVPFs and cVPFs. We finally reflect on the possible implications of the measured and modeled profiles of the cCSF and cVPF for image quality modeling.
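For readers unfamiliar with Barten's model, the sketch below implements a widely quoted closed-form approximation of his contrast sensitivity function; the papers use the fuller formulation, so this is only indicative of the shape of an iCSF-type curve, and the default luminance and field-size values are arbitrary.

```python
# A widely quoted closed-form approximation of Barten's contrast sensitivity
# model, shown only to illustrate the shape of an iCSF-type curve (the papers
# themselves use Barten's fuller formulation).
import numpy as np

def barten_csf_approx(u, L=100.0, w=10.0):
    """Approximate contrast sensitivity at spatial frequency u (cycles/degree).

    L : mean luminance in cd/m^2, w : angular field size in degrees.
    """
    a = 540.0 * (1.0 + 0.7 / L) ** -0.2 / (1.0 + 12.0 / (w * (1.0 + u / 3.0) ** 2))
    b = 0.3 * (1.0 + 100.0 / L) ** 0.15
    c = 0.06
    return a * u * np.exp(-b * u) * np.sqrt(1.0 + c * np.exp(b * u))

# frequencies = np.logspace(-1, 1.7, 50)        # ~0.1 to 50 cycles/degree
# sensitivity = barten_csf_approx(frequencies)
```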
Proceedings of SPIE | 2014
Elizabeth Allen; Sophie Triantaphillidou; Ralph E. Jacobson
This investigation examines the relationships between image fidelity, acceptability thresholds and scene content for images distorted by lossy compression. Scene characteristics of a sample set of images, with a wide range of representative scene content, were quantified using simple measures (scene metrics) that had previously been found to correlate with global scene lightness, global contrast, busyness, and colorfulness. Images were compressed using the lossy JPEG 2000 algorithm to a range of compression ratios, progressively introducing distortion to levels beyond the threshold of detection. Twelve observers took part in a paired comparison experiment to evaluate the compression ratio at the perceptibility threshold. A further psychophysical experiment was conducted using the same scenes, compressed to higher compression ratios, to identify the level of compression at which the images became visually unacceptable. Perceptibility and acceptability thresholds were significantly correlated for the test image set; both thresholds also correlated with the busyness metric. Images were ranked for the two thresholds and were further grouped, based upon the relationships between perceptibility and acceptability. Scene content and the results from the scene descriptors were examined within the groups to determine the influence of specific common scene characteristics upon both thresholds.
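As an illustration of how such a compression series might be generated (not the authors' pipeline), the sketch below saves a scene at a range of target JPEG 2000 compression ratios using Pillow; it assumes Pillow is built with OpenJPEG support, and the ratio values and file names are placeholders.

```python
# A minimal sketch (assuming Pillow with OpenJPEG support; file names and
# ratios are hypothetical) of producing a JPEG 2000 series in which each
# scene is saved at a range of target compression ratios.
from PIL import Image

COMPRESSION_RATIOS = [10, 20, 40, 80, 160]   # illustrative values only

def make_jp2_series(source_path, stem):
    img = Image.open(source_path).convert("RGB")
    for ratio in COMPRESSION_RATIOS:
        img.save(f"{stem}_cr{ratio}.jp2",
                 quality_mode="rates",        # interpret layers as ratios
                 quality_layers=[ratio],
                 irreversible=True)           # lossy 9/7 wavelet

# make_jp2_series("scene01.tif", "scene01")
```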
Proceedings of SPIE | 2010
Kyung Hoon Oh; Sophie Triantaphillidou; Ralph E. Jacobson
Psychophysical image quality assessments have shown that subjective quality depends upon the pictorial content of the test images. This study is concerned with the nature of scene dependency, which causes problems in modeling and predicting image quality. The paper focuses on scene classification to address this issue, using K-means clustering to classify test scenes. The aim was to classify thirty-two original test scenes, previously used in a psychophysical investigation conducted by the authors, according to their susceptibility to sharpness and noisiness. The objective scene classification involved: 1) investigation of various scene descriptors, derived to describe properties that influence image quality, and 2) investigation of the degree of correlation between scene descriptors and scene susceptibility parameters. Scene descriptors that correlated with scene susceptibility in sharpness and in noisiness were assumed to be useful for objective scene classification. The work successfully derived three groups of scenes. The findings indicate that there is potential for tackling the problem of sharpness and noisiness scene susceptibility when modeling image quality. In addition, more extensive investigation of scene descriptors at global and local image levels would be required to achieve sufficient accuracy in objective scene classification.
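A minimal sketch of the clustering step described above is given below; the descriptor names are hypothetical placeholders, and it assumes scikit-learn is available.

```python
# Minimal sketch (illustrative feature names; not the authors' descriptor
# set): grouping test scenes with K-means on objective scene descriptors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_scenes(descriptors, n_clusters=3):
    """descriptors: (n_scenes, n_descriptors) array, e.g. busyness, contrast,
    lightness; returns one cluster label per scene."""
    scaled = StandardScaler().fit_transform(descriptors)
    return KMeans(n_clusters=n_clusters, n_init=10,
                  random_state=0).fit_predict(scaled)

# Hypothetical usage:
# labels = cluster_scenes(np.column_stack([busyness, contrast, lightness]))
```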
Proceedings of SPIE | 2009
Jae Young Park; Sophie Triantaphillidou; Ralph E. Jacobson
This paper describes an investigation of changes in image appearance when images are viewed at different sizes on a high-end LCD device. Two digital image capturing devices of different overall image quality were used to record identical natural scenes with a variety of pictorial contents. From each capturing device, a total of sixty-four captured scenes were selected, including architecture, nature, portraits, still and moving objects and artworks, under various illumination conditions and recorded noise levels. The test set included some images where camera shake was purposefully introduced. An achromatic version of the image set, containing only lightness information, was obtained by processing the captured images in CIELAB space. Rank order experiments were carried out to determine which image attribute(s) were most affected when the displayed image size was altered. These evaluations were carried out for both chromatic and achromatic versions of the stimuli. For the achromatic stimuli, attributes such as contrast, brightness, sharpness and noisiness were rank-ordered by the observers in terms of the degree of change. The same attributes, as well as hue and colourfulness, were investigated for the chromatic versions of the stimuli. Results showed that sharpness and contrast were the two attributes most affected by changes in displayed image size. The ranking of the remaining attributes varied with image content and illumination conditions. Further experiments were carried out to link original scene content to the attributes that changed most with changes in image size.
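The achromatic, lightness-only stimuli described above can be illustrated with the following sketch, which converts an image to CIELAB, discards the a* and b* channels and converts back; it assumes scikit-image and is not the authors' exact processing.

```python
# A minimal sketch (not the authors' pipeline) of deriving an achromatic,
# lightness-only version of an image via CIELAB.
import numpy as np
from skimage import io
from skimage.color import rgb2lab, lab2rgb

def achromatic_version(path):
    lab = rgb2lab(io.imread(path)[..., :3])
    lab[..., 1:] = 0.0                 # discard a* and b*, keep L* only
    return lab2rgb(lab)                # float RGB in [0, 1]

# grey = achromatic_version("scene_portrait.png")   # hypothetical file name
```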
Proceedings of SPIE | 2014
Jae Young Park; Sophie Triantaphillidou; Ralph E. Jacobson
An evaluation of the change in perceived image contrast with changes in displayed image size was carried out. This was achieved using data from four psychophysical investigations, which employed techniques to match the perceived contrast of displayed images of five different sizes. A total of twenty-four S-shaped polynomial functions were created and applied to every original test image to produce images with different contrast levels. The objective contrast associated with each function was evaluated from the gradient of the mid-section of the curve (gamma). The manipulation technique took into account published gamma differences that produce a just-noticeable difference (JND) in perceived contrast. The filters were designed to achieve approximately half a JND, whilst keeping the mean image luminance unaltered. The processed images were then used as test series in a contrast matching experiment. Sixty-four natural scenes, with varying scene content acquired under various illumination conditions, were selected from a larger set captured for the purpose. Results showed that the degree of change in contrast between images of different sizes varied with scene content but was not as important as equivalent perceived changes in sharpness [1].
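As a stand-in for the S-shaped polynomial tone curves used in the paper (whose coefficients are not given here), the sketch below applies a tanh-based S-curve pivoted on mid-grey and then re-centres the output so that the mean image luminance is unchanged; the strength parameter is only loosely analogous to the mid-section gradient (gamma).

```python
# Illustrative stand-in only: the paper uses purpose-built S-shaped polynomial
# tone curves; this sketch uses a tanh-based S-curve and restores the original
# mean luminance afterwards.
import numpy as np

def s_curve_contrast(img, strength):
    """img: float array scaled to [0, 1]; strength > 0 controls the steepness
    of the mid-section (larger values -> higher mid-tone contrast; values
    near 0 approach the identity curve)."""
    x = img - 0.5
    out = 0.5 + 0.5 * np.tanh(strength * x) / np.tanh(0.5 * strength)
    # re-centre so the mean image luminance is unaltered
    return np.clip(out - out.mean() + img.mean(), 0.0, 1.0)

# boosted = s_curve_contrast(test_image, 2.0)   # hypothetical test image
```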
Proceedings of SPIE | 2013
Sophie Triantaphillidou; John R. Jarvis; Gaurav Gupta
The aim of our research is to specify experimentally, and further model, spatial frequency response functions that quantify human sensitivity to spatial information in real complex images. Three visual response functions are measured: the isolated Contrast Sensitivity Function (iCSF), which describes the ability of the visual system to detect any spatial signal in a given spatial frequency octave in isolation; the contextual Contrast Sensitivity Function (cCSF), which describes the ability of the visual system to detect a spatial signal in a given octave in an image; and the contextual Visual Perception Function (VPF), which describes visual sensitivity to changes in suprathreshold contrast in an image. In this paper we present relevant background, along with our first attempts to derive experimentally and further model the VPF and CSFs. We examine the contrast detection and discrimination frameworks developed by Barten, which we find provide a sound starting point for our own modeling purposes. Progress is presented in the following areas: verification of the chosen model for detection and discrimination; choice of contrast metrics for defining contrast sensitivity; apparatus, laboratory set-up and imaging system characterization; stimuli acquisition and stimuli variations; spatial decomposition; and methodology for subjective tests. Initial iCSFs are presented and compared with ‘classical’ findings obtained using simple visual stimuli, as well as with more recent relevant work in the literature.
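The octave-based spatial decomposition mentioned above can be illustrated with a simple Fourier-domain band split; the sketch below uses ideal annular filters on a greyscale image and is not the papers' exact decomposition.

```python
# A minimal sketch (illustrative, not the papers' exact decomposition) of
# splitting a greyscale image into one-octave spatial frequency bands with
# ideal annular filters in the Fourier domain.
import numpy as np

def octave_bands(img, n_bands=6):
    """Return band-pass images whose radial frequency limits double from one
    band to the next (one octave per band). img must be a 2D float array."""
    f = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = img.shape
    y, x = np.indices((rows, cols))
    radius = np.hypot(y - rows / 2, x - cols / 2)
    nyquist = min(rows, cols) / 2
    edges = [nyquist / 2 ** k for k in range(n_bands, -1, -1)]  # low -> high
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (radius >= lo) & (radius < hi)
        bands.append(np.real(np.fft.ifft2(np.fft.ifftshift(f * mask))))
    return bands
```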
Optics and Photonics for Counterterrorism, Crime Fighting and Defence IX; and Optical Materials and Biomaterials in Security and Defence Systems Technology X | 2013
Sophie Triantaphillidou; John R. Jarvis; Gaurav Gupta; H. Rana
Shape, form and detail define image structure in our visual world. These attributes are dictated primarily by local variations in luminance contrast. Defining human contrast sensitivity (the threshold of contrast perception) and contrast discrimination (the ability to differentiate between variations in contrast) directly from real complex scenes is of utmost relevance to our understanding of spatial vision. The design and evaluation of imaging equipment used in both field operations and security applications require a full description of the strengths and limitations of human spatial vision. This paper is concerned with the measurement of the following four human contrast sensitivity functions directly from images of complex scenes: i) Isolated Contrast Sensitivity (detection) Function (iCSF); ii) Contextual Contrast Sensitivity (detection) Function (cCSF); iii) Isolated Visual Perception (discrimination) Function (iVPF); and iv) Contextual Visual Perception (discrimination) Function (cVPF). The paper also discusses the following areas: Barten’s mathematical framework for modeling contrast sensitivity and discrimination; spatial decomposition of image stimuli into a number of spatial frequency bands (octaves); the suitability of three different relevant image contrast metrics; experimental methodology for subjective tests; and stimulus conditions. We finally present and discuss initial findings for all four measured sensitivities.
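As one simple example of a band contrast measure (the paper compares three candidate metrics, which are not reproduced here), the sketch below computes an RMS-style contrast for a band-pass filtered image, normalised by the mean luminance of the unfiltered image.

```python
# One simple band contrast measure, shown only as an illustration: the
# standard deviation of a band-pass filtered image divided by the mean
# luminance of the unfiltered image.
import numpy as np

def band_rms_contrast(band_image, original_image):
    mean_luminance = original_image.mean()
    return band_image.std() / mean_luminance if mean_luminance > 0 else 0.0

# Used together with the octave decomposition sketched earlier (hypothetical):
# contrasts = [band_rms_contrast(b, img) for b in octave_bands(img)]
```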