Erhan Ekmekcioglu
Loughborough University
Publications
Featured research published by Erhan Ekmekcioglu.
IEEE Transactions on Image Processing | 2013
V. De Silva; Hemantha Kodikara Arachchi; Erhan Ekmekcioglu; Ahmet M. Kondoz
The quality assessment of impaired stereoscopic video is a key element in designing and deploying advanced immersive media distribution platforms. A widely accepted quality metric to measure impairments of stereoscopic video is, however, still to be developed. As a step toward finding a solution to this problem, this paper proposes a full reference stereoscopic video quality metric to measure the perceptual quality of compressed stereoscopic video. A comprehensive set of subjective experiments is performed with 14 different stereoscopic video sequences, which are encoded using both H.264 and High Efficiency Video Coding (HEVC) compliant video codecs, to develop a subjective test results database of 116 test stimuli. The subjective results are analyzed using statistical techniques to uncover different patterns of subjective scoring for symmetrically and asymmetrically encoded stereoscopic video. The subjective result database is subsequently used for training and validating a simple but effective stereoscopic video quality metric considering heuristics of binocular vision. The proposed metric performs significantly better than state-of-the-art stereoscopic image and video quality metrics in predicting the subjective scores. The proposed metric and the subjective result database will be made publicly available, and it is expected that the proposed metric and the subjective assessments will have important uses in advanced 3D media delivery systems.
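The abstract above does not disclose the metric's exact form, but the binocular-vision heuristic it mentions can be illustrated with a minimal sketch: pool per-view fidelity so that the better-quality view dominates, reflecting binocular suppression. The dominance weight `w = 0.7` and the use of PSNR as the per-view measure are assumptions for illustration, not the paper's actual metric.

```python
import numpy as np

def view_psnr(ref, dist, peak=255.0):
    """Full-reference PSNR of a single view."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def stereo_quality(ref_l, dist_l, ref_r, dist_r):
    """Binocular-suppression-inspired pooling: weight the higher-quality
    view more heavily, since it dominates the overall stereo percept."""
    ql = view_psnr(ref_l, dist_l)
    qr = view_psnr(ref_r, dist_r)
    hi, lo = max(ql, qr), min(ql, qr)
    w = 0.7  # assumed dominance weight for the better-quality view
    return w * hi + (1 - w) * lo
```

Any such pooled score would then be fitted against the subjective database described above.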
IEEE Journal of Selected Topics in Signal Processing | 2011
D.V.S.X. De Silva; Erhan Ekmekcioglu; W.A.C. Fernando; S. Worrall
This paper addresses the sensitivity of human vision to spatial depth variations in a 3-D video scene, seen on a stereoscopic display, based on an experimental derivation of a just noticeable depth difference (JNDD) model. The main target is to exploit the depth perception sensitivity of humans in suppressing unnecessary spatial depth details, hence reducing the transmission overhead allocated to depth maps. Based on the derived JNDD model, depth map sequences are preprocessed to suppress the depth details that are not perceivable by the viewers and to minimize the rendering artefacts that arise due to optical noise, where the optical noise is triggered by inaccuracies in the depth estimation process. Theoretical and experimental evidence is provided to illustrate that the proposed depth adaptive preprocessing filter does not alter the 3-D visual quality or the view synthesis quality for free-viewpoint video applications. Experimental results suggest that the bit rate for depth map coding can be reduced by up to 78% for depth maps captured with depth-range cameras and by up to 24% for depth maps estimated with computer vision algorithms, without affecting the 3-D visual quality or the arbitrary view synthesis quality.
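The core idea of JNDD-based preprocessing can be sketched as follows: wherever a depth value deviates from a smoothed reference by less than the just-noticeable threshold, the smoothed value is substituted, removing imperceptible detail that would otherwise cost bits. The constant threshold of 5 depth levels is a placeholder; the paper derives the JNDD experimentally.

```python
import numpy as np

def jndd_threshold():
    # Hypothetical constant JNDD (in 8-bit depth levels); the paper
    # derives this experimentally rather than using a fixed value.
    return 5.0

def suppress_subthreshold_depth(depth, smoothed):
    """Replace depth values with a smoothed version wherever the
    difference stays below the just noticeable depth difference,
    so only perceptually relevant depth detail is retained."""
    depth = depth.astype(np.float64)
    smoothed = smoothed.astype(np.float64)
    return np.where(np.abs(depth - smoothed) < jndd_threshold(),
                    smoothed, depth)
```

The smoothed map would typically come from a low-pass or median filter; large depth discontinuities survive untouched, which is what preserves view synthesis quality.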
IEEE Journal of Selected Topics in Signal Processing | 2011
Erhan Ekmekcioglu; Vladan Velisavljevic; S. Worrall
Depth map estimation is an important part of multi-view video coding and virtual view synthesis within free viewpoint video applications. However, computing an accurate depth map is a computationally complex process, which makes real-time implementation challenging. Alternatively, a simple estimation, though quick and promising for real-time processing, might result in inconsistent multi-view depth map sequences. To exploit this simplicity and to improve the quality of depth map estimation, we propose a novel content adaptive enhancement technique applied to the previously estimated multi-view depth map sequences. The enhancement method is locally adapted to edges, motion and the depth range of the scene to avoid blurring the synthesized views and to reduce the computational complexity. At the same time, and very importantly, the method enforces consistency across the spatial, temporal and inter-view dimensions of the depth maps, so that both the coding efficiency and the quality of the synthesized views are improved. We demonstrate these improvements in experiments where the enhancement method is applied to several multi-view test sequences and the obtained synthesized views are compared to views synthesized using other methods, in terms of both numerical and perceived visual quality.
3dtv-conference: the true vision - capture, transmission and display of 3d video | 2008
Erhan Ekmekcioglu; S. Worrall; Ahmet M. Kondoz
In this paper, the potential for improving the compression efficiency of multi-view video coding with depth information is explored. The proposed technique downsamples selected views and depth maps prior to encoding, with a bit-rate adaptive downscaling-ratio decision determining the scale for each. Colour and depth videos are considered separately due to their different characteristics and effects on synthesized free viewpoint videos. The inter-view references, if present, are downsampled to the same resolution as the input video to be coded. The results for several multi-view plus depth sequences indicate that using bit-rate adaptive mixed spatial resolution coding for both views and depth maps can achieve savings in bit rate, compared to full resolution and fixed depth-to-colour ratio multi-view coding, when the quality of synthesized viewpoints is considered. The computational complexity in the encoder is significantly reduced at the same time, since fewer blocks are coded, and hence fewer block mode decisions are carried out.
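A bit-rate adaptive downscaling decision could look like the following minimal sketch: at low target rates, stronger downsampling trades spatial resolution for fewer coded blocks. The thresholds and ratios are illustrative assumptions, not the paper's actual decision rule.

```python
def choose_downscale_ratio(bitrate_kbps):
    """Hypothetical bit-rate adaptive downscaling-ratio decision for a
    view or depth map: lower target rates get stronger downsampling,
    which also cuts encoder complexity (fewer blocks to code)."""
    if bitrate_kbps < 500:
        return 0.5   # half resolution in each dimension
    elif bitrate_kbps < 1500:
        return 0.75  # intermediate resolution
    return 1.0       # full resolution at high rates
```

In practice such a rule would be derived per sequence from rate-distortion measurements on the synthesized viewpoints.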
international conference on image processing | 2009
Erhan Ekmekcioglu; Marta Mrak; S. Worrall; Ahmet M. Kondoz
In this paper we propose a novel video object edge adaptive upsampling scheme for application in video-plus-depth and Multi-View plus Depth (MVD) video coding chains with reduced resolution. The proposed scheme improves the rate-distortion performance of reduced-resolution depth map coders by taking into account the rendering distortion induced in free-viewpoint videos. The inherent loss of fine detail due to downsampling, particularly at video object boundaries, causes significant visual artefacts in rendered free-viewpoint images. The proposed edge adaptive upsampling filter allows the conservation and better reconstruction of such critical object boundaries. Furthermore, the proposed scheme does not require the edge information to be communicated to the decoder, as the edge information used in the adaptive upsampling is derived from the reconstructed colour video. Test results show that gains of as much as 1.2 dB in free-viewpoint video quality can be achieved with the proposed method, compared to a scheme that uses the linear MPEG re-sampling filter. The proposed approach is suitable for video-plus-depth as well as MVD applications, in which it is critical to satisfy bandwidth constraints while maintaining high free-viewpoint image quality.
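The decoder-side principle described above can be sketched simply: upsample the depth map smoothly away from object boundaries, but switch to sharp pixel replication wherever the reconstructed colour image shows a strong gradient, so no edge side-information needs to be transmitted. The 3x3 box smoothing, the gradient measure and the edge threshold of 30 are illustrative assumptions, not the paper's filter design.

```python
import numpy as np

def upsample2x(d):
    """2x upsampling: pixel repetition, plus a 3x3 box-smoothed variant."""
    up = np.repeat(np.repeat(d, 2, axis=0), 2, axis=1).astype(np.float64)
    pad = np.pad(up, 1, mode='edge')
    h, w = up.shape
    sm = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return up, sm

def edge_adaptive_upsample(depth_low, colour_full):
    """Edge-adaptive depth upsampling: smooth interpolation away from
    object boundaries, sharp replication at boundaries detected from
    the reconstructed colour (no edge info sent to the decoder)."""
    rep, smooth = upsample2x(depth_low)
    c = colour_full.astype(np.float64)
    g = np.abs(np.diff(c, axis=0, prepend=c[:1, :]))
    g += np.abs(np.diff(c, axis=1, prepend=c[:, :1]))
    edges = g > 30.0  # assumed edge threshold on the colour gradient
    return np.where(edges, rep, smooth)
```

Keeping the switch on the colour gradient rather than the depth gradient is what lets encoder and decoder agree on the edge map without extra signalling.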
international conference on image processing | 2010
D.V.S.X. De Silva; W.A.C. Fernando; Gokce Nur; Erhan Ekmekcioglu; S. Worrall
The ability to provide a realistic perception of depth is the core added functionality of modern 3D video display systems. At present, there is no standard method to assess the perception of depth in 3D video. The existence of such methods would greatly advance 3D video research. This paper focuses on depth perception assessment for the colour plus depth representation of 3D video. We subjectively evaluate the depth perceived by users on an autostereoscopic display, and analyze its variation with the impairments introduced during the compression of the depth images. The variation of the subjective perception of depth is explained based on another evaluation that is carried out to identify the Just Noticeable Difference in Depth (JNDD) perceived by the subjects. The JNDD corresponds to the sensitivity of the observers to changes in depth in a 3D video scene. Even though only the effects of compression artifacts are considered in this paper, the proposed assessment technique, based on the JNDD values, can be used in any future depth perception assessment work.
IEEE Transactions on Circuits and Systems for Video Technology | 2009
Erhan Ekmekcioglu; S. Worrall; Ahmet M. Kondoz
In this letter, a new method is proposed for multiview depth map compression. The method skips the encoding of some parts of certain depth map viewpoints, and instead predicts the skipped parts by exploiting multiview correspondences together with a small set of transmitted flags, with the aim of substantially reducing the bit rate allocated to depth map sequences. Multiview correspondences are exploited for each skipped depth map frame by making use of the depth map frames belonging to neighboring views and captured at the same time instant. A prediction depth map frame is constructed block by block, selecting among several candidate predictors on the basis of free viewpoint quality, where the candidates are generated through implicit and explicit use of the 3-D scene geometry. Especially at lower bit rates, dropping higher temporal layers of certain depth map viewpoints and replacing them with corresponding predictors generated using the proposed multiview aided approach saves a great amount of bit rate for those depth map viewpoints. At the same time, the perceived quality of the reconstructed stereoscopic videos is maintained, which is demonstrated through a set of subjective tests.
2012 19th International Packet Video Workshop (PV) | 2012
V. De Silva; H. Kodikara Arachchi; Erhan Ekmekcioglu; Anil Fernando; Safak Dogan; Ahmet M. Kondoz; S. Sedef Savas
It is well known that when the two eyes are provided with two views of different resolutions, the overall perception is dominated by the high resolution view. This property, known as binocular suppression, is effectively used to reduce the bit rate required for stereoscopic video delivery, where one view of the stereo pair is encoded at a much lower quality than the other. There has been a significant amount of effort in the recent past to measure the just noticeable level of asymmetry between the two views, where asymmetry is achieved by encoding the views at two quantization levels. However, encoding introduces both blurring and blocking artifacts into the stereo views, which are perceived differently by the human visual system. Therefore, in this paper, we design a set of psycho-physical experiments to measure the just noticeable level of asymmetric blur at various spatial frequencies, luminance contrasts and orientations. The subjective results suggest that humans can tolerate a significant amount of asymmetry introduced by blur, and the level of tolerance is independent of the spatial frequency or luminance contrast. Furthermore, the results of this paper illustrate that when asymmetry is introduced by unequal quantization, the just noticeable level of asymmetry is driven by the blocking artifacts. In general, stereoscopic asymmetry introduced by way of asymmetric blurring is preferred over asymmetric compression. It is expected that the subjective results of this paper will have important use cases in the objective measurement of stereoscopic video quality and in the asymmetric compression and processing of stereoscopic video.
Multimedia Tools and Applications | 2016
Cagri Ozcinar; Erhan Ekmekcioglu; Janko Ćalić; Ahmet M. Kondoz
The increase in Internet bandwidth and the developments in 3D video technology have paved the way for the delivery of 3D Multi-View Video (MVV) over the Internet. However, large amounts of data and dynamic network conditions result in frequent network congestion, which may prevent video packets from being delivered on time. As a consequence, the 3D video experience may well be degraded unless content-aware precautionary mechanisms and adaptation methods are deployed. In this work, a novel adaptive MVV streaming method is introduced which addresses future generation 3D immersive MVV experiences with multi-view displays. When the user experiences network congestion, making it necessary to perform adaptation, the rate-distortion optimal set of views, pre-determined by the server, is truncated from the delivered MVV streams. In order to maintain a high Quality of Experience (QoE) during frequent network congestion, the proposed method involves the calculation of low-overhead additional metadata that is delivered to the client. The proposed adaptive 3D MVV streaming solution is tested using the MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) standard. Extensive objective and subjective evaluations are presented, showing that the proposed method provides significant quality enhancement under adverse network conditions.
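The server-side view selection described above can be illustrated with a minimal greedy sketch: keep the views whose retention buys the most distortion reduction per bit until the congested bit budget is exhausted. The tuple layout of `views` is hypothetical metadata for illustration, not the paper's actual format or its rate-distortion optimization.

```python
def select_views(views, budget):
    """Greedy view-truncation sketch for adaptive MVV streaming.
    views: list of (name, bits, distortion_if_dropped) tuples,
    i.e. hypothetical server-computed metadata per view.
    Keeps the views with the best distortion-per-bit ratio that
    fit within the available bit budget; the rest are truncated."""
    kept, spent = [], 0
    for name, bits, d_drop in sorted(views, key=lambda v: v[2] / v[1],
                                     reverse=True):
        if spent + bits <= budget:
            kept.append(name)
            spent += bits
    return kept
```

In a DASH setting, the kept set would map to the representations the client continues to request; the low-overhead metadata mentioned in the abstract would carry the per-view distortion figures.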
picture coding symposium | 2009
Erhan Ekmekcioglu; Vladan Velisavljevic; S. Worrall
We present a novel multi-view depth map enhancement method deployed as a post-processing of initially estimated depth maps, which are incoherent in the temporal and inter-view dimensions. The proposed method is based on edge and motion-adaptive median filtering and allows for an improved quality of virtual view synthesis. To enforce spatial, temporal and inter-view coherence in the multiview depth maps, the median filtering is applied to 4-dimensional windows that consist of the spatially neighboring depth map values taken at different viewpoints and time instants. These windows have locally adaptive shapes in the presence of edges or motion, to preserve sharpness and realistic rendering. We show that our enhancement method leads to a reduction of the coding bit rate required for representation of the depth maps, and also to a gain in the quality of synthesized views at an arbitrary virtual viewpoint. At the same time, the method incurs only a low additional computational complexity.
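The 4-dimensional filtering idea can be sketched as follows, with one simplification: instead of adapting the window shape at edges and in moving regions as the paper does, this sketch simply leaves those pixels untouched. The masks are assumed to be precomputed; everything else (window size, brute-force loops) is illustrative.

```python
import numpy as np

def adaptive_median(depth, edges, motion):
    """Sketch of edge/motion-adaptive 4-D median filtering.
    depth: array of shape (V, T, H, W) - views, time, spatial dims.
    edges, motion: boolean masks of the same shape.
    Each pixel is median-filtered over a window spanning neighboring
    views, time instants and spatial positions, enforcing inter-view
    and temporal coherence; pixels on edges or in moving regions are
    skipped so sharpness and realistic rendering are preserved."""
    V, T, H, W = depth.shape
    out = depth.astype(np.float64).copy()
    for v in range(V):
        for t in range(T):
            for y in range(1, H - 1):
                for x in range(1, W - 1):
                    if edges[v, t, y, x] or motion[v, t, y, x]:
                        continue
                    vs = slice(max(v - 1, 0), min(v + 2, V))
                    ts = slice(max(t - 1, 0), min(t + 2, T))
                    win = depth[vs, ts, y - 1:y + 2, x - 1:x + 2]
                    out[v, t, y, x] = np.median(win)
    return out
```

The median over the joint view/time/space window is what removes isolated inconsistencies (which also cost coding bits) while agreeing with the majority of corresponding depth values.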