Safak Dogan
University of Surrey
Publications
Featured research published by Safak Dogan.
IEEE Journal of Selected Topics in Signal Processing | 2009
Chaminda T. E. R. Hewage; S. Worrall; Safak Dogan; Stephane Villette; Ahmet M. Kondoz
In the near future, many conventional video applications are likely to be replaced by immersive video to provide a sense of “being there.” This transition is facilitated by the recent advancement of 3D capture, coding, transmission, and display technologies. Stereoscopic video is the simplest form of 3D video available in the literature. “Color plus depth map” based stereoscopic video has attracted significant attention, as it can reduce the storage and bandwidth requirements for the transmission of stereoscopic content over communication channels. However, quality assessment of coded video sequences can currently only be performed reliably using expensive and inconvenient subjective tests. To enable researchers to optimize 3D video systems in a timely fashion, it is essential that reliable objective measures are found. This paper investigates the correlation between subjective and objective evaluation of color plus depth video. The investigation is conducted for different compression ratios and different video sequences. Transmission over Internet protocol (IP) is also investigated. Subjective tests are performed to determine the image quality and depth perception of a range of differently coded video sequences, with packet loss rates ranging from 0% to 20%. The subjective results are used to determine more accurate objective quality assessment metrics for 3D color plus depth video.
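As a rough illustration of what an objective measure for color plus depth video can look like, the sketch below combines per-component PSNR scores of the colour texture and the depth map into a single weighted score. This is not the metric fitted in the paper; the weight `w_colour` and the function names are assumptions for demonstration only.

```python
# Illustrative composite objective score for "colour plus depth" stereoscopic
# video: per-component PSNR combined with an assumed weighting. The paper
# derives its metrics from subjective test data; this is only a sketch.
import numpy as np

def psnr(ref: np.ndarray, dec: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference and a decoded frame."""
    mse = np.mean((ref.astype(np.float64) - dec.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def composite_3d_score(colour_ref, colour_dec, depth_ref, depth_dec,
                       w_colour: float = 0.7) -> float:
    """Weighted colour/depth fidelity; the 0.7 weight is a placeholder."""
    return (w_colour * psnr(colour_ref, colour_dec)
            + (1.0 - w_colour) * psnr(depth_ref, depth_dec))
```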
IEEE Transactions on Circuits and Systems for Video Technology | 2002
Safak Dogan; Akin Cellatoglu; Mustafa Uyguroglu; Abdul H. Sadka; Ahmet M. Kondoz
A novel fully comprehensive mobile video communications system is proposed. The system exploits the useful rate management features of video transcoders and combines them with error resilience for the transmission of coded video streams over general packet radio service (GPRS) mobile-access networks. The error-resilient video transcoding operation takes place at a centralized point, referred to as a video proxy, which provides the necessary output transmission rates with the required amount of robustness. With the proposed algorithm, error resilience can be added to an already compressed video stream at an intermediate stage at the edge of two or more different networks through two resilience schemes, namely the adaptive intra refresh (AIR) and feedback control signaling (FCS) methods. Both resilience tools impose an output rate increase, which the proposed technique also prevents. Thus, the presented scheme delivers robust video outputs at near target transmission rates that require only the same number of GPRS timeslots as non-resilient schemes. Moreover, maximum robustness is achieved by combining the two resilience algorithms at the video proxy. Extensive computer simulations demonstrate the effectiveness of the proposed system.
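A minimal sketch of an adaptive intra refresh (AIR)-style decision is shown below, assuming a per-macroblock error-sensitivity map (for example, accumulated motion activity) and a per-frame intra budget that keeps the rate overhead bounded. The thresholds and data model are illustrative, not the transcoder's actual logic.

```python
# Adaptive intra refresh sketch: force intra coding only for the most
# error-sensitive macroblocks, capped by a per-frame budget so the transcoder
# output stays near its target rate. Sensitivity values are assumed inputs.
def select_intra_macroblocks(sensitivity: list[float], budget: int) -> set[int]:
    """Return indices of the `budget` most error-sensitive macroblocks."""
    ranked = sorted(range(len(sensitivity)),
                    key=lambda i: sensitivity[i], reverse=True)
    return set(ranked[:budget])

# Example: refresh the 3 most sensitive of 8 macroblocks in this frame.
refresh = select_intra_macroblocks(
    [0.1, 0.9, 0.3, 0.7, 0.2, 0.8, 0.05, 0.4], budget=3)
```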
international conference on multimedia and expo | 2008
Chaminda T. E. R. Hewage; S. Worrall; Safak Dogan; Ahmet M. Kondoz
Stereoscopic video is one of the simplest forms of multi-view video, which can be easily adapted for communication applications. Much current research is based on colour and depth map stereoscopic video, due to its reduced bandwidth requirements and backward compatibility. Existing immersive media research is more focused on application processing than on the transfer of immersive content over communication channels. Since video delivered over packet networks suffers from missing frames caused by packet loss, this paper proposes a frame concealment method for colour and depth map based stereoscopic video. The proposed method exploits the motion correlation of colour and depth map image sequences. The colour motion information is reused for prediction during depth map coding. The redundant motion information is then used to conceal transmission errors at the decoder. The experimental results show that the proposed frame concealment scheme performs better than applying error concealment to the colour and depth map video separately under a range of packet error conditions.
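The core idea of reusing colour-view motion information for depth concealment can be sketched as below: a lost depth-map block is replaced by a motion-compensated copy from the previous depth frame, using the co-located colour motion vector. The block size, clipping behaviour and data layout are assumptions for illustration only.

```python
# Simplified depth-map block concealment using a colour-view motion vector:
# copy the motion-compensated block from the previous depth frame.
import numpy as np

def conceal_depth_block(prev_depth: np.ndarray, y: int, x: int,
                        mv: tuple[int, int], block: int = 16) -> np.ndarray:
    """Motion-compensated copy from the previous depth frame (mv = (dy, dx))."""
    h, w = prev_depth.shape
    ry = min(max(y + mv[0], 0), h - block)   # clip the reference position
    rx = min(max(x + mv[1], 0), w - block)   # to stay inside the frame
    return prev_depth[ry:ry + block, rx:rx + block].copy()
```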
Signal Processing-image Communication | 2009
H. Kodikara Arachchi; X. Perramon; Safak Dogan; Ahmet M. Kondoz
Data encryption is one of the key information security technologies used for safeguarding multimedia content from unauthorised access and manipulation in end-to-end delivery and access chains. This technology, combined with the appropriate cryptographic methods, effectively protects the content against malicious attacks, preserving its authenticity as well as its integrity. While encryption-based security ensures authorised consumption of the multimedia content, content adaptation technologies aim primarily to enable wider dissemination of the content across diverse networks, devices and users, thus enriching user satisfaction and experience of the delivered content within a given set of usage environment constraints. Traditionally, protected content can only be adapted at trusted adaptation engines residing between the source and end-users, since it has to be fully decrypted before the necessary adaptation operations can be performed. The drawback of such a process is that it significantly limits the availability and flexibility of adaptation engines applicable for adapting protected content on the fly. Thus, this paper proposes a novel scalable H.264/advanced video coding (AVC)-compatible video encryption technique, which is also transparent to adaptation engines in an end-to-end video delivery scenario. The proposed technology relies on keeping the syntax elements required for performing the adaptation operations clear (i.e., not encrypted). The effectiveness of the proposed technique has been successfully verified in scenarios where both conventional joint scalable video model (JSVM) bit stream extraction and random packet dropping mechanisms are used.
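The principle of adaptation-transparent selective encryption can be illustrated as follows: only the slice payload is encrypted, while the NAL unit header (and any scalability information an extractor needs) is left in the clear. The header length, packet structure and function names below are assumptions, not the paper's exact bitstream handling.

```python
# Selective encryption sketch: keep adaptation-relevant syntax readable,
# encrypt only the payload with AES in counter mode.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_nal_payload(nal_unit: bytes, key: bytes, nonce: bytes,
                        clear_header_len: int = 4) -> bytes:
    """Leave the first `clear_header_len` bytes clear; encrypt the rest."""
    header, payload = nal_unit[:clear_header_len], nal_unit[clear_header_len:]
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return header + enc.update(payload) + enc.finalize()

# An adaptation engine can inspect the clear header (e.g., layer identifiers)
# and drop or forward the packet without ever decrypting the payload.
key, nonce = os.urandom(16), os.urandom(16)
```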
3dtv-conference: the true vision - capture, transmission and display of 3d video | 2010
Gokce Nur; Safak Dogan; H. Kodikara Arachchi; Ahmet M. Kondoz
Despite the burgeoning advances in 3D video technologies, 3D video adaptation is still in its infancy. 3D video is multidimensional in nature, and to support the growth of 3D adaptation technologies, the key factors that characterise this nature should be thoroughly studied. Spatial resolution is one of these factors. In particular, the effect of different spatial resolutions of depth maps on video quality and depth perception should be investigated, as should the influence of content-related factors on the perception of those resolutions. Therefore, evaluation studies are conducted using spatially scaled depth maps encoded at different bit rates and their original colour texture counterparts. The depth maps utilised in the evaluations present different spatial and motion complexity characteristics. The results demonstrate that as the spatial resolution of the depth maps increases, both video quality and depth perception improve. The amount of improvement is highly correlated with the structural and motion complexity characteristics of the depth maps.
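A minimal sketch of how spatially scaled depth maps for such an evaluation might be prepared is given below: nearest-neighbour down- and up-sampling with NumPy. The scale factors are illustrative; the study additionally encodes the scaled maps at different bit rates.

```python
# Emulate a reduced-resolution depth map: decimate by `factor`, then restore
# the original resolution by nearest-neighbour replication.
import numpy as np

def rescale_depth(depth: np.ndarray, factor: int) -> np.ndarray:
    """Downsample the depth map by `factor`, then upsample back."""
    low = depth[::factor, ::factor]                      # decimate
    up = np.repeat(np.repeat(low, factor, axis=0), factor, axis=1)
    return up[:depth.shape[0], :depth.shape[1]]          # crop to original size
```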
2012 19th International Packet Video Workshop (PV) | 2012
V. De Silva; H. Kodikara Arachchi; Erhan Ekmekcioglu; Anil Fernando; Safak Dogan; Ahmet M. Kondoz; S. Sedef Savas
It is well known that when the two eyes are presented with two views of different resolutions, the overall perception is dominated by the high-resolution view. This property, known as binocular suppression, is effectively used to reduce the bit rate required for stereoscopic video delivery, where one view of the stereo pair is encoded at a much lower quality than the other. There has been a significant amount of effort in the recent past to measure the just noticeable level of asymmetry between the two views, where asymmetry is achieved by encoding the views at two quantization levels. However, encoding introduces both blurring and blocking artifacts into the stereo views, which are perceived differently by the human visual system. Therefore, in this paper, we design a set of psycho-physical experiments to measure the just noticeable level of asymmetric blur at various spatial frequencies, luminance contrasts and orientations. The subjective results suggest that humans can tolerate a significant amount of asymmetry introduced by blur, and that the level of tolerance is independent of the spatial frequency or luminance contrast. Furthermore, the results of this paper illustrate that when asymmetry is introduced by unequal quantization, the just noticeable level of asymmetry is driven by the blocking artifacts. In general, stereoscopic asymmetry introduced by asymmetric blurring is preferred over asymmetric compression. It is expected that the subjective results of this paper will have important use cases in the objective measurement of stereoscopic video quality and in the asymmetric compression and processing of stereoscopic video.
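A sketch of how an asymmetric-blur stereo stimulus for such a psycho-physical test might be generated is shown below: a sinusoidal grating of a given spatial frequency, contrast and orientation, with Gaussian blur applied to one view only. The parameter values are illustrative, not those used in the paper.

```python
# Generate a stereo stimulus pair with asymmetric blur: one sharp grating and
# one Gaussian-blurred copy of the same grating.
import numpy as np
from scipy.ndimage import gaussian_filter

def grating(size=256, cycles=8.0, contrast=0.5, orientation_deg=0.0):
    """Sinusoidal luminance grating in [0, 1]."""
    y, x = np.mgrid[0:size, 0:size] / size
    theta = np.deg2rad(orientation_deg)
    phase = 2 * np.pi * cycles * (x * np.cos(theta) + y * np.sin(theta))
    return 0.5 + 0.5 * contrast * np.sin(phase)

left = grating()                                   # sharp view
right = gaussian_filter(grating(), sigma=2.0)      # blurred view (asymmetry)
```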
international symposium on circuits and systems | 2009
Chaminda T. E. R. Hewage; Z. Ahmad; S. Worrall; Safak Dogan; W.A.C. Fernando; Ahmet M. Kondoz
In this paper, an Unequal Error Protection (UEP) scheme for the transmission of 3-D (three-dimensional) video over a WiMAX communication channel is proposed. The colour plus depth map stereoscopic video is coded with backward compatibility using a Scalable Video Coding (SVC) architecture, where users with conventional video decoders/receivers can receive the conventional 2-D (two-dimensional) video stream, whereas users with SVC decoders/receivers and the necessary 3-D video displays may render 3-D video. The proposed error protection scheme is based on the perceptual importance of the coded 3-D video components. The UEP method allocates more protection to the colour video packets than to the depth map packets in order to receive good quality 2-D/3-D video. The protection levels are assigned by allocating differentiated transmission power to colour and depth map video packets during transmission. On-the-fly power allocation is based on the estimated distortion of the colour image sequence. The objective and perceptual quality evaluations show that the proposed UEP scheme improves the quality of 2-D video while achieving pleasing quality for 3-D viewers.
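The unequal-power idea can be sketched as a split of a total transmit-power budget between colour and depth-map packets in proportion to their estimated distortion impact, so that colour, which dominates 2-D/3-D quality, receives more protection. The proportional rule and the numbers below are assumptions, not the paper's allocation algorithm.

```python
# Toy unequal error protection via differentiated transmission power:
# colour packets receive power in proportion to their distortion impact.
def allocate_power(total_power: float, colour_distortion: float,
                   depth_distortion: float) -> tuple[float, float]:
    """Split the power budget between colour and depth packets."""
    weight = colour_distortion / (colour_distortion + depth_distortion)
    return total_power * weight, total_power * (1.0 - weight)

colour_p, depth_p = allocate_power(10.0, colour_distortion=3.0,
                                   depth_distortion=1.0)   # 7.5 W vs 2.5 W
```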
IEEE MultiMedia | 2010
Anna Carreras; Jaime Delgado; Eva Rodríguez; Vitor Barbosa; Maria Teresa Andrade; Hemantha Kodikara Arachchi; Safak Dogan; Ahmet M. Kondoz
This article presents a scalable and modular platform for context-aware adaptation of multimedia content that is governed by digital rights management (DRM). This platform adopts several new approaches, such as (1) combining the use of ontologies and low-level context to drive the adaptation decision process, (2) verifying and enforcing usage rights within the adaptation operations, and (3) incorporating multifaceted adaptation tools to provide a wide range of on-the-fly and on-demand adaptation operations to suit various dynamic requirements.
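A hypothetical sketch of the decision flow described above is given below: an adaptation operation is chosen from low-level context (for example, the terminal's screen width) and executed only if the governing licence permits it. The licence representation and operation names are invented for illustration; the platform's actual decision process is ontology-driven.

```python
# Rights-gated, context-aware adaptation decision (illustrative only).
def decide_adaptation(context: dict, licence_permissions: set[str]) -> str | None:
    """Pick an operation from context, then enforce usage rights."""
    wanted = "downscale" if context.get("screen_width", 1920) < 1280 else None
    if wanted is None:
        return None                                    # no adaptation needed
    return wanted if wanted in licence_permissions else None   # rights check

op = decide_adaptation({"screen_width": 800}, {"downscale", "transcode"})
```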
IEEE Wireless Communications | 2014
Asimakis Lykourgiotis; Konstantinos Birkos; Tasos Dagiuklas; Erhan Ekmekcioglu; Safak Dogan; Yasin Yildiz; Ilias Politis; Guven Orkun Tanik; Burak Demirtas; Ahmet M. Kondoz; Stavros A. Kotsopoulos
This article proposes a converged broadcast and broadband platform to deliver 3D media to both mobile and fixed users with a guaranteed minimum quality of experience (QoE). The work presented offers an ideal business model for operators that provide both digital video broadcast and Internet Protocol (IP)-based media services. To that end, DVB and peer-to-peer Internet technologies are combined to provide sufficient resources for supporting high-bandwidth, high-quality 3D multiview video. The motivations behind combining these technologies are outlined with an emphasis on their complementary characteristics. In addition, the overall design of the proposed architecture is presented, focusing on the protocols that are exploited to achieve the interworking of the underlying technologies. Moreover, innovative key techniques for supporting both fixed and mobile users in an efficient manner are introduced.
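Purely as an illustration of the hybrid-delivery idea, the sketch below fetches a media layer from the broadcast (DVB) path when that path carries it, and otherwise falls back to peer-to-peer retrieval over IP. The function names and the segment/layer model are invented; the article defines the actual interworking protocols.

```python
# Hybrid broadcast/broadband segment retrieval (illustrative sketch).
def fetch_segment(segment_id: int, layer: str, broadcast_layers: set[str],
                  from_broadcast, from_p2p):
    """Prefer the broadcast path for layers it carries; otherwise use P2P."""
    if layer in broadcast_layers:
        return from_broadcast(segment_id, layer)   # guaranteed-QoE broadcast path
    return from_p2p(segment_id, layer)             # broadband/P2P path for extra views
```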
IEEE Journal of Selected Topics in Signal Processing | 2017
Chathura Galkandage; Janko Calic; Safak Dogan; Jean-Yves Guillemaut
Stereoscopic imaging is becoming increasingly popular. However, to ensure the best quality of experience, there is a need to develop more robust and accurate objective metrics for stereoscopic content quality assessment. Existing stereoscopic image and video metrics are either extensions of conventional 2-D metrics (with added depth or disparity information) or are based on relatively simple perceptual models. Consequently, they tend to lack the accuracy and robustness required for stereoscopic content quality assessment. This paper introduces full-reference stereoscopic image and video quality metrics based on a human visual system (HVS) model incorporating important physiological findings on binocular vision. The proposed approach is based on the following three contributions. First, it introduces a novel HVS model extending previous models to include the phenomena of binocular suppression and recurrent excitation. Second, an image quality metric based on the novel HVS model is proposed. Finally, an optimized temporal pooling strategy is introduced to extend the metric to the video domain. Both image and video quality metrics are obtained via a training procedure to establish a relationship between subjective scores and objective measures of the HVS model. The metrics are evaluated using publicly available stereoscopic image/video databases as well as a new stereoscopic video database. An extensive experimental evaluation demonstrates the robustness of the proposed quality metrics. This indicates a considerable improvement with respect to the state-of-the-art with average correlations with subjective scores of 0.86 for the proposed stereoscopic image metric and 0.89 and 0.91 for the proposed stereoscopic video metrics.
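One common way to extend a per-frame metric to video is temporal pooling; a minimal Minkowski-pooling sketch is shown below. The exponent is a tunable parameter, and the paper optimises its own pooling strategy rather than necessarily using this exact form.

```python
# Minkowski temporal pooling of per-frame scores into a single video score.
import numpy as np

def minkowski_pool(frame_scores, p: float = 2.0) -> float:
    """Larger p gives frames with larger scores more influence on the result
    (useful, e.g., when pooling per-frame distortion values)."""
    scores = np.asarray(frame_scores, dtype=np.float64)
    return float(np.mean(scores ** p) ** (1.0 / p))
```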