M.R. Civanlar
Koç University
Publications
Featured research published by M.R. Civanlar.
IEEE Transactions on Circuits and Systems for Video Technology | 2007
Engin Kurutepe; M.R. Civanlar; A.M. Tekalp
We present a novel client-driven multiview video streaming system that allows a user to watch 3D video interactively with significantly reduced bandwidth requirements by transmitting a small number of views selected according to his/her head position. The user's head position is tracked and predicted into the future to dynamically select the views that best match the user's current viewing angle. Prediction of future head positions is needed so that views matching the predicted head positions can be prefetched, accounting for delays due to network transport and stream switching. The system allocates more bandwidth to the selected views in order to render the current viewing angle. Highly compressed, lower quality versions of some other views are also prefetched for concealment in case the current user viewpoint differs from the predicted viewpoint. An objective measure based on the abruptness of the head movements and delays in the system is introduced to determine the number of additional lower quality views to be prefetched. The proposed system uses multiview coding (MVC) and scalable video coding (SVC) concepts together to obtain improved compression efficiency while providing flexibility in bandwidth allocation to the selected views. Rate-distortion performance of the proposed system is demonstrated under different experimental conditions.
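As a rough illustration of the view-selection idea in this abstract, the Python sketch below (with illustrative camera positions, delays, and thresholds that are not taken from the paper) extrapolates the tracked head position ahead by the prefetch lookahead, requests the nearest views at high quality, and prefetches a few extra low-quality neighbours, with the count growing as head motion becomes more abrupt.

    # Hypothetical sketch of the view-selection idea; values are illustrative assumptions.
    def predict_head_position(samples, lookahead_s):
        """Linear extrapolation from the last two (time, x) head-tracker samples."""
        (t0, x0), (t1, x1) = samples[-2], samples[-1]
        velocity = (x1 - x0) / (t1 - t0)
        return x1 + velocity * lookahead_s

    def select_views(camera_positions, predicted_x, num_high, num_low):
        """Return (high_quality_views, low_quality_views) ranked by distance
        from the predicted viewpoint."""
        ranked = sorted(range(len(camera_positions)),
                        key=lambda i: abs(camera_positions[i] - predicted_x))
        return ranked[:num_high], ranked[num_high:num_high + num_low]

    # Example: 8 equally spaced cameras, 200 ms of total transport + switching delay.
    cameras = [i * 0.1 for i in range(8)]          # camera x-positions in metres
    head_samples = [(0.00, 0.32), (0.05, 0.35)]    # (time in s, head x-position in m)
    predicted = predict_head_position(head_samples, lookahead_s=0.2)

    # More abrupt head motion (or longer delay) -> prefetch more low-quality views.
    abruptness = abs(head_samples[-1][1] - head_samples[-2][1]) / 0.05
    extra_views = min(3, 1 + int(abruptness / 0.5))
    high, low = select_views(cameras, predicted, num_high=2, num_low=extra_views)
    print("high quality:", high, "low quality:", low)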
signal processing and communications applications conference | 2006
Istemi Ekin Akkus; M.R. Civanlar; Oznur Ozkasap
A new peer-to-peer architecture for multipoint video conferencing that targets end points with low-bandwidth network connections (single video in and out) is presented. It enables end points to create a multipoint conference without any networking or computing resources beyond those needed for a point-to-point conference. The new architecture is based on layered video coding with two layers at the end points. It allows each conference participant to see any other participant at any given time under all multipoint configurations of any number of users, with the caveat that some participants may have to receive only the base-layer video. Layered encoding techniques usable within this architecture are described. A protocol implementing the new approach has been developed and simulated, and its performance is analyzed.
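To make the layered idea concrete, here is a minimal Python sketch (not the paper's protocol; the layer rates are assumed) of assigning the base and enhancement layers to peers according to their downlink capacity, including the base-layer-only case mentioned in the abstract.

    BASE_KBPS = 128          # assumed base-layer rate
    ENHANCEMENT_KBPS = 384   # assumed enhancement-layer rate

    def assign_layers(downlink_kbps):
        """Map each peer's available downlink to the set of layers it receives."""
        plan = {}
        for peer, capacity in downlink_kbps.items():
            if capacity >= BASE_KBPS + ENHANCEMENT_KBPS:
                plan[peer] = ["base", "enhancement"]
            elif capacity >= BASE_KBPS:
                plan[peer] = ["base"]        # the caveat above: base layer only
            else:
                plan[peer] = []              # link too slow to receive this video
        return plan

    print(assign_layers({"alice": 600, "bob": 200, "carol": 90}))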
international conference on image processing | 2005
Emrah Akyol; A.M. Tekalp; M.R. Civanlar
Multiple description video coding mitigates the effects of packet losses introduced by congestion and/or bit errors. In this paper, we propose a novel multiple description video coding technique, based on fully scalable wavelet video coding, which allows post-encoding adaptation of the number of descriptions, the redundancy level of each description, and the bitrate of each description by manipulating the encoded bitstream. We demonstrate that the proposed method provides excellent coding efficiency, outperforming most other multiple description methods proposed so far. We also provide experimental results, obtained from an NS-2 network simulation of a peer-to-peer video streaming system, showing that varying the number of descriptions according to network conditions is superior to using a fixed number of descriptions.
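The following Python sketch illustrates, under assumptions that are not the paper's algorithm, how descriptions can be formed after encoding from an already-scalable bitstream: the first few layers are repeated in every description to control redundancy, and the remaining layers are striped across descriptions, so the number of descriptions, redundancy, and per-description bitrate can all be changed without re-encoding.

    def make_descriptions(layers, num_descriptions, redundancy):
        """layers: list of encoded layer chunks, most important first."""
        shared = layers[:redundancy]              # duplicated -> controls redundancy
        rest = layers[redundancy:]
        descriptions = [list(shared) for _ in range(num_descriptions)]
        for i, layer in enumerate(rest):          # round-robin striping
            descriptions[i % num_descriptions].append(layer)
        return descriptions

    # Example with 6 layers, 2 descriptions, and 1 shared (redundant) base layer.
    layers = ["L{}".format(i) for i in range(6)]
    for d in make_descriptions(layers, num_descriptions=2, redundancy=1):
        print(d)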
IEEE Transactions on Multimedia | 2007
Tanir Ozcelebi; A.M. Tekalp; M.R. Civanlar
We propose a new pre-roll delay-distortion optimization (DDO) framework that allows determination of the minimum pre-roll delay and distortion while ensuring continuous playback for on-demand content-adaptive video streaming over limited-bitrate networks. The input video is first divided into temporal segments, each of which is assigned a relevance weight and a maximum distortion level; together these form the relevance-distortion policy, which may be specified by the user. The system then encodes the input video according to the specified relevance-distortion policy, whereby the optimal spatial and temporal resolutions and quantization parameters, also called encoding parameters, are selected for each temporal segment. The optimal encoding parameters are computed using a novel multi-objective optimization formulation, where a relevance-weighted distortion measure and the pre-roll delay are jointly minimized under maximum allowable buffer size, continuous playback, and maximum allowable distortion constraints. The performance of the system has been demonstrated for on-demand streaming of soccer videos over a very low-bitrate network using AVC/H.264 encoding, with substantial improvement in the weighted distortion and no increase in pre-roll delay.
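As a toy illustration of the per-segment trade-off (an assumption, not the paper's exact formulation), the Python sketch below picks one encoding operating point per temporal segment by minimizing relevance-weighted distortion plus a Lagrangian penalty on rate; the rate penalty is what, at a given channel rate, bounds the buffering and hence the pre-roll delay.

    def choose_operating_points(segments, lam):
        """segments: list of (relevance_weight, [(rate_kbps, distortion), ...])."""
        chosen = []
        for weight, candidates in segments:
            # Lagrangian cost: weighted distortion plus lam * rate.
            best = min(candidates, key=lambda rd: weight * rd[1] + lam * rd[0])
            chosen.append(best)
        return chosen

    segments = [
        (1.0, [(200, 40.0), (400, 25.0), (800, 15.0)]),   # low-relevance segment
        (3.0, [(200, 45.0), (400, 28.0), (800, 16.0)]),   # high-relevance segment
    ]
    for lam in (0.01, 0.05):
        print(lam, choose_operating_points(segments, lam))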
signal processing and communications applications conference | 2006
Selen Pehlivan; Anil Aksay; Cagdas Bilen; Gozde Bozdagi Akar; M.R. Civanlar
Today, stereoscopic and multi-view video are among the popular research areas in the multimedia world. In this study, we have designed a platform consisting of stereo-view capturing, real-time transmission, and display. At the display stage, end users view video in 3D by using polarized glasses. Stereoscopic video is compressed efficiently using stereoscopic video coding techniques and streamed with real-time protocols on the sender side. The receiver can view the content, built from multiple channels, as mono or stereo depending on its display and bandwidth capabilities. The entire system is built by modifying available open source systems whenever possible.
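A minimal Python sketch of the receiver-side decision mentioned above (channel names and rates are illustrative assumptions): subscribe to one or both view channels depending on whether the display supports stereo and whether the downlink can carry both views.

    def choose_channels(stereo_display, downlink_kbps, per_view_kbps=800):
        if stereo_display and downlink_kbps >= 2 * per_view_kbps:
            return ["left_view", "right_view"]   # full stereoscopic playback
        return ["left_view"]                     # fall back to mono

    print(choose_channels(stereo_display=True, downlink_kbps=1200))   # mono fallback
    print(choose_channels(stereo_display=True, downlink_kbps=2000))   # stereo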
international conference on image processing | 2005
Tanir Ozcelebi; M.R. Civanlar; A.M. Tekalp
We present a new channel-adaptive stream switching solution for variable-bitrate video transmission, where the receiver buffer status is used for making switching decisions and the instantaneous transmission rate is determined by means of explicit channel feedback or TCP-friendly rate control. The receiver buffer status is used to select from a set of pre-encoded bitstreams while avoiding buffer underflows and overflows. A pre-roll delay to buffer data at the receiving side is necessary in order to compensate for variations in the channel throughput and the encoding bitrate. For each stream, content-dependent video coding parameters are chosen by means of dynamic programming such that the maximum overall video quality and minimum pre-roll delay are achieved for a finite number of transmission rates. Experimental results show that buffer violations are successfully avoided using the proposed stream switching framework, as opposed to a regular non-adaptive streaming case.
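A minimal Python sketch of a buffer-driven switching rule (the watermarks and scaling factors are assumptions, not the paper's policy): given the feedback or TFRC estimate of the channel rate and the current receiver buffer level, pick the highest pre-encoded bitrate that the channel and buffer state can support.

    def pick_stream(bitrates_kbps, channel_kbps, buffer_s, low_s=2.0, high_s=8.0):
        bitrates = sorted(bitrates_kbps)
        if buffer_s < low_s:                 # close to underflow: be conservative
            target = channel_kbps * 0.75
        elif buffer_s > high_s:              # close to overflow: can afford more
            target = channel_kbps * 1.25
        else:
            target = channel_kbps
        eligible = [b for b in bitrates if b <= target]
        return eligible[-1] if eligible else bitrates[0]

    print(pick_stream([250, 500, 1000, 2000], channel_kbps=900, buffer_s=1.5))  # 500
    print(pick_stream([250, 500, 1000, 2000], channel_kbps=900, buffer_s=9.0))  # 1000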
signal processing and communications applications conference | 2005
Engin Kurutepe; M.R. Civanlar; A.M. Tekalp
The contemporary television and video experience is not interactive, and users have little or no choice over their viewing angle in the scenes they watch. There is demand for a truly interactive 3-D experience that would allow users to view scenes through virtual cameras positioned by their head and eye locations, as in real life. However, among other issues, the bandwidth required to transmit very large image-based rendering (IBR) representations of the scene to end users remains an unsolved problem. In this paper, we propose a novel networking scheme that enables users to automatically stream only the parts of the light field representation that will be used to render the current viewpoint. The proposed system also incorporates prediction of future views to prefetch streams that are likely to be needed in the near future as the viewpoint changes over time.
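A hedged illustration in Python (the stream naming and linear camera layout are assumptions, not the paper's protocol) of requesting only the light field camera streams needed for the current viewpoint plus the neighbours the predicted viewpoint may need next:

    def streams_to_fetch(num_cameras, current_cam, predicted_cam, margin=1):
        lo = max(0, min(current_cam, predicted_cam) - margin)
        hi = min(num_cameras - 1, max(current_cam, predicted_cam) + margin)
        return ["cam{}".format(i) for i in range(lo, hi + 1)]

    # The viewer renders from camera 4 and is predicted to drift toward camera 6.
    print(streams_to_fetch(num_cameras=16, current_cam=4, predicted_cam=6))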
signal processing and communications applications conference | 2004
E. Akyol; A.M. Tekalp; M.R. Civanlar
We propose an adaptive motion-compensated temporal filtering (MCTF) structure to provide efficient temporal scalability within the H.264/AVC video compression standard. MCTF has traditionally been considered within fully scalable wavelet video coders. However, motion-compensated simple 5/3 lifted temporal wavelet filtering suffers at scene changes as well as in occlusion regions. We note that the bi-directional motion compensation mode in the H.264 standard is well equipped with state-of-the-art adaptive features such as adaptive block sizes, overlapped block motion compensation, mode switching among forward, backward, and bidirectional prediction, and an in-loop deblocking filter. Hence, we propose a GOP structure to implement block-based adaptive MCTF within the H.264 syntax using stored B-pictures, similar to motion-compensated 5/3 wavelet filtering. We provide experimental results comparing the proposed codec with other scalable wavelet video coders that use MCTF. It is also possible to employ the proposed adaptive MCTF structure within fully scalable wavelet video codecs.
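For reference, the plain 5/3 lifting structure along time that the abstract refers to looks like the Python sketch below, applied to scalar "frames" with motion compensation and proper boundary extension omitted so the predict and update steps are easy to see.

    def lift_53(frames):
        """One temporal decomposition level: returns (low_band, high_band)."""
        n = len(frames)
        get = lambda i: frames[min(max(i, 0), n - 1)]     # clamp indices at the ends
        # Predict: high band = odd frame minus the average of its even neighbours.
        high = [get(2 * k + 1) - 0.5 * (get(2 * k) + get(2 * k + 2))
                for k in range(n // 2)]
        geth = lambda k: high[min(max(k, 0), len(high) - 1)] if high else 0.0
        # Update: low band = even frame plus a quarter of the neighbouring highs.
        low = [get(2 * k) + 0.25 * (geth(k - 1) + geth(k))
               for k in range((n + 1) // 2)]
        return low, high

    print(lift_53([10.0, 12.0, 11.0, 15.0, 14.0, 13.0]))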
global communications conference | 2005
T. Ozcelebi; M.O. Sunay; A.M. Tekalp; M.R. Civanlar
In wireless packet transmission systems, it is crucial to provide fairness in service while maximizing user utility and channel throughput. This is possible via intelligent allocation of system resources. For real-time video applications, a pre-roll time for prefetching data into the client buffer is needed in order to compensate for channel variations that cause client buffer under/overflows, thus facilitating continuous playout of the video. In this paper, a novel multiple-objective optimized (MOO) opportunistic multiple access scheme for optimal scheduling of users in a 1xEV-DO (IS-856) system is presented. At each time slot, the user that experiences the best compromise between the least buffer occupancy and the best channel condition is served. Experiments conducted in ITU Pedestrian A and Vehicular B environments show that our algorithm treats each user fairly while its channel throughput performance is very close to the ideal case, in which the user with the best channel characteristics is always served with no buffer constraints.
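As a toy illustration of the per-slot compromise described above (the weighting is an assumption, not the paper's MOO formulation), the Python sketch below serves the user with the best combination of a strong channel (high requestable rate) and an urgent, nearly empty playout buffer.

    def pick_user(users, alpha=0.5):
        """users: dict name -> (requested_rate_kbps, buffer_s); highest score served."""
        max_rate = max(r for r, _ in users.values())
        max_buf = max(b for _, b in users.values()) or 1.0
        def score(u):
            rate, buf = users[u]
            return alpha * (rate / max_rate) + (1 - alpha) * (1 - buf / max_buf)
        return max(users, key=score)

    slot_state = {"u1": (2400, 6.0), "u2": (600, 0.5), "u3": (1200, 3.0)}
    print(pick_user(slot_state))   # u2's near-empty buffer outweighs its weaker channel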
international conference on multimedia and expo | 2006
Selen Pehlivan; Anil Aksay; Cagdas Bilen; Gozde Bozdagi Akar; M.R. Civanlar
Today, stereoscopic and multi-view video are among the popular research areas in the multimedia world. In this study, we have designed and built a platform consisting of stereo-view capturing, real-time transmission, and display. At the display stage, end users view video in 3D by using polarized glasses. Multi-view video is compressed efficiently using multi-view video coding techniques and streamed using standard real-time transport protocols. The entire system is built by modifying available open source systems whenever possible. The receiver can view the content, built from multiple channels, as mono or stereo depending on its display and bandwidth capabilities.