
Publication


Featured research published by Deepak S. Turaga.


Signal Processing: Image Communication | 2004

No reference PSNR estimation for compressed pictures

Deepak S. Turaga; Yingwei Chen; Jorge E. Caviedes

Many user-end applications require an estimate of the quality of coded video or images without access to the original, i.e., a no-reference quality metric. Furthermore, in many such applications the compressed video bitstream is also unavailable. This paper describes methods that use the statistical properties of intra-coded video data to estimate the quantization error caused by compression, without accessing either the original pictures or the bitstream. We derive closed-form expressions for the quantization error in coding schemes based on the discrete cosine transform and block-based coding. A commonly used quality metric, the peak signal-to-noise ratio (PSNR), is subsequently computed from the estimated quantization error. Since quantization error is the most significant loss incurred in typical coding schemes, the estimated PSNR, or any PSNR-based quality metric, may be used to gauge the overall quality of the pictures.
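
As a rough illustration of the idea (not the paper's actual derivation), the sketch below assumes intra-coded DCT coefficients follow a Laplacian distribution with a known parameter and that a uniform quantizer with a known step size was used; it estimates the expected quantization error numerically and converts it to PSNR. In the paper the distribution parameter is estimated from the decoded data; here it is simply given, and all names and values are hypothetical.

```python
import numpy as np

def quantization_mse(lam, q, x_max=1024.0, n=200001):
    # Numerically estimate E[(X - Q(X))^2] for a Laplacian source with
    # parameter lam and a uniform quantizer with step size q; the paper
    # derives closed-form expressions for this quantity instead.
    x = np.linspace(-x_max, x_max, n)
    pdf = 0.5 * lam * np.exp(-lam * np.abs(x))
    xq = q * np.round(x / q)                  # reconstructed values
    return float(np.sum((x - xq) ** 2 * pdf) * (x[1] - x[0]))

def estimate_psnr(lam, q, peak=255.0):
    # PSNR implied by the estimated quantization error.
    return 10.0 * np.log10(peak ** 2 / quantization_mse(lam, q))

print(round(estimate_psnr(lam=0.1, q=16), 1))  # coarse quantizer example
```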


International Conference on Image Processing | 2003

Multiple description scalable coding using wavelet-based motion compensated temporal filtering

M. van der Schaar; Deepak S. Turaga

Packet delay jitter and loss due to network congestion pose significant challenges for designing and deploying delay-sensitive multimedia applications over best-effort packet-switched networks such as the Internet. Recent studies indicate that using multiple description coding (MDC) in conjunction with path or server diversity can mitigate these effects. However, the proposed MDC coding and streaming techniques are based on non-scalable coding. A key disadvantage of these techniques is that they can only improve the error resilience of the transmitted video; they cannot address two other important challenges associated with robust transmission of video over unreliable networks: adaptation to bandwidth variations and to receiving-device characteristics. In this paper, we present a new paradigm, referred to as multiple description scalable coding (MDSC), that addresses all of these challenges by combining the advantages of scalable coding and MDC. Unlike previous non-scalable MDC schemes, this framework enables tradeoffs between throughput, redundancy, and complexity at transmission time. Furthermore, we propose a novel MDSC scheme based on motion compensated temporal filtering (MCTF), termed multiple description motion compensated temporal filtering (MD-MCTF), which exploits the lifting implementation of temporal filtering used in current MCTF schemes. We show how tradeoffs between throughput, redundancy, and complexity can easily be achieved by adaptively partitioning the video into several descriptions after MCTF. Our simulations show that the proposed MD-MCTF framework outperforms existing MDC schemes over a variety of network conditions.
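
A minimal sketch of the partitioning idea, assuming one temporal decomposition has already produced lists of low-pass and high-pass frames: both descriptions carry the low-pass frames as redundancy, and the high-pass frames are alternated between them. The paper's partitioning is adaptive; this fixed even/odd split and the names used are illustrative only.

```python
def split_descriptions(lowpass, highpass):
    # Both descriptions carry the temporal low-pass frames (the
    # redundancy), while the high-pass frames are alternated between
    # them; losing one description then costs temporal detail rather
    # than whole segments of video.
    d1 = {"L": list(lowpass), "H": highpass[0::2]}
    d2 = {"L": list(lowpass), "H": highpass[1::2]}
    return d1, d2
```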


IEEE Transactions on Circuits and Systems for Video Technology | 2005

Complexity scalable motion compensated wavelet video encoding

Deepak S. Turaga; M. van der Schaar; Béatrice Pesquet-Popescu

We present a framework for the systematic analysis of video encoding complexity, measured in terms of the number of motion estimation (ME) computations, which we illustrate on motion compensated wavelet video coding schemes. We demonstrate the graceful complexity scalability of these schemes through modification of the spatiotemporal decomposition structure and the ME parameters, and through the use of spatiotemporal prediction. By varying these options we generate a wide range of rate-distortion-complexity (R-D-C) operating points for different sequences. Using our analytical framework we derive closed-form expressions for the number of ME computations for these different coding modes and show that they accurately capture the computational complexity independent of the underlying content characteristics. Our framework for complexity analysis can be combined with rate-distortion modeling to determine the encoding structure and parameters for optimal R-D-C tradeoffs.
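
The closed-form complexity expressions in the paper are mode-specific; the sketch below is only a back-of-the-envelope version under simplifying assumptions (full-search ME, one reference per filtered frame, half the remaining frames filtered at each temporal level). All names and defaults are hypothetical.

```python
def me_computations(width, height, n_frames, levels, search_range, block=16):
    # Counts absolute-difference operations for full-search ME in a
    # dyadic temporal decomposition: at each level, half the remaining
    # frames are motion-estimated over a (2R+1)^2 search window.
    blocks = (width // block) * (height // block)
    ops_per_block = ((2 * search_range + 1) ** 2) * block * block
    total, frames = 0, n_frames
    for _ in range(levels):
        frames //= 2                      # frames filtered at this level
        total += frames * blocks * ops_per_block
    return total

# e.g. CIF resolution, 64 frames, 4 temporal levels, search range 16:
# me_computations(352, 288, 64, 4, 16)
```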


IEEE Transactions on Circuits and Systems for Video Technology | 2002

Model-based error concealment for wireless video

Deepak S. Turaga; Tsuhan Chen

We introduce model-based schemes for error concealment of networked video. We build appearance models for specific objects in the scene and use these models to replenish any lost information. Because the models are designed specifically for the object, they capture the statistical variations in the object's appearance more effectively, leading to better error-concealment performance. We examine statistical modeling techniques from the literature, introduce a new efficient and accurate linear model for data representation called the mixture of principal components (MPC), and use these models for error concealment. We simulate lossy network conditions and show that these model-based concealment schemes outperform traditional concealment schemes across a variety of loss probabilities and bit rates for the coded video.
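
A simplified sketch of concealment with a set of eigenspaces, standing in for the paper's MPC model (whose training is more involved): each component is a PCA mean and basis, and a corrupted patch is reconstructed by the component that best explains its surviving pixels. Function and variable names are assumptions.

```python
import numpy as np

def fit_pca(X, k):
    # X: (n_samples, dim) training patches for one mixture component.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                     # mean and top-k basis (k, dim)

def conceal(patch, mask, components):
    # patch: (dim,) vector with corrupted entries; mask: boolean (dim,)
    # marking the pixels that survived. Reconstruct with each PCA
    # component and keep the one that best explains the known pixels.
    best, best_err = None, np.inf
    for mu, B in components:
        A = B[:, mask].T                             # (n_known, k)
        b = (patch - mu)[mask]
        c, *_ = np.linalg.lstsq(A, b, rcond=None)    # least-squares coeffs
        recon = mu + c @ B
        err = np.sum((recon[mask] - patch[mask]) ** 2)
        if err < best_err:
            best, best_err = recon, err
    return best
```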


IEEE Transactions on Circuits and Systems for Video Technology | 2001

Estimation and mode decision for spatially correlated motion sequences

Deepak S. Turaga; Tsuhan Chen

Several parts of the video-encoding process can be optimized for high quality, high compression ratio, and fast encoding speed. We focus on some of these encoder optimization issues. We first study motion estimation, a computationally intensive part of encoding, and propose efficient algorithms that exploit spatially correlated motion information. We extend our motion estimation algorithms to find motion vectors for sub-parts of a block, as allowed in the H.263 and MPEG-4 standards, and propose an efficient two-tier decision strategy for the one-versus-four motion-vector decision. We introduce a new measure of block similarity that can improve the inter-intra mode decision significantly. Furthermore, for certain motion-estimation strategies, this new measure can also improve the bit rate as a result of better motion estimation. Another issue that we address is the estimation of delta motion vectors in the H.263 and MPEG-4 standards. We have implemented these optimizations in an H.263 framework and the results are encouraging.
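
A minimal sketch of one way to exploit spatially correlated motion (not necessarily the paper's exact algorithm): start the search at the median of the neighboring blocks' motion vectors, as in H.263-style prediction, and refine with a small local search. Names and the refinement window are illustrative.

```python
import numpy as np

def sad(a, b):
    # Sum of absolute differences between two equally sized blocks.
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def predictive_search(cur, ref, bx, by, neighbors, block=16, refine=2):
    # neighbors: motion vectors of the left / top / top-right blocks.
    # The median predictor usually lands near the true motion, so only
    # a small refinement window is searched instead of a full search.
    px = int(np.median([v[0] for v in neighbors]))
    py = int(np.median([v[1] for v in neighbors]))
    blk = cur[by:by + block, bx:bx + block]
    best, best_cost = (px, py), np.inf
    for dy in range(py - refine, py + refine + 1):
        for dx in range(px - refine, px + refine + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and 0 <= x and y + block <= ref.shape[0] \
                    and x + block <= ref.shape[1]:
                cost = sad(blk, ref[y:y + block, x:x + block])
                if cost < best_cost:
                    best, best_cost = (dx, dy), cost
    return best
```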


International Conference on Acoustics, Speech, and Signal Processing | 2003

Unconstrained motion compensated temporal filtering (UMCTF) framework for wavelet video coding

M. van der Schaar; Deepak S. Turaga

The paper presents a new framework for adaptive temporal filtering in wavelet interframe codecs, called unconstrained motion compensated temporal filtering (UMCTF). This framework allows flexible and efficient temporal filtering by combining the best features of motion compensation, used in predictive coding, with the advantages of interframe scalable wavelet video coding schemes. UMCTF provides higher coding efficiency, improved visual quality, more flexible temporal and spatial scalability, and lower decoding delay than conventional MCTF schemes. Furthermore, UMCTF can also be employed in alternative open-loop scalable coding frameworks that use the DCT for texture coding.
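
For concreteness, here is a sketch of one temporal decomposition level using the Haar-style lifting structure that MCTF schemes build on; the motion-compensation operator is left as an identity placeholder. UMCTF's flexibility lies in how freely the predict and update steps and the reference choices can be varied, which this sketch only hints at.

```python
import numpy as np

def mctf_haar_lift(frames, mc=lambda f: f):
    # One temporal level of Haar-style lifting. `mc` stands in for
    # motion-compensated warping toward the frame being predicted;
    # the identity default ignores motion entirely.
    even, odd = frames[0::2], frames[1::2]
    H = [o - mc(e) for e, o in zip(even, odd)]        # predict step
    L = [e + 0.5 * mc(h) for e, h in zip(even, H)]    # update step
    # A predict-only variant (which UMCTF permits) would return `even`
    # as L, reducing decoding delay at some cost in energy compaction.
    return L, H
```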


International Conference on Acoustics, Speech, and Signal Processing | 2003

Content-adaptive filtering in the UMCTF framework

Deepak S. Turaga; M. van der Schaar

Unconstrained motion compensated temporal filtering (UMCTF) is a very general and flexible framework for temporal filtering. It supports the selection of many different filters and decomposition structures, enabling easy adaptation to video content, bandwidth variations, and complexity requirements, and, in conjunction with embedded coding, it can provide spatio-temporal-SNR scalability. In this paper we demonstrate the content-adaptive filter selection provided within the UMCTF framework. We show improvements in coding efficiency as well as in decoded visual quality using content-adaptive filters, at different granularities.


Signal Processing: Image Communication | 2005

Unconstrained motion compensated temporal filtering (UMCTF) for efficient and flexible interframe wavelet video coding

Deepak S. Turaga; M. van der Schaar; Yiannis Andreopoulos; Adrian Munteanu; Peter Schelkens

We introduce an efficient and flexible framework for temporal filtering in wavelet-based scalable video codecs called unconstrained motion compensated temporal filtering (UMCTF). UMCTF allows for the use of different filters and temporal decomposition structures through a set of controlling parameters that may be easily modified during the coding process, at different granularities and levels. The proposed framework enables the adaptation of the coding process to the video content, network and end-device characteristics, allows for enhanced scalability, content-adaptivity and reduced delay, while improving the coding efficiency as compared to state-of-the-art motion-compensated wavelet video coders. Additionally, a mechanism for the control of the distortion variation in video coding based on UMCTF employing only the predict step is proposed. The control mechanism is formulated by expressing the distortion in an arbitrary decoded frame, at any temporal level in the pyramid, as a function of the distortions in the reference frames at the same temporal level. All the different scenarios proposed in the paper are experimentally validated through a coding scheme that incorporates advanced features (such as rate-distortion optimized variable block-size multihypothesis prediction and overlapped block motion compensation). Experiments are carried out to determine the relative efficiency of different UMCTF instantiations, as well as to compare against the current state-of-the-art in video coding.
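
The distortion-control mechanism expresses a decoded frame's distortion as a function of the distortions of its reference frames at the same temporal level. The sketch below shows only the general shape of such a model; the additive form, the squared weights, and the independence assumption are illustrative guesses, not the paper's exact expressions.

```python
def decoded_distortion(d_residual, d_refs, weights):
    # Toy model: distortion of a decoded frame approximated as its own
    # residual-coding distortion plus contributions propagated from its
    # (possibly multihypothesis) reference frames. The squared weights
    # assume linear prediction with uncorrelated errors; that is an
    # illustrative assumption, not the paper's derivation.
    return d_residual + sum(w * w * d for w, d in zip(weights, d_refs))

# e.g. a bidirectionally predicted frame averaging two references:
# decoded_distortion(d_residual=2.0, d_refs=[4.0, 4.0], weights=[0.5, 0.5])
```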


IEEE Transactions on Multimedia | 2001

Classification based mode decisions for video over networks

Deepak S. Turaga; Tsuhan Chen

A video encoder has to make many mode decisions in order to achieve the goals of low bit rate, high quality, and fast implementation. We propose a general classification-based approach to making such mode decisions accurately and efficiently. We first illustrate the approach using the intra-inter coding mode decision. We then focus on the decision to skip or code a frame for rate control of video over networks. Using the classification-based approach we show improvement in the rate-distortion sense. We then extend the work to scalable video coding in choosing between scalability modes, and examine the performance of our approach over error-prone networks using simulated packet losses.
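
As a toy illustration of a classification-based mode decision (the paper's classifiers are trained offline; the features and weights here are invented), a linear rule over two per-macroblock features might look like this:

```python
def intra_inter_decision(sad_inter, var_intra, w=(1.0, -1.0), bias=0.0):
    # Linear rule over two per-macroblock features: the SAD of the best
    # inter prediction and the block variance (a proxy for intra cost).
    # With the default weights this reduces to the classic heuristic
    # "code intra when the block variance is below the inter SAD".
    # Weights and bias would be learned from labeled training examples.
    score = w[0] * sad_inter + w[1] * var_intra + bias
    return "intra" if score > 0 else "inter"
```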


International Conference on Image Processing | 2002

Face recognition using mixtures of principal components

Deepak S. Turaga; Tsuhan Chen

We introduce an efficient statistical modeling technique called the mixture of principal components (MPC). This model is a linear extension of traditional principal component analysis (PCA) and uses a mixture of eigenspaces to capture data variations. We use the model to capture face appearance variations due to pose and lighting changes. We show that this more efficient modeling leads to improved face recognition performance.
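
A minimal sketch of recognition with a mixture of eigenspaces, assuming each class (or pose/lighting cluster) has a precomputed PCA mean and orthonormal basis: assign the face to the model with the smallest reconstruction error. Names and shapes are assumptions.

```python
import numpy as np

def classify_face(x, class_models):
    # x: (dim,) vectorized face image; class_models: {label: (mu, B)}
    # with mean mu of shape (dim,) and basis B of shape (k, dim) whose
    # rows are orthonormal. Pick the eigenspace that reconstructs x best.
    best, best_err = None, np.inf
    for label, (mu, B) in class_models.items():
        c = B @ (x - mu)                       # projection coefficients
        err = np.sum((x - (mu + c @ B)) ** 2)  # residual energy
        if err < best_err:
            best, best_err = label, err
    return best
```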

Collaboration


Dive into Deepak S. Turaga's collaborations.

Top Co-Authors

Mohamed Alkanhal, Carnegie Mellon University
David Casasent, Carnegie Mellon University