Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Michael Gallant is active.

Publication


Featured research published by Michael Gallant.


IEEE Transactions on Circuits and Systems for Video Technology | 1998

H.263+: video coding at low bit rates

Guy Côté; Berna Erol; Michael Gallant; Faouzi Kossentini

We discuss the ITU-T H.263+ (or H.263 Version 2) low-bit-rate video coding standard. We first briefly describe the H.263 standard, including its optional modes. We then address the 12 new negotiable modes of H.263+. Next, we present experimental results for these modes, based on our public-domain implementation (http://spmg.ece.ubc.ca). Tradeoffs among compression performance, complexity, and memory requirements for the H.263+ optional modes are discussed. Finally, results for mode combinations are presented.


IEEE Transactions on Circuits and Systems for Video Technology | 2001

Rate-distortion optimized layered coding with unequal error protection for robust Internet video

Michael Gallant; Faouzi Kossentini

We present an effective framework for increasing the error resilience of low bit-rate video communications over an error-prone packet-switched network. Our framework is based on the principle of layered coding with transport prioritization. We introduce a rate-distortion optimized mode-selection algorithm for our prioritized layered framework. This algorithm is based on a joint source/channel-coding approach and trades off source coding efficiency for increased bitstream error resilience to optimize the video coding mode selection within and across layers. The algorithm considers the channel conditions as well as the error recovery and concealment capabilities of the channel codec and source decoder, respectively. Important framework parameters, including the packetization scheme, decoder error concealment method, and channel codec error-protection strength, are considered. The effects of mismatch between the parameters employed by the encoder and the actual channel conditions are also considered. Results are presented for a wide range of packet loss rates in order to illustrate the benefits of the proposed framework.
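
For illustration only, the following sketch shows one way a loss-aware Lagrangian mode decision of this kind could be organized; the mode set, rate/distortion figures, loss rate, and Lagrange multiplier are invented for the example and are not taken from the paper.

```python
# Hypothetical sketch of rate-distortion optimized mode selection under packet loss.
# Mode names, rates, distortions, LAMBDA, and LOSS_RATE are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Mode:
    name: str            # e.g. "INTRA", "INTER", "SKIP"
    rate_bits: float     # bits needed to code the macroblock in this mode
    dist_no_loss: float  # distortion if the packet arrives intact
    dist_loss: float     # distortion after concealment if the packet is lost

LAMBDA = 50.0      # Lagrange multiplier trading rate against distortion
LOSS_RATE = 0.10   # assumed channel packet-loss probability after error protection

def expected_distortion(m: Mode, loss_rate: float) -> float:
    """Expected distortion combining the loss-free and concealed cases."""
    return (1.0 - loss_rate) * m.dist_no_loss + loss_rate * m.dist_loss

def select_mode(modes: list[Mode], loss_rate: float = LOSS_RATE) -> Mode:
    """Pick the mode minimizing J = E[D] + lambda * R for one macroblock."""
    return min(modes, key=lambda m: expected_distortion(m, loss_rate) + LAMBDA * m.rate_bits)

if __name__ == "__main__":
    candidates = [
        Mode("SKIP",  rate_bits=1,   dist_no_loss=220.0, dist_loss=260.0),
        Mode("INTER", rate_bits=60,  dist_no_loss=90.0,  dist_loss=400.0),
        Mode("INTRA", rate_bits=180, dist_no_loss=80.0,  dist_loss=150.0),
    ]
    print("selected mode:", select_mode(candidates).name)
```

As the loss rate grows, the expected-distortion term increasingly favors modes that conceal well (typically intra coding), which is the efficiency-versus-resilience tradeoff described in the abstract.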


IEEE Transactions on Image Processing | 1999

An efficient computation-constrained block-based motion estimation algorithm for low bit rate video coding

Michael Gallant; Guy Côté; Faouzi Kossentini

We present an efficient computation-constrained block-based motion vector estimation algorithm for low bit rate video coding that offers good tradeoffs between motion estimation distortion and the number of computations. A reliable predictor determines the search origin. An efficient search pattern exploits structural constraints within the motion field. A flexible cost measure used to terminate the search allows simultaneous control of the motion estimation distortion and the computational cost. Experimental results demonstrate the viability of the proposed algorithm in low bit rate video coding applications, achieving essentially the same levels of rate-distortion performance and subjective quality as the full search algorithm when used in the UBC H.263+ video coding reference software. However, the proposed motion estimation algorithm provides substantially higher encoding speed as well as graceful computational degradation capabilities.
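
As a rough illustration of a predictor-initialized, early-terminating block search of this flavor, consider the sketch below; the small-diamond pattern, stopping threshold, and computation budget are assumptions for the example, not the paper's exact algorithm.

```python
# Illustrative sketch of a predictor-initialized block-matching search with early termination.
import numpy as np

def sad(cur_blk: np.ndarray, ref: np.ndarray, x: int, y: int, n: int = 16) -> float:
    """Sum of absolute differences between the current block and a candidate in the reference frame."""
    h, w = ref.shape
    if x < 0 or y < 0 or x + n > w or y + n > h:
        return float("inf")
    return float(np.abs(cur_blk - ref[y:y + n, x:x + n]).sum())

def motion_search(cur_blk, ref, bx, by, pred_mv=(0, 0), max_checks=32, stop_thresh=512.0):
    """Small-diamond search starting from a predicted motion vector.

    Stops early when the cost drops below stop_thresh (distortion control) or the
    budget of max_checks candidate evaluations is exhausted (computation control).
    """
    mvx, mvy = pred_mv
    best = sad(cur_blk, ref, bx + mvx, by + mvy)
    checks = 1
    while checks < max_checks and best > stop_thresh:
        improved = False
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # small diamond pattern
            cost = sad(cur_blk, ref, bx + mvx + dx, by + mvy + dy)
            checks += 1
            if cost < best:
                best, mvx, mvy = cost, mvx + dx, mvy + dy
                improved = True
                break
        if not improved:      # local minimum of the search pattern: terminate
            break
    return (mvx, mvy), best, checks

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 255, size=(64, 64)).astype(np.int32)
    blk = ref[16:32, 16:32].copy()
    print(motion_search(blk, ref, 16, 16, pred_mv=(0, 0)))
```

Tightening max_checks degrades the match quality gradually rather than abruptly, which is the kind of graceful computational degradation the abstract refers to.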


IEEE Transactions on Circuits and Systems for Video Technology | 2007

Affine Motion Prediction Based on Translational Motion Vectors

Roman C. Kordasiewicz; Michael Gallant; Shahram Shirani

In video coding standards such as H.26x and MPEG-x, much of the compression comes from motion compensated prediction (MCP). Translational motion vectors (MVs) poorly model complex motion, and thus coders using polynomial or affine MVs have been proposed in the past. In this paper, we demonstrate a novel affine predictor stage which can be easily incorporated into current codecs, greatly increasing MCP quality. If used passively to generate the final prediction, gains of up to 0.7 and 1.6 dB were easily realized for the "mobile" and "flower garden" video sequences, respectively. In addition, when the translational MVs are refined, gains of up to 0.98 and 1.88 dB were realized for the "mobile" and "flower garden" video sequences, respectively.
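
A minimal sketch of one way an affine model can be derived from block-level translational MVs is shown below; the least-squares fit over block centres is an assumption for illustration, not necessarily the predictor stage used in the paper.

```python
# Hedged sketch: least-squares fit of a 6-parameter affine motion model to block-level
# translational motion vectors, then evaluation of the affine MV at any pixel position.
import numpy as np

def fit_affine(points: np.ndarray, mvs: np.ndarray) -> np.ndarray:
    """points: (N, 2) block-centre coordinates; mvs: (N, 2) translational MVs.

    Solves for parameters (a1..a6) of
        mv_x = a1*x + a2*y + a3,   mv_y = a4*x + a5*y + a6
    in the least-squares sense.
    """
    ones = np.ones((points.shape[0], 1))
    A = np.hstack([points, ones])               # N x 3 design matrix
    px, _, _, _ = np.linalg.lstsq(A, mvs[:, 0], rcond=None)
    py, _, _, _ = np.linalg.lstsq(A, mvs[:, 1], rcond=None)
    return np.concatenate([px, py])             # (a1, a2, a3, a4, a5, a6)

def affine_mv(params: np.ndarray, x: float, y: float) -> tuple[float, float]:
    """Motion vector predicted by the affine model at pixel (x, y)."""
    a1, a2, a3, a4, a5, a6 = params
    return a1 * x + a2 * y + a3, a4 * x + a5 * y + a6

if __name__ == "__main__":
    centres = np.array([[8, 8], [24, 8], [8, 24], [24, 24]], dtype=float)
    mvs = np.array([[1.0, 0.5], [1.5, 0.5], [1.0, 1.0], [1.5, 1.0]])
    p = fit_affine(centres, mvs)
    print(affine_mv(p, 16.0, 16.0))
```

Because the inputs are just the translational MVs a conventional encoder already computes, such a stage can sit on top of an existing codec, which is the ease of incorporation the abstract highlights.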


Data Compression Conference | 1998

The H.263+ video coding standard: complexity and performance

Berna Erol; Michael Gallant; Guy Côté; Faouzi Kossentini

The emerging ITU-T H.263+ low bit-rate video coding standard is version 2 of the draft international standard ITU-T H.263. In this paper, we discuss this emerging video coding standard and present compression performance results based on our public domain implementation of H.263+.


IEEE Global Conference on Signal and Information Processing | 2014

Study of the effects of stalling events on the quality of experience of mobile streaming videos

Deepti Ghadiyaram; Alan C. Bovik; Hojatollah Yeganeh; Roman C. Kordasiewicz; Michael Gallant

We have created a new mobile video database that models distortions caused by network impairments. In particular, we simulate stalling events and startup delays in over-the-top (OTT) mobile streaming videos. We describe the way we simulated diverse stalling events to create a corpus of distorted videos and the human study we conducted to obtain subjective scores. We also analyze the ratings to understand the impact of several factors that influence the quality of experience (QoE). To the best of our knowledge, ours is the most comprehensive and diverse study on the effects of stalling events on QoE. We are making the database publicly available [1] in order to help advance state-of-the-art research on user-centric mobile network planning and management.


International Conference on Image Processing | 1999

Efficient scalable DCT-based video coding at low bit rates

Michael Gallant; Faouzi Kossentini

It is well known that flexibility and error resilience are significantly improved by employing a scalable bit stream. The major drawback of multi-layered representations within a motion-compensated (MC) discrete cosine transform (DCT) based framework is the increase in bit rate as compared to a single-layered representation having the same frequency, spatial, and temporal resolution as the highest layer of the multi-layered representation. Using rate-distortion (RD) optimization techniques, we improve the compression efficiency of an MC-DCT-based SNR and spatially scalable video coding framework. We first show how RD optimization techniques can be applied independently for each layer. We then extend the framework to consider coding decisions jointly across layers.
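
In notation of our own (not the paper's), the per-layer and joint Lagrangian costs alluded to above can be written as:

```latex
% Independent optimization: each layer l minimizes its own Lagrangian cost.
\[
  J_l = D_l + \lambda_l R_l
\]
% Joint optimization: coding decisions are chosen to minimize a cost coupling all L layers.
\[
  J = \sum_{l=1}^{L} w_l D_l + \lambda \sum_{l=1}^{L} R_l
\]
```

Here D_l and R_l are the distortion and rate of layer l, and the weights w_l and the multiplier lambda trade layer priority against total rate; optimizing the joint cost lets a decision in the base layer account for its effect on the enhancement layers.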


International Conference on Image Processing | 2014

Delivery quality score model for Internet video

Hojatollah Yeganeh; Roman C. Kordasiewicz; Michael Gallant; Deepti Ghadiyaram; Alan C. Bovik

The vast majority of today's Internet video services are consumed over-the-top (OTT) via reliable streaming (HTTP over TCP), where the primary noticeable delivery-related impairments are startup delay and stalling. In this paper, we introduce an objective model, called the delivery quality score (DQS) model, to predict users' QoE in the presence of such impairments. We describe a large subjective study that we carried out to tune and validate this model. Our experiments demonstrate that the DQS model correlates highly with the subjective data and that it outperforms other emerging models.
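
The abstract does not give the functional form of DQS; purely to illustrate the kind of inputs such a delivery-impairment model consumes, here is a toy scoring function with invented weights. It is not the DQS model.

```python
# Toy illustration only: maps startup delay and stall events to a 0-100 score.
# The weights and functional form are invented; this is NOT the DQS model.
import math

def toy_delivery_score(startup_delay_s: float, stall_durations_s: list[float]) -> float:
    """Score degrades with startup delay, total stall time, and number of stalls."""
    total_stall = sum(stall_durations_s)
    penalty = 4.0 * startup_delay_s + 8.0 * total_stall + 5.0 * len(stall_durations_s)
    return 100.0 * math.exp(-penalty / 100.0)

if __name__ == "__main__":
    print(toy_delivery_score(startup_delay_s=2.0, stall_durations_s=[1.5, 3.0]))
```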


IEEE Transactions on Multimedia | 2007

Encoding of Affine Motion Vectors

Roman C. Kordasiewicz; Michael Gallant; Shahram Shirani

An affine motion model provides better motion representation than a translational motion model. It is therefore a good candidate for advanced video compression algorithms requiring higher compression efficiency than current algorithms. One disadvantage of the affine motion model is the increased number of motion vector parameters and, therefore, an increased motion vector bit rate. We develop and analyze several simulation-based approaches to entropy coding of orthonormalized affine motion vector (AMV) coefficients, considering various context types and coders. In our work, we expand the traditional idea of a context type by introducing four new context types. We compare our method of context-type and coder selection with context quantization. The best of our context-type and coder solutions produces 4% to 15% average AMV bit-rate reductions over the original VLC approach. For more difficult content, AMV bit-rate reductions of up to 26% are reported.
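
As an illustration of context-dependent entropy coding of quantized coefficients, the sketch below estimates code length with one adaptive frequency table per context; the context definition, symbol alphabet, and update rule are assumptions for the example, not the coders studied in the paper.

```python
# Illustrative sketch of context-adaptive code-length estimation for quantized
# affine MV coefficients. Contexts and the alphabet are assumptions for the example.
import math

ALPHABET = range(-8, 9)   # assumed range of quantized coefficient values

class AdaptiveContextModel:
    """One adaptive frequency table per context; estimates code length in bits."""
    def __init__(self):
        self.counts = {}

    def _table(self, ctx):
        if ctx not in self.counts:
            self.counts[ctx] = {s: 1 for s in ALPHABET}   # uniform prior (Laplace smoothing)
        return self.counts[ctx]

    def code_length(self, ctx, symbol) -> float:
        table = self._table(ctx)
        return -math.log2(table[symbol] / sum(table.values()))

    def update(self, ctx, symbol) -> None:
        self._table(ctx)[symbol] += 1

def coefficient_context(index: int, prev_coeff: int) -> tuple:
    """A simple context: which affine coefficient is coded and whether the previous one was zero."""
    return (index, prev_coeff == 0)

if __name__ == "__main__":
    model = AdaptiveContextModel()
    coeffs = [3, 0, -1, 2, 0, 1]          # toy quantized AMV coefficients
    bits, prev = 0.0, 0
    for i, c in enumerate(coeffs):
        ctx = coefficient_context(i, prev)
        bits += model.code_length(ctx, c)
        model.update(ctx, c)
        prev = c
    print(f"estimated cost: {bits:.2f} bits")
```

Comparing the estimated bit cost across different context definitions and coders is one simple way to select among them, in the spirit of the selection procedure the abstract describes.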


IEEE Transactions on Circuits and Systems for Video Technology | 2007

Modeling Quantization of Affine Motion Vector Coefficients

Roman C. Kordasiewicz; Michael Gallant; Shahram Shirani

Affine motion compensated prediction (AMCP) is an advanced tool which may be incorporated into future video compression standards, and numerous coders already use AMCP. However, the increased number of motion vector components is a disadvantage, and quantizing these components can have significant consequences for the difference macroblocks (DMBs). This paper examines the quantization of affine motion vector (AMV) coefficients by deriving a quadratic relationship between DMB energy and AMV quantization step size. Mathematical derivations and simulations are provided, including two comparisons with the literature demonstrating the benefits of this work. In the first comparison, the quantization of orthogonalized AMVs is compared with quantization guided by the novel quadratic model. In the second comparison, Nokia's MVC coder is modified to use the quadratic model to generate quantization step sizes at various granularities (sequence, frame, and quarter-frame), demonstrating up to 8.7% bit rate reductions. Model-driven AMV quantization step size choices are shown to be very close to, and even to outperform, limited exhaustive-search AMV quantization step size choices, at a quarter of the computational cost.
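
A minimal sketch of how a quadratic energy-versus-step-size model could be fit and used to pick an AMV quantization step size under an energy budget appears below; the probe measurements, budget, and candidate step sizes are invented for the example, not taken from the paper.

```python
# Sketch of using a quadratic model of difference-macroblock (DMB) energy versus AMV
# quantization step size to pick a step size under an energy budget. Values are illustrative.
import numpy as np

def fit_quadratic(steps: np.ndarray, energies: np.ndarray) -> np.ndarray:
    """Least-squares fit of E(q) ~ a*q^2 + b*q + c from a few measured (q, E) pairs."""
    return np.polyfit(steps, energies, deg=2)   # returns (a, b, c)

def max_step_within_budget(coeffs: np.ndarray, budget: float, candidates) -> float:
    """Largest candidate step size whose modeled DMB energy stays below the budget."""
    a, b, c = coeffs
    feasible = [q for q in candidates if a * q * q + b * q + c <= budget]
    return max(feasible) if feasible else min(candidates)

if __name__ == "__main__":
    # A few hypothetical probe measurements of DMB energy at small step sizes.
    probe_q = np.array([0.25, 0.5, 1.0])
    probe_E = np.array([110.0, 160.0, 340.0])
    model = fit_quadratic(probe_q, probe_E)
    q = max_step_within_budget(model, budget=500.0, candidates=[0.25, 0.5, 1.0, 2.0, 4.0])
    print("chosen AMV quantization step size:", q)
```

Evaluating the closed-form model at a handful of candidate step sizes is far cheaper than exhaustively re-encoding at each one, which is the kind of computational saving the abstract reports.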

Collaboration


Dive into Michael Gallant's collaborations.

Top Co-Authors

Faouzi Kossentini

University of British Columbia

Guy Côté

University of British Columbia

Alan C. Bovik

University of Texas at Austin

Deepti Ghadiyaram

University of Texas at Austin
