
Publication


Featured research published by Colin Doutre.


IEEE Consumer Electronics Magazine | 2012

HEVC: The New Gold Standard for Video Compression: How Does HEVC Compare with H.264/AVC?

Mahsa T. Pourazad; Colin Doutre; Maryam Azimi; Panos Nasiopoulos

Digital video has become ubiquitous in our everyday lives; everywhere we look, there are devices that can display, capture, and transmit video. Recent advances in technology have made it possible to capture and display video material at ultrahigh-definition (UHD) resolution. Yet current Internet and broadcasting networks do not have sufficient capacity to transmit large amounts of HD content, let alone UHD. The need for an improved transmission system is even more pronounced in the mobile sector because of the introduction of lightweight HD resolutions (such as 720p) for mobile applications. The limitations of current technologies prompted the International Organization for Standardization/International Electrotechnical Commission Moving Picture Experts Group (MPEG) and the International Telecommunication Union-Telecommunication Standardization Sector Video Coding Experts Group (VCEG) to establish the Joint Collaborative Team on Video Coding (JCT-VC), with the objective of developing a new high-performance video coding standard.


IEEE Transactions on Circuits and Systems for Video Technology | 2009

Color Correction Preprocessing for Multiview Video Coding

Colin Doutre; Panos Nasiopoulos

In multiview video, a number of cameras capture the same scene from different viewpoints. There can be significant variations in the color of views captured with different cameras, which negatively affects performance when the videos are compressed with inter-view prediction. In this letter, a method is proposed for correcting the color of multiview video sets as a preprocessing step to compression. Unlike previous work, where one of the captured views is used as the color reference, we correct all views to match the average color of the set of views. Block-based disparity estimation is used to find matching points between all views in the video set, and the average color is calculated for these matching points. A least-squares regression is performed for each view to find a function that will make the view most closely match the average color. Experimental results show that when multiview video is compressed with the Joint Multiview Video Model (JMVM), the proposed method increases compression efficiency by up to 1.0 dB in luma peak signal-to-noise ratio (PSNR) compared to compressing the original uncorrected video.
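The per-view regression step can be sketched as follows. The abstract does not give the exact form of the fitted function, so this is a minimal sketch assuming a per-channel linear (gain plus offset) model fitted by least squares; the function names are hypothetical.

```python
import numpy as np

def fit_color_correction(view_samples, average_samples):
    """Fit a per-channel linear (gain + offset) correction that maps a
    view's colors toward the cross-view average, via least squares.

    view_samples, average_samples: (N, 3) arrays of matched RGB values
    (the matches would come from block-based disparity estimation).
    Returns gains and offsets, each of shape (3,).
    """
    gains = np.empty(3)
    offsets = np.empty(3)
    for c in range(3):
        # Design matrix [x, 1] for the model: average ~ gain * x + offset
        A = np.stack([view_samples[:, c], np.ones(len(view_samples))], axis=1)
        (g, o), *_ = np.linalg.lstsq(A, average_samples[:, c], rcond=None)
        gains[c], offsets[c] = g, o
    return gains, offsets

def apply_correction(image, gains, offsets):
    """Apply the fitted correction to a float RGB image in [0, 255]."""
    return np.clip(image * gains + offsets, 0, 255)
```

In the full method, each view would get its own fit against the per-point cross-view average, so all views converge toward a common color rather than toward one arbitrary reference view.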


IEEE Transactions on Circuits and Systems for Video Technology | 2008

H.264-Based Compression of Bayer Pattern Video Sequences

Colin Doutre; Panos Nasiopoulos; Konstantinos N. Plataniotis

Most consumer digital cameras use a single light sensor which captures color information using a color filter array (CFA). This produces a mosaic image, where each pixel location contains a sample of only one of three colors: red, green, or blue. The two missing colors at each pixel location must be interpolated from the surrounding samples in a process called demosaicking. The conventional approach to compressing video captured with these devices is to first perform demosaicking and then compress the resulting full-color video using standard methods. In this paper, two methods for compressing CFA video prior to demosaicking are proposed. In our first method, the CFA video is directly compressed with the H.264 video coding standard in 4:2:2 sampling mode. Our second method uses a modified version of H.264, where motion compensation is altered to take advantage of the properties of CFA data. Simulations show both proposed methods give better compression efficiency than the demosaick-first approach at high bit rates, and thus are suitable for applications, such as digital camcorders, where high-quality video is required.


IEEE Transactions on Visualization and Computer Graphics | 2011

Correction of Clipped Pixels in Color Images

Di Xu; Colin Doutre; Panos Nasiopoulos

Conventional images store a very limited dynamic range of brightness. The true luma in the bright area of such images is often lost due to clipping. When clipping changes the R, G, B color ratios of a pixel, color distortion also occurs. In this paper, we propose an algorithm to enhance both the luma and chroma of the clipped pixels. Our method is based on the strong chroma spatial correlation between clipped pixels and their surrounding unclipped area. After identifying the clipped areas in the image, we partition the clipped areas into regions with similar chroma, and estimate the chroma of each clipped region based on the chroma of its surrounding unclipped region. We correct the clipped R, G, or B color channels based on the estimated chroma and the unclipped color channel(s) of the current pixel. The last step involves smoothing of the boundaries between regions of different clipping scenarios. Both objective and subjective experimental results show that our algorithm is very effective in restoring the color of clipped pixels.
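The chroma-based restoration step can be sketched as below, heavily simplified: a single chroma region and only the R channel, whereas the paper partitions clipped areas into multiple chroma-consistent regions and handles any combination of clipped channels. The function name and threshold are hypothetical.

```python
import numpy as np

def correct_clipped_red(img, clip_thresh=254):
    """Restore the R channel where it is clipped, using the R/G chroma
    ratio learned from unclipped pixels.

    This is a single-region sketch of the idea in the abstract: the
    chroma of a clipped area is estimated from its unclipped
    surroundings, then the clipped channel is re-synthesised from the
    surviving channel(s).
    """
    img = np.asarray(img, dtype=float)
    r, g = img[..., 0], img[..., 1]
    clipped = r >= clip_thresh
    ok = ~clipped & (g > 0)
    ratio = np.mean(r[ok] / g[ok])      # chroma estimate from unclipped pixels
    out = img.copy()
    # Re-synthesise R from the unclipped G channel; values may exceed
    # the 8-bit range, effectively extending the dynamic range.
    out[..., 0][clipped] = ratio * g[clipped]
    return out
```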


International Conference on Image Processing | 2009

Fast vignetting correction and color matching for panoramic image stitching

Colin Doutre; Panos Nasiopoulos

When images are stitched together to form a panorama, there is often a color mismatch between the source images due to vignetting and differences in exposure and white balance. In this paper, a low-complexity method is proposed to correct vignetting and color differences between images, producing panoramas that look consistent across all source images. Unlike most previous methods, which require complex non-linear optimization to solve for the correction parameters, our method requires only linear regressions with a small number of parameters, resulting in a fast, computationally efficient method. Experimental results show the proposed method effectively removes vignetting effects and produces images that are highly visually consistent in color and brightness.
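A minimal sketch of a vignetting fit that uses only linear regression, in the spirit of the abstract's claim. The even-polynomial radial model v(r) = 1 + a*r^2 + b*r^4 and the function names are assumptions for illustration, not the paper's exact parameterization.

```python
import numpy as np

def fit_vignetting(radii, observed, reference):
    """Fit an even-polynomial vignetting gain v(r) = 1 + a*r^2 + b*r^4
    by linear least squares on intensity ratios between an image and
    matched reference values (e.g. from the overlapping image).

    radii: distance of each sample from the optical centre (normalised).
    """
    ratio = observed / reference              # ~ v(r) at each sample
    A = np.stack([radii**2, radii**4], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, ratio - 1.0, rcond=None)
    return a, b

def correct_vignetting(radii, observed, a, b):
    """Undo the fitted radial falloff by dividing out the gain."""
    return observed / (1.0 + a * radii**2 + b * radii**4)
```

Because the model is linear in (a, b), the fit is a single closed-form least-squares solve, which is the kind of cheap step that avoids the non-linear optimization used by earlier methods.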


International Conference on Digital Signal Processing | 2011

Subjective evaluation of tone-mapping methods on 3D images

Zicong Mai; Colin Doutre; Panos Nasiopoulos; Rabab K. Ward

High dynamic range (HDR) imaging provides superior picture quality to traditional 8-bit, low dynamic range (LDR) image representations. Capturing images/videos in HDR format can avoid problems with over- and under-exposure. Tone-mapping is a process that converts from HDR to LDR, so that HDR content can be shown on existing displays. Tone mapping has been extensively studied in the context of 2D images/video but not for 3D content. This paper addresses the problem of presenting 3D HDR content on stereoscopic LDR displays and presents a subjective psychophysical experiment that evaluates existing tone-mapping operators on 3D HDR images. The results show that 3D content derived using tone-mapping is much preferred to that captured directly with a pair of LDR cameras. Global tone-mapping methods (which better preserve global contrast) are found to produce images with a better 3D effect than local tone-mapping operators (which produce images with high amounts of detail/texture). Also, the brightness of the tone-mapped images is found to be highly correlated with perceived 3D quality.


International Conference on Acoustics, Speech, and Signal Processing | 2008

Motion vector prediction for improving one bit transform based motion estimation

Colin Doutre; Panos Nasiopoulos

One-bit transforms (1BT) have been proposed for lowering the complexity of motion estimation (ME) in video coding. These transforms generate a one-bit representation of each pixel in the video that is used in the motion search. This approach can greatly reduce the silicon area and power required for hardware-based video encoding. However, 1BT methods under-perform traditional sum of absolute differences (SAD) based motion estimation, particularly for smaller block sizes. In this paper, it is proposed to improve 1BT-based ME by predicting the motion vector for each block based on the vectors of previous blocks and modifying the cost function to favor motion vectors close to the predicted one. This takes advantage of the spatial correlation between motion vectors and produces a more uniform motion field. Simulation results show the proposed method can improve the PSNR of frames reconstructed through motion compensation by up to 1 dB and substantially improve subjective video quality by reducing blocking artifacts.
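The biased cost function can be sketched as follows. Two hedges: published 1BT schemes derive the bit-plane with a specific multi-band-pass filter, which this sketch replaces with a simple local-mean threshold, and the penalty weight `lam` and all function names are hypothetical illustrations, not the paper's exact formulation.

```python
import numpy as np

def one_bit_transform(frame, k=9):
    """1-bit representation: 1 where a pixel exceeds its local k x k mean.
    (The box-filter mean is a simplifying assumption; real 1BT schemes
    use a multi-band-pass kernel.)"""
    pad = k // 2
    p = np.pad(np.asarray(frame, dtype=float), pad, mode='edge')
    # Local mean via a summed-area table: O(1) per pixel.
    s = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    s[1:, 1:] = p.cumsum(0).cumsum(1)
    h, w = frame.shape
    mean = (s[k:k + h, k:k + w] - s[:h, k:k + w]
            - s[k:k + h, :w] + s[:h, :w]) / (k * k)
    return (frame > mean).astype(np.uint8)

def biased_1bt_search(cur_bits, ref_bits, block, pred_mv, search=4, lam=2.0):
    """Full search minimising NNP + lam * distance-to-predicted-MV.

    NNP (number of non-matching points) is the XOR popcount between the
    1-bit blocks; the lam term biases the search toward the spatially
    predicted vector, which is the key idea of the paper.
    """
    y, x, n = block
    cur = cur_bits[y:y + n, x:x + n]
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if (yy < 0 or xx < 0 or
                    yy + n > ref_bits.shape[0] or xx + n > ref_bits.shape[1]):
                continue
            nnp = np.count_nonzero(cur ^ ref_bits[yy:yy + n, xx:xx + n])
            cost = nnp + lam * (abs(dy - pred_mv[0]) + abs(dx - pred_mv[1]))
            if best is None or cost < best[0]:
                best = (cost, (dy, dx))
    return best[1]
```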


International Conference on Image Processing | 2009

Modified H.264 intra prediction for compression of video and images captured with a color filter array

Colin Doutre; Panos Nasiopoulos

Most consumer digital cameras capture color information with a single light sensor and a color filter array (CFA). In these cameras, only one color sample (red, green or blue) is captured at each pixel location. This paper presents a modified H.264 intra prediction scheme for compressing image and video data captured with a color filter array. The H.264 intra prediction modes are modified for the green channel to account for the fact that the green data is not sampled in a rectangular manner in the Bayer pattern, the most popular CFA design. The proposed method increases the compression efficiency of I frames on the green channel by up to 1.2 dB.


IEEE Journal of Selected Topics in Signal Processing | 2012

Rendering 3-D High Dynamic Range Images: Subjective Evaluation of Tone-Mapping Methods and Preferred 3-D Image Attributes

Zicong Mai; Colin Doutre; Panos Nasiopoulos; Rabab K. Ward

High dynamic range (HDR) images provide superior picture quality by allowing a larger range of brightness levels to be captured and reproduced than traditional 8-bit low dynamic range (LDR) images. Even with existing 8-bit displays, picture quality can be significantly improved if content is first captured in HDR format and then tone-mapped to convert it from HDR to the LDR format. Tone mapping methods have been extensively studied for 2-D images. This paper addresses the problem of presenting stereoscopic tone-mapped HDR images on 3-D LDR displays and how it differs from the 2-D scenario. We first present a subjective psychophysical experiment that evaluates existing tone-mapping operators on 3-D HDR images. The results show that 3-D content derived using tone-mapping is much preferred to that captured directly with a pair of LDR cameras. Global (spatially invariant) and local (spatially variant) tone-mapping methods have similar 3-D effects. The second part of our study focuses on how the preferred level of brightness and the preferred amount of detail differ between 3-D and 2-D images, by conducting another set of subjective experiments. Our results show that while people selected slightly brighter images in 3-D viewing compared to 2-D, the difference is not statistically significant. However, compared to 2-D images, the subjects consistently preferred having a greater amount of detail when watching 3-D. These results suggest that 3-D content should be prepared differently (sharper and possibly slightly brighter) from the same content intended for 2-D displaying, to achieve optimal appearance in each format. The complete database of the original HDR image pairs and their LDR counterparts is available online.


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2011

Optimized contrast reduction for crosstalk cancellation in 3D displays

Colin Doutre; Panos Nasiopoulos

Subtractive crosstalk cancellation is an effective way to reduce the appearance of ghosting in 3D displays. However, effective cancellation requires the black level of the input images to be raised above zero, which reduces image contrast and visual quality. Previous methods for selecting the raised black level do not consider the image content; they are either based on the worst case or do not guarantee complete crosstalk cancellation. Previous methods also scale the red, green, and blue channels independently, which results in images with washed-out colors. This paper provides two contributions. First, we derive the minimum amount the black level has to be raised, when using linear scaling in RGB space, to ensure crosstalk can be fully cancelled for a particular image. Second, we propose scaling the luma channel in YCbCr color space while keeping the chroma values constant, instead of scaling in RGB space, to better preserve color; we also derive the minimum amount the luma range has to be compressed to ensure that crosstalk can be fully cancelled. Experimental results show that our methods produce images with better color and contrast than scaling the RGB channels based on the worst case, while still guaranteeing that crosstalk can be fully cancelled.
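Under a first-order crosstalk model (a fraction c of each view leaks into the other), the image-dependent minimum black level for the RGB linear-scaling case admits a closed form. This derivation is our reconstruction from the abstract, not the paper's exact result, and the function name is hypothetical.

```python
import numpy as np

def min_black_level(left, right, c, peak=255.0):
    """Minimum raised black level b so that, after linearly scaling both
    views into [b, peak] with s = (peak - b) / peak, subtractive
    crosstalk cancellation never requires a negative drive value.

    Per-pixel requirement (and its mirror for the other view):
        (b + s*L) - c * (b + s*R) >= 0
    Rearranging for the worst pixel, with
        M = max over pixels of (c*R - L, c*L - R, 0),
    gives  b >= peak * M / (peak * (1 - c) + M).
    """
    M = max(np.max(c * right - left), np.max(c * left - right), 0.0)
    return peak * M / (peak * (1.0 - c) + M)
```

For the image-independent worst case (a black pixel opposite a full-white pixel), M = c * peak and the formula reduces to b = peak * c; for any real image M is smaller, so the content-adaptive level preserves more contrast than the worst-case bound.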

Collaboration

Dive into Colin Doutre's collaborations.

Top co-authors (all at the University of British Columbia):

Panos Nasiopoulos
Di Xu
Mahsa T. Pourazad
Rabab K. Ward
Maryam Azimi
Zicong Mai