Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Wei-Sheng Lai is active.

Publication


Featured research published by Wei-Sheng Lai.


Computer Vision and Pattern Recognition | 2017

Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution

Wei-Sheng Lai; Jia-Bin Huang; Narendra Ahuja; Ming-Hsuan Yang

Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require bicubic interpolation as a pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through progressive reconstruction, thereby facilitating resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against state-of-the-art methods in terms of speed and accuracy.
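
The progressive structure and the Charbonnier loss mentioned above are easy to illustrate in code. The sketch below is a minimal, hypothetical PyTorch rendition of one pyramid level (refine features, upsample with a transposed convolution, predict a residual, and add it to the upsampled coarse image) together with the robust Charbonnier loss; the layer sizes, channel counts, and class names are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

def charbonnier_loss(pred, target, eps=1e-3):
    """Robust Charbonnier loss: sqrt((pred - target)^2 + eps^2), averaged over pixels."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

class PyramidLevel(nn.Module):
    """One LapSRN-style pyramid level: refine features, upsample 2x with a
    transposed convolution, and predict the high-frequency residual image."""
    def __init__(self, channels=64):
        super().__init__()
        self.feature = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
        )
        # Transposed convolutions double the spatial resolution of features and image.
        self.up_feature = nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1)
        self.to_residual = nn.Conv2d(channels, 3, 3, padding=1)
        self.up_image = nn.ConvTranspose2d(3, 3, 4, stride=2, padding=1)

    def forward(self, features, image):
        features = self.up_feature(self.feature(features))
        residual = self.to_residual(features)
        # Upsampled coarse image plus predicted sub-band residual.
        return features, self.up_image(image) + residual
```

With deep supervision, charbonnier_loss would be applied to the output image of every level against the correspondingly downsampled ground truth, so a single forward pass yields predictions, and losses, at multiple scales.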


Computer Vision and Pattern Recognition | 2016

A Comparative Study for Single Image Blind Deblurring

Wei-Sheng Lai; Jia-Bin Huang; Zhe Hu; Narendra Ahuja; Ming-Hsuan Yang

Numerous single image blind deblurring algorithms have been proposed to restore latent sharp images under camera motion. However, these algorithms are mainly evaluated using either synthetic datasets or a few selected real blurred images. It is thus unclear how these algorithms would perform on images acquired in the wild and how we could gauge progress in the field. In this paper, we aim to bridge this gap. We present the first comprehensive perceptual study and analysis of single image blind deblurring using real-world blurred images. First, we collect a dataset of real blurred images and a dataset of synthetically blurred images. Using these datasets, we conduct a large-scale user study to quantify the performance of several representative state-of-the-art blind deblurring algorithms. Second, we systematically analyze subject preferences, including the level of agreement, significance tests of score differences, and rationales for preferring one method over another. Third, we study the correlation between human subjective scores and several full-reference and no-reference image quality metrics. Our evaluation and analysis indicate the performance gap between synthetically blurred and real blurred images and shed light on future research in single image blind deblurring.
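
The third part of the study, relating subjective scores to image quality metrics, comes down to a correlation computation. A minimal sketch using SciPy is shown below; the score arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical example: mean subjective scores for five deblurred results and the
# corresponding values of one full-reference or no-reference quality metric.
subjective_scores = np.array([0.71, 0.42, 0.55, 0.88, 0.30])
metric_scores = np.array([0.65, 0.40, 0.60, 0.90, 0.35])

# Pearson measures linear agreement; Spearman measures monotonic (rank) agreement,
# which is often the more meaningful check against perceptual judgments.
pearson_r, _ = pearsonr(subjective_scores, metric_scores)
spearman_rho, _ = spearmanr(subjective_scores, metric_scores)
print(f"Pearson r = {pearson_r:.3f}, Spearman rho = {spearman_rho:.3f}")
```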


Computer Vision and Pattern Recognition | 2015

Blur kernel estimation using normalized color-line priors

Wei-Sheng Lai; Jian-Jiun Ding; Yen-Yu Lin; Yung-Yu Chuang

This paper proposes a single-image blur kernel estimation algorithm that utilizes a normalized color-line prior to restore sharp edges without altering edge structures or enhancing noise. The proposed prior is derived from the color-line model, which has been successfully applied to non-blind deconvolution and many other computer vision problems. In this paper, we show that the original color-line prior is not effective for blur kernel estimation and propose a normalized color-line prior that better enhances edge contrast. By optimizing the proposed prior, our method gradually enhances the sharpness of the intermediate patches without using heuristic filters or external patch priors. The intermediate patches then guide the estimation of the blur kernel. A comprehensive evaluation on a large image deblurring dataset shows that our algorithm achieves state-of-the-art results.
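
For context on how such a prior enters kernel estimation, blind deblurring is commonly posed as an alternating MAP-style optimization over the latent image x and the kernel k. A generic form (not the paper's exact energy; P(x) stands in for the image prior, here the normalized color-line prior, and \lambda, \gamma are weighting terms) is

\min_{x,\,k} \; \|k \ast x - y\|_2^2 \;+\; \lambda\, P(x) \;+\; \gamma\, \|k\|_2^2 ,

typically solved coarse-to-fine by alternating between updating x with k fixed and updating k with x fixed; the progressively sharpened intermediate result is what guides the kernel update described in the abstract.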


Computer Vision and Pattern Recognition | 2017

Learning Fully Convolutional Networks for Iterative Non-blind Deconvolution

Jiawei Zhang; Jinshan Pan; Wei-Sheng Lai; Rynson W. H. Lau; Ming-Hsuan Yang

In this paper, we propose a fully convolutional network for iterative non-blind deconvolution. We decompose the non-blind deconvolution problem into image denoising and image deconvolution. We train an FCNN to remove noise in the gradient domain and use the learned gradients to guide the image deconvolution step. In contrast to existing deep neural network-based methods, we iteratively deconvolve the blurred images in a multi-stage framework. The proposed method is able to learn an adaptive image prior that preserves both local (details) and global (structures) information. Both quantitative and qualitative evaluations on benchmark datasets demonstrate that the proposed method performs favorably against state-of-the-art algorithms in terms of quality and speed.
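
The decomposition described above alternates a learned denoising step with a conventional deconvolution step. The outline below is a hypothetical NumPy sketch of that loop: denoise_gradients stands in for the trained fully convolutional network, and the restoration step is a standard FFT-based least-squares deconvolution guided by the denoised gradients. It illustrates the structure only, not the paper's exact solver or parameters.

```python
import numpy as np

def psf2otf(kernel, shape):
    """Zero-pad and circularly shift the blur kernel so its FFT lines up with an image FFT."""
    padded = np.zeros(shape)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(padded)

def deconv_with_gradient_guide(blurred, kernel, gx, gy, weight=0.01):
    """Closed-form FFT deconvolution: stay consistent with the blurred observation
    under the kernel while matching the (denoised) gradients gx and gy."""
    K = psf2otf(kernel, blurred.shape)
    Dx = psf2otf(np.array([[1.0, -1.0]]), blurred.shape)
    Dy = psf2otf(np.array([[1.0], [-1.0]]), blurred.shape)
    numer = (np.conj(K) * np.fft.fft2(blurred)
             + weight * (np.conj(Dx) * np.fft.fft2(gx) + np.conj(Dy) * np.fft.fft2(gy)))
    denom = np.abs(K) ** 2 + weight * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(np.fft.ifft2(numer / denom))

def iterative_nonblind_deconv(blurred, kernel, denoise_gradients, num_iters=3):
    """Alternate: (1) denoise the current estimate's gradients with a learned model,
    (2) deconvolve using those gradients as guidance. denoise_gradients is a
    placeholder for the trained network."""
    latent = blurred.copy()
    for _ in range(num_iters):
        gx = np.diff(latent, axis=1, append=latent[:, :1])  # circular horizontal gradient
        gy = np.diff(latent, axis=0, append=latent[:1, :])  # circular vertical gradient
        gx, gy = denoise_gradients(gx, gy)
        latent = deconv_with_gradient_guide(blurred, kernel, gx, gy)
    return latent
```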


IEEE Transactions on Visualization and Computer Graphics | 2018

Semantic-Driven Generation of Hyperlapse from 360 Degree Video

Wei-Sheng Lai; Yujia Huang; Neel Joshi; Christopher James Buehler; Ming-Hsuan Yang; Sing Bing Kang

We present a system for converting a fully panoramic (360 degree) video into a normal field-of-view (NFOV) hyperlapse for an optimal viewing experience. Our system exploits visual saliency and semantics to non-uniformly sample in space and time for generating hyperlapses. In addition, users can optionally choose objects of interest for customizing the hyperlapses. We first stabilize an input 360 degree video by smoothing the rotation between adjacent frames and then compute regions of interest and saliency scores. An initial hyperlapse is generated by optimizing the saliency and motion smoothness followed by the saliency-aware frame selection. We further smooth the result using an efficient 2D video stabilization approach that adaptively selects the motion model to generate the final hyperlapse. We validate the design of our system by showing results for a variety of scenes and comparing against the state-of-the-art method through a large-scale user study.
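
The saliency-aware frame selection step can be read as choosing a subsequence of frames that accumulates as much saliency as possible while keeping the skip between chosen frames close to the target speed-up. The dynamic-programming sketch below illustrates that idea; the cost weights, skip limits, and per-frame saliency array are illustrative assumptions, not the paper's objective.

```python
import numpy as np

def select_frames(saliency, target_skip=8, max_skip=16, w_speed=0.5):
    """Pick frame indices that maximize accumulated saliency while penalizing
    deviations of the actual skip from the desired speed-up (hypothetical costs)."""
    n = len(saliency)
    best = np.full(n, -np.inf)          # best score of a path ending at each frame
    prev = np.full(n, -1, dtype=int)    # back-pointer for path recovery
    best[0] = saliency[0]
    for j in range(1, n):
        for skip in range(1, min(max_skip, j) + 1):
            i = j - skip
            score = best[i] + saliency[j] - w_speed * (skip - target_skip) ** 2
            if score > best[j]:
                best[j], prev[j] = score, i
    # Backtrack from the last frame to recover the selected frame indices.
    path, j = [], n - 1
    while j >= 0:
        path.append(j)
        j = prev[j]
    return path[::-1]

# Example: random per-frame saliency scores for a 200-frame clip.
selected = select_frames(np.random.rand(200))
```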


European Conference on Computer Vision | 2018

Learning Blind Video Temporal Consistency

Wei-Sheng Lai; Jia-Bin Huang; Oliver Wang; Eli Shechtman; Ersin Yumer; Ming-Hsuan Yang

Applying image processing algorithms independently to each frame of a video often leads to undesired results that are inconsistent over time. Developing temporally consistent video-based extensions, however, requires domain knowledge for individual tasks and is unable to generalize to other applications. In this paper, we present an efficient approach based on a deep recurrent network for enforcing temporal consistency in a video. Our method takes the original and per-frame processed videos as inputs to produce a temporally consistent video. Consequently, our approach is agnostic to the specific image processing algorithm applied to the original video. We train the proposed network by minimizing both short-term and long-term temporal losses as well as a perceptual loss to strike a balance between temporal coherence and perceptual similarity with the processed frames. At test time, our model does not require computing optical flow and thus achieves real-time speed even for high-resolution videos. We show that our single model can handle multiple and unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation, and intrinsic image decomposition. Extensive objective evaluation and a subjective study demonstrate that the proposed approach performs favorably against state-of-the-art methods on various types of videos.
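
The short-term temporal loss mentioned above compares the current output frame with the previous output frame warped by optical flow, weighted by a non-occlusion mask; the long-term loss applies the same idea to a more distant frame. The PyTorch sketch below shows a hypothetical version of the short-term term only; the flow convention, mask, and weighting are assumptions for illustration, and the full training objective also includes a perceptual term.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Warp `frame` (N,C,H,W) towards the current frame using `flow` (N,2,H,W),
    where flow maps current-frame pixels to their locations in `frame`."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame.device).unsqueeze(0) + flow
    # Normalize pixel coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * grid[:, 0] / (w - 1) - 1.0
    gy = 2.0 * grid[:, 1] / (h - 1) - 1.0
    return F.grid_sample(frame, torch.stack((gx, gy), dim=3), align_corners=True)

def short_term_temporal_loss(out_t, out_prev, flow_t_to_prev, non_occlusion_mask):
    """L1 difference between the current output and the flow-warped previous output,
    weighted per pixel so occluded regions do not contribute."""
    warped_prev = backward_warp(out_prev, flow_t_to_prev)
    return (non_occlusion_mask * (out_t - warped_prev).abs()).mean()
```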


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2018

Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks

Wei-Sheng Lai; Jia-Bin Huang; Narendra Ahuja; Ming-Hsuan Yang


International Conference on Image Processing | 2018

Generating a Perspective Image from a Panoramic Image by the Swung-to-Cylinder Projection

Che-Han Chang; Wei-Sheng Lai; Yung-Yu Chuang


Computer Vision and Pattern Recognition | 2018

Learning a Discriminative Prior for Blind Image Deblurring

Lerenhan Li; Jinshan Pan; Wei-Sheng Lai; Changxin Gao; Nong Sang; Ming-Hsuan Yang


Computer Vision and Pattern Recognition | 2018

Deep Semantic Face Deblurring

Ziyi Shen; Wei-Sheng Lai; Tingfa Xu; Jan Kautz; Ming-Hsuan Yang

Collaboration


Dive into Wei-Sheng Lai's collaborations.

Top Co-Authors

Yung-Yu Chuang, National Taiwan University
Jinshan Pan, University of California
Zhe Hu, University of California
Che-Han Chang, National Taiwan University
Jiawei Zhang, City University of Hong Kong
Rynson W. H. Lau, City University of Hong Kong