Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shao-Ping Lu is active.

Publication


Featured research published by Shao-Ping Lu.


IEEE Transactions on Visualization and Computer Graphics | 2013

Timeline Editing of Objects in Video

Shao-Ping Lu; Song-Hai Zhang; Jin Wei; Shi-Min Hu; Ralph Robert Martin

We present a video editing technique based on changing the timelines of individual objects in video, which leaves them in their original places but puts them at different times. This allows the production of object-level slow motion effects, fast motion effects, or even time reversal. This is more flexible than simply applying such effects to whole frames, as new relationships between objects can be created. As we restrict object interactions to the same spatial locations as in the original video, our approach can produce high-quality results using only coarse matting of video objects. Coarse matting can be done efficiently using automatic video object segmentation, avoiding tedious manual matting. To design the output, the user interactively indicates the desired new life spans of objects, and may also change the overall running time of the video. Our method rearranges the timelines of objects in the video whilst applying appropriate object interaction constraints. We demonstrate that, while this editing technique is somewhat restrictive, it still allows many interesting results.
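
As a rough illustration of object-level retiming, here is a minimal Python sketch; the function name, the per-object time-warp interface, and the matte format are hypothetical, and the actual system additionally enforces object-interaction constraints and removes each object from its original time:

```python
import numpy as np

def retime_objects(frames, masks, time_maps):
    """Composite a retimed video: each object stays in place spatially
    but samples its pixels from a remapped source time.
    frames:    (T, H, W, 3) uint8 source video
    masks:     dict obj_id -> (T, H, W) bool coarse mattes
    time_maps: dict obj_id -> callable, output time t -> source time
    Note: the background here simply keeps the original frames; the
    real system composites over a clean background plate."""
    T = frames.shape[0]
    out = frames.copy()
    for t in range(T):
        for obj_id, warp in time_maps.items():
            src_t = int(np.clip(warp(t), 0, T - 1))
            m = masks[obj_id][src_t]          # coarse matte at source time
            out[t][m] = frames[src_t][m]      # paste object at its new time
    return out

# e.g. slow object 0 to half speed, reverse object 1 (T = frame count):
# out = retime_objects(frames, masks,
#                      {0: lambda t: t / 2, 1: lambda t: T - 1 - t})
```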


IEEE Transactions on Multimedia | 2015

Spatio-Temporally Consistent Color and Structure Optimization for Multiview Video Color Correction

Shao-Ping Lu; Beerend Ceulemans; Adrian Munteanu; Peter Schelkens

When compared to conventional 2-D video, multiview video can significantly enhance the visual 3-D experience in 3-D applications by offering horizontal parallax. However, when processing images originating from different views, it is common that the colors between the different cameras are not well-calibrated. To solve this problem, a novel energy-function-based color correction method for multiview camera setups is proposed, which enforces that colors are as close as possible to those in the reference image while the overall structural information is well preserved. The proposed system introduces a spatio-temporal correspondence matching method to ensure that each pixel in the input image is bijectively mapped to a reference pixel. By combining this mapping with the original structural information, we construct a global optimization algorithm in a Laplacian matrix formulation and solve it using a sparse matrix solver. We further introduce a novel forward-reverse objective evaluation model to overcome the lack of ground truth in this field. Visual comparisons show that the proposed method outperforms state-of-the-art multiview color correction methods, while the objective evaluation reports PSNR gains of up to 1.34 dB and SSIM gains of up to 3.2%.
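
A minimal sketch of this kind of global least-squares formulation, assuming an identity pixel correspondence for brevity (the actual method uses dense spatio-temporal matching; the function name and the `lam` weight are illustrative):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def correct_channel(src, ref, lam=1.0):
    """Least-squares color correction of one channel. The data term
    pulls each pixel toward the reference color at its correspondence
    (identity mapping assumed here); the gradient term preserves the
    source image structure. `lam` balances the two terms."""
    h, w = src.shape
    n = h * w

    def fwd_diff(size):
        # forward-difference operator of shape (size-1, size)
        return sp.diags([-np.ones(size - 1), np.ones(size - 1)],
                        [0, 1], shape=(size - 1, size))

    Dx = sp.kron(sp.identity(h), fwd_diff(w), format="csr")  # x-gradients
    Dy = sp.kron(fwd_diff(h), sp.identity(w), format="csr")  # y-gradients
    I = sp.identity(n, format="csr")

    A = sp.vstack([I, lam * Dx, lam * Dy]).tocsr()
    b = np.concatenate([ref.ravel().astype(float),
                        lam * (Dx @ src.ravel()),
                        lam * (Dy @ src.ravel())])
    return lsqr(A, b)[0].reshape(h, w)
```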


Computational Visual Media | 2015

Color retargeting: Interactive time-varying color image composition from time-lapse sequences

Shao-Ping Lu; Guillaume Dauphin; Gauthier Lafruit; Adrian Munteanu

In this paper, we present an interactive static image composition approach, namely color retargeting, to flexibly represent time-varying color editing effects based on time-lapse video sequences. Instead of performing precise image matting or blending, our approach treats color composition as a pixel-level resampling problem. To satisfy the user's editing requirements while avoiding visual artifacts, we construct a globally optimized interpolation field. This field defines from which input video frames the output pixels should be resampled. Our proposed resampling solution ensures that (i) the global color transition in the output image is as smooth as possible, (ii) the desired colors/objects specified by the user from different video frames are well preserved, and (iii) additional local color transition directions in the image space assigned by the user are also satisfied. Various examples demonstrate that our efficient solution enables the user to easily create time-varying color image compositions.
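
A toy version of the interpolation-field idea, assuming user input is given as fixed frame indices at scribbled pixels; all names are hypothetical, and the paper solves a richer global optimization that also handles local direction constraints:

```python
import numpy as np

def smooth_field(shape, constraints, iters=500):
    """Solve a toy smooth interpolation field: user-constrained pixels
    are fixed to a source-frame index, all others take the average of
    their 4 neighbors (discrete Laplace equation, Jacobi iterations).
    constraints: dict mapping (y, x) -> frame index."""
    f = np.zeros(shape, dtype=np.float64)
    fixed = np.zeros(shape, dtype=bool)
    for (y, x), v in constraints.items():
        f[y, x] = v
        fixed[y, x] = True
    for _ in range(iters):
        avg = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
        f = np.where(fixed, f, avg)   # keep user scribbles fixed
    return f

def retarget(frames, field):
    """Resample each output pixel from the video frame chosen by the
    field. frames: (T, H, W, 3); field: (H, W) float frame indices."""
    t = np.clip(np.round(field).astype(int), 0, frames.shape[0] - 1)
    yy, xx = np.meshgrid(np.arange(field.shape[0]),
                         np.arange(field.shape[1]), indexing="ij")
    return frames[t, yy, xx]
```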


International Conference on Digital Signal Processing | 2013

Depth-based view synthesis using pixel-level image inpainting

Shao-Ping Lu; Jan Hanca; Adrian Munteanu; Peter Schelkens

Depth-based view synthesis can produce novel realistic images of a scene by view warping and image inpainting. This paper presents a depth-based view synthesis approach performing pixel-level image inpainting. The proposed approach provides great flexibility in pixel manipulation and prevents random effects in texture propagation. By analyzing how image holes are generated during view warping, we first classify such areas into simple holes and disocclusion areas. Based on depth information constraints and different strategies for random propagation, an approximate nearest-neighbor matching based pixel-level inpainting is introduced to complete holes of the two classes. Experimental results demonstrate that the proposed view synthesis method can effectively produce smooth textures and reasonable structure propagation. The proposed depth-based pixel-level inpainting is well suited to multiview video and other higher-dimensional view synthesis settings.
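
A sketch of the hole-classification step, under the simplifying assumption that connected-component size separates small warping cracks from disocclusion areas; the paper's classification also uses depth information, and the threshold here is an assumption:

```python
import numpy as np
from scipy import ndimage

def classify_holes(hole_mask, min_disocclusion=50):
    """Split warping holes into simple cracks and disocclusion areas
    by connected-component size. hole_mask: (H, W) bool, True where
    no pixel was warped. Returns (cracks, disocclusions).
    Cracks can then be filled from immediate neighbors, while
    disocclusions get the nearest-neighbor-match pixel inpainting."""
    labels, n = ndimage.label(hole_mask)
    sizes = np.asarray(ndimage.sum(hole_mask, labels,
                                   index=range(1, n + 1)))
    crack_labels = 1 + np.flatnonzero(sizes < min_disocclusion)
    cracks = np.isin(labels, crack_labels)
    return cracks, hole_mask & ~cracks
```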


International Conference on Multimedia and Expo | 2016

Efficient MRF-based disocclusion inpainting in multiview video

Beerend Ceulemans; Shao-Ping Lu; Gauthier Lafruit; Peter Schelkens; Adrian Munteanu

View synthesis using depth image-based rendering generates virtual viewpoints of a 3D scene based on texture and depth information from a set of available cameras. One of the core components in view synthesis is image inpainting, which reconstructs areas that were occluded in the available cameras but are visible from the virtual viewpoint. Inpainting methods based on Markov random fields (MRFs) have been shown to be very effective at inpainting large areas in images. In this paper, we propose a novel MRF-based inpainting method for multiview video. The proposed method steers the MRF optimization towards completion from background to foreground and exploits the available depth information in order to avoid bleeding artifacts. The proposed approach allows for efficiently filling in large disocclusion areas and greatly accelerates execution compared to traditional MRF-based inpainting techniques. The experimental results show that view synthesis based on the proposed inpainting method systematically improves performance over the state of the art in multiview view synthesis. Average PSNR gains of up to 1.88 dB compared to the MPEG View Synthesis Reference software were observed.
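
A greedy stand-in for the background-to-foreground steering idea; the actual method solves an MRF optimization, and the depth convention assumed here (larger values are farther away) is an assumption:

```python
import numpy as np

def depth_guided_fill(color, depth, hole):
    """Repeatedly fill each hole pixel from its known 4-neighbor with
    the largest depth value, so background propagates first and
    foreground colors do not bleed into the disocclusion.
    color: (H, W, 3), depth: (H, W), hole: (H, W) bool."""
    color, depth, hole = color.copy(), depth.copy(), hole.copy()
    h, w = hole.shape
    while hole.any():
        progressed = False
        for y, x in zip(*np.nonzero(hole)):
            best = None
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not hole[ny, nx]:
                    if best is None or depth[ny, nx] > depth[best]:
                        best = (ny, nx)
            if best is not None:
                color[y, x], depth[y, x] = color[best], depth[best]
                hole[y, x] = False
                progressed = True
        if not progressed:   # hole has no known neighbors at all
            break
    return color
```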


Tsinghua Science & Technology | 2016

A Survey on Multiview Video Synthesis and Editing

Shao-Ping Lu; Taijiang Mu; Song-Hai Zhang

Multiview video can provide a more immersive perception than traditional single-view 2-D video. It enables both interactive free navigation applications and high-end autostereoscopic displays on which multiple users can perceive genuine 3-D content without glasses. The multiview format also comprises much more visual information than classical 2-D or stereo 3-D content, which makes it possible to perform various interesting editing operations at both the pixel level and the object level. This survey provides a comprehensive review of existing multiview video synthesis and editing algorithms and applications. For each topic, the related technologies in classical 2-D image and video processing are reviewed. We then discuss recent advanced techniques for multiview video virtual view synthesis and various interactive editing applications. Given the ongoing progress in multiview video synthesis and editing, we foresee that more and more immersive 3-D video applications will appear in the future.


3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2015

Globally optimized multiview video color correction using dense spatio-temporal matching

Beerend Ceulemans; Shao-Ping Lu; Peter Schelkens; Adrian Munteanu

Multiview video is becoming increasingly popular as the format for 3D video systems that use autostereoscopic displays or free-viewpoint navigation capabilities. However, the algorithms that drive these applications are not yet mature and can suffer from subtle irregularities such as color imbalances between different cameras. For the problem of color correction, state-of-the-art methods directly apply some form of histogram matching between blocks of pixels in an input frame and a target frame containing the desired color distribution. These methods, however, typically suffer from artifacts in the gradient domain, as they do not take local texture information into account. This paper presents a novel method to correct color differences in multiview video sequences that uses a dense matching-based global optimization framework. The proposed energy function ensures preservation of local structures by regulating deviations from the original image gradients.
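
For context, the histogram matching that the cited state-of-the-art methods build on can be sketched in its simplest global form (this is the baseline the paper improves upon, not the paper's own method; block-wise variants apply the same mapping per block of pixels):

```python
import numpy as np

def match_histogram(src, ref):
    """Global cumulative-histogram matching of one channel: remap the
    source intensities so that their CDF matches the reference CDF."""
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    # for each source quantile, look up the reference value there
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return np.interp(src.ravel(), s_vals, mapped).reshape(src.shape)
```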


International Conference on 3D Imaging | 2013

Performance optimizations for PatchMatch-based pixel-level multiview inpainting

Shao-Ping Lu; Beerend Ceulemans; Adrian Munteanu; Peter Schelkens

As 3D content is becoming ubiquitous in today's media landscape, there is rising interest in 3D displays that do not require wearing special headgear to experience the 3D effect. Autostereoscopic displays realize this by providing multiple different views of the same scene. It is, however, infeasible to record, store, or transmit the amount of data that such displays require. Therefore, there is a strong need for real-time solutions that can generate multiple extra viewpoints from a limited set of originally recorded views. The main difficulty in current solutions is that the synthesized views contain disocclusion holes where the pixel values are unknown. To seamlessly fill in these holes, inpainting techniques are used. In this work, we consider a depth-based pixel-level inpainting system for multiview video. The employed technique operates in a multi-scale fashion, fills in the disocclusion holes on a pixel-per-pixel basis, and computes approximate nearest-neighbor fields (NNFs) to identify pixel correspondences. To this end, we employ a multi-scale variant of the well-known PatchMatch algorithm followed by a refinement step to escape local minima in the matching-cost function. In this paper, we analyze the performance of different cost functions and search methods within our existing inpainting framework.
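
The core single-scale PatchMatch loop (Barnes et al. 2009) that the paper builds on can be sketched as follows; the multi-scale wrapper and the refinement step are omitted, and all parameter values are illustrative:

```python
import numpy as np

def patchmatch_nnf(A, B, p=7, iters=4, seed=0):
    """Approximate nearest-neighbor field from A's patches to B's
    patches via random initialization, propagation, and shrinking
    random search. A, B: (H, W) float images; returns (H, W, 2)
    coordinates into B."""
    rng = np.random.default_rng(seed)
    r = p // 2
    h, w = A.shape
    hb, wb = B.shape
    nnf = np.stack([rng.integers(r, hb - r, (h, w)),
                    rng.integers(r, wb - r, (h, w))], axis=-1)

    def ssd(y, x, by, bx):
        # patch cost, cropped identically at A's borders so shapes agree
        y0, x0 = max(y - r, 0), max(x - r, 0)
        y1, x1 = min(y + r + 1, h), min(x + r + 1, w)
        pb = B[by - (y - y0):by + (y1 - y), bx - (x - x0):bx + (x1 - x)]
        return ((A[y0:y1, x0:x1] - pb) ** 2).sum()

    for it in range(iters):
        step = 1 if it % 2 == 0 else -1   # alternate scan direction
        ys = range(h) if step == 1 else range(h - 1, -1, -1)
        xs = range(w) if step == 1 else range(w - 1, -1, -1)
        for y in ys:
            for x in xs:
                by, bx = nnf[y, x]
                best = ssd(y, x, by, bx)
                # propagation: adopt a scan-order neighbor's offset
                for ny, nx in ((y - step, x), (y, x - step)):
                    if 0 <= ny < h and 0 <= nx < w:
                        cy = int(np.clip(nnf[ny, nx, 0] + (y - ny), r, hb - r - 1))
                        cx = int(np.clip(nnf[ny, nx, 1] + (x - nx), r, wb - r - 1))
                        d = ssd(y, x, cy, cx)
                        if d < best:
                            best, by, bx = d, cy, cx
                # random search around the current best, radius halving
                rad = max(hb, wb) // 2
                while rad >= 1:
                    cy = int(np.clip(by + rng.integers(-rad, rad + 1), r, hb - r - 1))
                    cx = int(np.clip(bx + rng.integers(-rad, rad + 1), r, wb - r - 1))
                    d = ssd(y, x, cy, cx)
                    if d < best:
                        best, by, bx = d, cy, cx
                    rad //= 2
                nnf[y, x] = by, bx
    return nnf
```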


IEEE Transactions on Image Processing | 2018

Hyper-Lapse From Multiple Spatially-Overlapping Videos

Miao Wang; Jun-Bang Liang; Song-Hai Zhang; Shao-Ping Lu; Ariel Shamir; Shi-Min Hu

Hyper-lapse video with a high speed-up rate is an efficient way to get an overview of long videos, such as a human activity recorded in first-person view. Existing hyper-lapse creation methods produce a fast-forward effect using only one video source. In this paper, we present a novel hyper-lapse video creation approach based on multiple spatially-overlapping videos. We assume the videos share a common view or location, and find transition points where jumps from one video to another may occur. We represent the collection of videos using a hyper-lapse transition graph; the edges between nodes represent possible hyper-lapse frame transitions. To create a hyper-lapse video, a shortest-path search is performed on this digraph to optimize frame sampling and assembly simultaneously. Finally, we render the hyper-lapse results using video stabilization and appearance smoothing techniques on the selected frames. Our technique can synthesize novel virtual hyper-lapse routes which may not exist originally. We show various application results on both indoor and outdoor video collections with static scenes, moving objects, and crowds.
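
A toy shortest-path search over a hyper-lapse transition graph: Dijkstra over (video, frame) nodes with within-video skip edges and cross-video jump edges. The cost terms, weights, and interface are assumptions for illustration, not the paper's exact energy:

```python
import heapq
from collections import defaultdict

def hyperlapse_path(n, transitions, v=8, max_skip=12, w_jump=2.0):
    """Find a frame path from (video 0, frame 0) to frame n - 1.
    Within-video edges skip 1..max_skip frames, penalizing deviation
    from the target speed-up v; `transitions` lists cross-video jump
    candidates ((vid_a, fa), (vid_b, fb)) with cost w_jump."""
    jumps = defaultdict(list)
    for a, b in transitions:
        jumps[a].append(b)
    dist = {(0, 0): 0.0}
    pq = [(0.0, (0, 0), [(0, 0)])]
    while pq:
        d, node, path = heapq.heappop(pq)
        vid, f = node
        if f >= n - 1:
            return path                      # reached the end
        if d > dist.get(node, float("inf")):
            continue                         # stale queue entry
        nbrs = [((vid, min(f + s, n - 1)), abs(s - v) / v)
                for s in range(1, max_skip + 1)]
        nbrs += [(b, w_jump) for b in jumps[node]]
        for nxt, c in nbrs:
            nd = d + c
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt, path + [nxt]))
    return None

# e.g. two videos with one jump candidate at frame 400 -> 410:
# path = hyperlapse_path(1000, [((0, 400), (1, 410))])
```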


Signal Processing: Image Communication | 2017

Color correction for large-baseline multiview video

Siqi Ye; Shao-Ping Lu; Adrian Munteanu

Color misalignment correction is an important yet unsolved problem, especially for multiview video captured by large-disparity camera setups. In this paper, we introduce a robust large-baseline color correction method that preserves the original manifold structure of the input video. The manifold structure is extracted by locally linear embedding (LLE), which linearly represents each pixel based on its neighbors, assuming that they are clustered in a high-dimensional feature space. Besides the proposed manifold structure preservation constraint, the proposed method enforces spatio-temporal color consistency and gradient preservation. The multiview color correction solution is obtained by solving a global optimization problem. Thorough objective and subjective experimental results demonstrate that our proposed approach significantly and systematically outperforms state-of-the-art color correction methods on large-baseline multiview video data.

Highlights: We introduce a manifold structure preservation based multiview color correction approach. Both objective and subjective evaluations demonstrate that our algorithm outperforms the state-of-the-art algorithms, especially for large-baseline multiview video.
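
The LLE reconstruction weights behind the manifold-preservation constraint can be computed per pixel as in standard locally linear embedding; this sketch assumes the neighbor set has already been found in feature space:

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Represent feature vector x as an affine combination of its
    neighbors (rows of `neighbors`): solve the local Gram system with
    a small regularizer and normalize the weights to sum to one."""
    Z = neighbors - x                         # shift neighbors to origin
    G = Z @ Z.T                               # local Gram matrix
    G += np.eye(len(G)) * reg * np.trace(G)   # regularize near-singular G
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()
```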

Collaboration


Dive into Shao-Ping Lu's collaborations.

Top Co-Authors

Adrian Munteanu (Vrije Universiteit Brussel)
Beerend Ceulemans (Vrije Universiteit Brussel)
Peter Schelkens (Vrije Universiteit Brussel)
Gauthier Lafruit (Université libre de Bruxelles)
Jan Hanca (Vrije Universiteit Brussel)
Duc Minh Nguyen (Vrije Universiteit Brussel)
Rui Zhong (Vrije Universiteit Brussel)