Publications


Featured research published by Yongwei Nie.


IEEE Transactions on Visualization and Computer Graphics | 2013

Compact Video Synopsis via Global Spatiotemporal Optimization

Yongwei Nie; Chunxia Xiao; Hanqiu Sun; Ping Li

Video synopsis aims at providing condensed representations of video data sets, such as the daily surveillance videos now easily captured by digital cameras. Previous work in video synopsis usually moves active objects along the time axis, which inevitably causes collisions among the moving objects when the video is heavily compressed. In this paper, we propose a novel approach for compact video synopsis using a unified spatiotemporal optimization. Our approach globally shifts moving objects in both the spatial and temporal domains, shifting objects temporally to reduce the length of the video and shifting colliding objects spatially to avoid visible collision artifacts. Furthermore, using a multilevel patch relocation (MPR) method, the moving space of the original video is expanded into a compact background based on environmental content to fit the shifted objects. The shifted objects are finally composited with the expanded moving space to obtain a high-quality video synopsis that is more condensed while remaining free of collision artifacts. Our experimental results show that the compact video synopsis we produce can be browsed quickly, preserves relative spatiotemporal relationships, and avoids motion collisions.
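To make the shift optimization concrete, here is a minimal sketch of the idea. Object "tubes" are simplified to static 2D boxes with a start time and duration, and the energy terms and greedy coordinate-descent solver are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: simplified tubes, energy, and solver are assumptions.
import numpy as np

class Tube:
    def __init__(self, box, start, length):
        self.box = np.array(box, float)     # [x, y, w, h], fixed per tube here
        self.start, self.length = start, length
        self.dt, self.dxy = 0, np.zeros(2)  # temporal / spatial shifts

def overlap(a, b):
    """Spatiotemporal collision volume of two shifted tubes."""
    t0 = max(a.start + a.dt, b.start + b.dt)
    t1 = min(a.start + a.dt + a.length, b.start + b.dt + b.length)
    if t1 <= t0:
        return 0.0
    ax, ay = a.box[:2] + a.dxy
    bx, by = b.box[:2] + b.dxy
    ox = max(0.0, min(ax + a.box[2], bx + b.box[2]) - max(ax, bx))
    oy = max(0.0, min(ay + a.box[3], by + b.box[3]) - max(ay, by))
    return (t1 - t0) * ox * oy

def energy(tubes, w_len=1.0, w_col=10.0, w_dev=0.1):
    synopsis_len = max(t.start + t.dt + t.length for t in tubes)
    col = sum(overlap(a, b) for i, a in enumerate(tubes) for b in tubes[i+1:])
    dev = sum(np.sum(t.dxy ** 2) for t in tubes)   # penalize spatial shifts
    return w_len * synopsis_len + w_col * col + w_dev * dev

def optimize(tubes, iters=20):
    """Greedy coordinate descent over candidate temporal/spatial shifts."""
    for _ in range(iters):
        for t in tubes:
            best = (energy(tubes), t.dt, t.dxy.copy())
            for ddt in range(-t.start, 1):           # shift earlier in time
                for dxy in ([0, 0], [20, 0], [-20, 0], [0, 20], [0, -20]):
                    t.dt, t.dxy = ddt, np.array(dxy, float)
                    e = energy(tubes)
                    if e < best[0]:
                        best = (e, ddt, t.dxy.copy())
            t.dt, t.dxy = best[1], best[2]
    return tubes

# Usage: two tubes get pulled earlier in time and apart in space.
tubes = optimize([Tube([0, 0, 40, 80], 0, 50), Tube([10, 0, 40, 80], 60, 50)])
print([(t.dt, t.dxy) for t in tubes])
```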


IEEE Transactions on Visualization and Computer Graphics | 2011

Efficient Edit Propagation Using Hierarchical Data Structure

Chunxia Xiao; Yongwei Nie; Feng Tang

This paper presents a novel unified hierarchical structure for scalable edit propagation. Our method is based on the key observation that in edit propagation, appearance varies very smoothly in regions whose appearance differs from the user-specified pixels. Uniformly sampling these regions leads to redundant computation. We propose a quadtree-based adaptive subdivision method such that more samples are selected in similar regions and fewer in regions that differ from the user-specified ones. As a result, both the computation and the memory requirement are significantly reduced. In edit propagation, an edge-preserving propagation function is first built, and the full solution for all pixels can be computed by interpolating from the solution obtained on the adaptively subdivided domain. Furthermore, our approach can be easily extended to accelerate video edit propagation using an adaptive octree structure. To improve user interaction, we introduce several new Gaussian Mixture Model (GMM) brushes for finding pixels similar to the user-specified regions. Compared with previous methods, our approach requires significantly less time and memory while achieving visually equivalent results. Experimental results demonstrate the efficiency and effectiveness of our approach on high-resolution photographs and videos.
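A minimal sketch of the quadtree sampling idea: cells whose appearance varies little are kept whole (one sample), while cells with high variance are subdivided. The variance criterion and thresholds are illustrative assumptions; the paper's propagation solve and GMM brushes are omitted.

```python
# Illustrative sketch only: variance test and thresholds are assumptions.
import numpy as np

def quadtree_samples(img, x0, y0, w, h, var_thresh=0.05, min_size=4):
    """Return (x, y) sample positions, adapting density to appearance."""
    patch = img[y0:y0 + h, x0:x0 + w]
    if w <= min_size or h <= min_size or patch.var() < var_thresh:
        return [(x0 + w // 2, y0 + h // 2)]        # one sample for this cell
    hw, hh = w // 2, h // 2
    samples = []
    for (cx, cy, cw, ch) in [(x0, y0, hw, hh), (x0 + hw, y0, w - hw, hh),
                             (x0, y0 + hh, hw, h - hh),
                             (x0 + hw, y0 + hh, w - hw, h - hh)]:
        samples += quadtree_samples(img, cx, cy, cw, ch, var_thresh, min_size)
    return samples

# Usage: a smooth gradient image with one textured quadrant gets dense
# samples in the textured corner and sparse samples elsewhere.
img = np.tile(np.linspace(0, 1, 256, dtype=np.float32), (256, 1))
img[:64, :64] += 0.3 * np.random.rand(64, 64).astype(np.float32)
print(len(quadtree_samples(img, 0, 0, 256, 256)))
```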


The Visual Computer | 2013

Video retargeting combining warping and summarizing optimization

Yongwei Nie; Qing Zhang; Renfang Wang; Chunxia Xiao

We construct a unified interactive video retargeting system for video summarization, completion, and reshuffling. Our system combines the advantages of both video warping and summarizing processing. We first warp the video to produce initial editing results, then refine them using a patch-based summarizing optimization that mainly eliminates distortion introduced in the warping step. For initialization, we develop a Mean Value Coordinate (MVC) warping method, chosen for its simplicity and efficiency. For refinement, the summarization optimization is built on a 3D bidirectional similarity measure between the original and edited video, preserving the coherence and completeness of the final editing result. We further improve the quality of summarization by applying color histogram matching during the optimization, and accelerate the summarization optimization using a constrained 3D PatchMatch algorithm. Experimental results show that the proposed video retargeting system effectively supports video summarization, completion, and reshuffling while avoiding artifacts such as broken textures, video jittering, and loss of detail.
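For intuition, here is a minimal 2D sketch of the bidirectional similarity measure mentioned above. The paper uses a 3D spatiotemporal version accelerated with constrained PatchMatch; this brute-force 2D form, suitable only for tiny inputs, is an illustrative assumption showing the completeness + coherence structure.

```python
# Illustrative sketch only: brute-force 2D stand-in for the 3D measure.
import numpy as np

def patches(img, p=5):
    """All p x p patches of a grayscale image, flattened to rows."""
    h, w = img.shape[:2]
    out = [img[y:y + p, x:x + p].ravel()
           for y in range(h - p + 1) for x in range(w - p + 1)]
    return np.array(out)

def bds(source, target, p=5):
    """Completeness: every source patch has a match in the target.
       Coherence: every target patch comes from somewhere in the source."""
    ps, pt = patches(source, p), patches(target, p)
    d = ((ps[:, None, :] - pt[None, :, :]) ** 2).sum(-1)  # all pairwise SSDs
    complete = d.min(axis=1).mean()   # source -> target
    cohere = d.min(axis=0).mean()     # target -> source
    return complete + cohere

# Usage: identical images score 0; edits raise the measure.
a = np.random.rand(16, 16).astype(np.float32)
print(bds(a, a), bds(a, np.flipud(a)))
```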


IEEE Transactions on Visualization and Computer Graphics | 2014

Object movements synopsis via part assembling and stitching

Yongwei Nie; Hanqiu Sun; Ping Li; Chunxia Xiao; Kwan-Liu Ma

Video synopsis aims at removing a video's less important information while preserving its key content for fast browsing, retrieval, or efficient storage. Previous video synopsis methods, including frame-based and object-based approaches that remove valueless whole frames or combine objects from different time slots, cannot handle videos with redundancy in the movements of a single video object. In this paper, we present a novel part-based object movement synopsis method, which can effectively compress the redundant information of a moving video object and represent the synopsized object seamlessly. Our method works by part-based assembling and stitching. The object movement sequence is first divided into several part movement sequences. Then, we optimally assemble moving parts from different part sequences to produce an initial synopsis result. The optimal assembling is formulated as a part movement assignment problem on a Markov Random Field (MRF), which guarantees that the most important moving parts are selected while preserving both the spatial compatibility between assembled parts and the chronological order of parts. Finally, we present a non-linear spatiotemporal optimization formulation to stitch the assembled parts seamlessly and achieve the final compact video object synopsis. Experiments on a variety of input video objects demonstrate the effectiveness of the presented synopsis method.
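A minimal sketch of the part-assignment MRF, simplified to a chain of parts solved exactly by dynamic programming (Viterbi). The paper's graph and energy terms are richer; the unary/pairwise costs here are illustrative assumptions.

```python
# Illustrative sketch only: chain MRF with assumed cost tables.
import numpy as np

def assemble_parts(unary, pairwise):
    """unary[i][k]: cost of choosing candidate movement k for part i.
       pairwise[i][k][l]: cost between part i's choice k and part i+1's l
       (e.g., spatial compatibility and chronological-order penalties)."""
    n, m = unary.shape
    cost = unary[0].copy()
    back = np.zeros((n, m), dtype=int)
    for i in range(1, n):
        total = cost[:, None] + pairwise[i - 1] + unary[i][None, :]
        back[i] = total.argmin(axis=0)   # best predecessor for each choice
        cost = total.min(axis=0)
    labels = [int(cost.argmin())]
    for i in range(n - 1, 0, -1):        # backtrack the optimal assignment
        labels.append(int(back[i][labels[-1]]))
    return labels[::-1]                  # one candidate movement per part

# Usage with random costs: 4 parts, 6 candidate movements each.
rng = np.random.default_rng(0)
print(assemble_parts(rng.random((4, 6)), rng.random((3, 6, 6))))
```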


Computer-Aided Design and Computer Graphics | 2010

Fast multi-scale joint bilateral texture upsampling

Chunxia Xiao; Yongwei Nie; Wei Hua; Wenting Zheng

We present a new approach using a multi-scale joint bilateral filter for upsampling the synthesized texture generated by optimization-based methods. Our method is based on the following motivation: if the available exemplar texture is used as a prior to upsample the synthesized texture, a high-resolution result free of image blurring can be obtained. Our multi-scale joint bilateral upsampling applies a spatial filter on each multi-scale layer of the synthesized texture and jointly applies a similar range filter on the exemplar texture, which guides the interpolation from low to high resolution. By magnifying and combining the upsampled information, the details of the upsampled texture are progressively enhanced, and image blurring artifacts are effectively avoided. We offer an accelerated joint bilateral filter, which enables our upsampling method to interactively generate large textures. In addition, we propose a detail-aware texture optimization approach that incorporates image detail into texture optimization to improve the quality of the synthesized texture, on which the multi-scale joint bilateral filter then works to generate a more convincing result. We show results for upsampling image and video textures and compare them with traditional upsampling methods, demonstrating that our method achieves better results at low computational and memory costs.
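A minimal single-level sketch of joint bilateral upsampling: a spatial kernel over the low-resolution result combined with a range kernel on a high-resolution guide (here standing in for the exemplar). The sigmas and the slow nested-loop implementation are illustrative; the paper adds multi-scale layers and an accelerated filter.

```python
# Illustrative sketch only: single level, unaccelerated, assumed sigmas.
import numpy as np

def joint_bilateral_upsample(low, guide, sigma_s=1.0, sigma_r=0.1, radius=2):
    """low: (h, w) float coarse result; guide: (H, W) float high-res guide."""
    H, W = guide.shape
    h, w = low.shape
    sy, sx = h / H, w / W
    out = np.zeros_like(guide)
    for Y in range(H):
        for X in range(W):
            y0, x0 = int(round(Y * sy)), int(round(X * sx))  # low-res center
            acc = wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = y0 + dy, x0 + dx
                    if 0 <= y < h and 0 <= x < w:
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        gy = min(H - 1, int(y / sy))          # guide position
                        gx = min(W - 1, int(x / sx))
                        dr = guide[Y, X] - guide[gy, gx]
                        wr = np.exp(-(dr * dr) / (2 * sigma_r ** 2))
                        acc += ws * wr * low[y, x]
                        wsum += ws * wr
            out[Y, X] = acc / wsum
    return out
```

The range weight on the guide is what prevents blurring across texture edges: neighbors whose guide values differ from the output pixel contribute little, so sharp exemplar structure survives the upsampling.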


Computers & Graphics | 2012

VEA 2012: Interactive image/video retexturing using GPU parallelism

Ping Li; Hanqiu Sun; Chen Huang; Jianbing Shen; Yongwei Nie

This paper presents an interactive retexturing approach that preserves similar underlying texture distortion between the original and retextured images/videos. The system offers real-time interactive feedback for easy control of target object definition, texture selection with size adjustment, and overall lighting tuning, using the latest GPU parallelism. Existing retexturing and synthesis methods handle texture distortion by manipulating inter-pixel distances, and the underlying texture distortion of the original images is often destroyed by limitations such as improper distortion caused by manual mesh stretching or unavoidable texture splitting during synthesis. The long processing time caused by expensive filtering is also unacceptable. We propose to utilize SIFT corner features to naturally discover the underlying texture distortion. Gradient depth recovery and wrinkle energy optimization are applied to accomplish the distortion process. We facilitate interactive retexturing according to users' needs via a real-time bilateral grid and feature-guided texture distortion optimization using CUDA parallelism, and video retexturing is accomplished by keyframe-based texture transfer using real-time TV-L1 optical flow with patch-based block motion techniques. Our interactive retexturing using feature-guided gradient optimization provides realistic results while preserving the original texture distortion in cornered areas. In the experiments, our method consistently demonstrates high-quality image/video retexturing with real-time interactive feedback.
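A minimal sketch of the keyframe-based texture transfer step mentioned above: texture applied on a keyframe is carried to another frame by warping with a dense optical-flow field. The paper computes TV-L1 flow on the GPU; here the flow is assumed given, and bilinear sampling via scipy.ndimage is an illustrative implementation choice.

```python
# Illustrative sketch only: flow assumed given, SciPy used for sampling.
import numpy as np
from scipy.ndimage import map_coordinates

def transfer_texture(keyframe_tex, flow):
    """keyframe_tex: (H, W) retextured keyframe; flow: (H, W, 2) with
       per-pixel (dy, dx) mapping target-frame pixels back to the keyframe."""
    H, W = keyframe_tex.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float32)
    coords = np.stack([ys + flow[..., 0], xs + flow[..., 1]])
    return map_coordinates(keyframe_tex, coords, order=1, mode='nearest')

# Usage: a constant flow of (0, 3) shifts the texture three pixels.
tex = np.random.rand(64, 64).astype(np.float32)
flow = np.zeros((64, 64, 2), np.float32)
flow[..., 1] = 3.0
warped = transfer_texture(tex, flow)
```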


IEEE Transactions on Visualization and Computer Graphics | 2016

Underexposed Video Enhancement via Perception-Driven Progressive Fusion

Qing Zhang; Yongwei Nie; Ling Zhang; Chunxia Xiao

Underexposed video enhancement aims at revealing hidden details that are barely noticeable in noisy LDR video frames. Previous work typically relies on a single heuristic tone mapping curve to expand the dynamic range, which inevitably leads to uneven exposure and visual artifacts. In this paper, we present a novel approach for underexposed video enhancement using efficient perception-driven progressive fusion. For an input underexposed video, we first remap each video frame using a series of tentative tone mapping curves to generate a multi-exposure image sequence that contains differently exposed versions of the original frame. Guided by visual perception quality measures encoding the desirable exposed appearance, we locate the best-exposed regions in the multi-exposure image sequences and then integrate them into a well-exposed video in a temporally consistent manner. Finally, we perform an effective texture-preserving spatio-temporal filtering on this well-exposed video to obtain a high-quality noise-free result. Experimental results show that the enhanced video exhibits uniform exposure, brings out noticeable details, preserves temporal coherence, and avoids visual artifacts. Besides, we demonstrate applications of our approach to a set of problems including video dehazing, video denoising, and HDR video reconstruction.
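A minimal single-frame sketch of the progressive-fusion idea: remap the frame with several tentative tone curves, score each pixel with a well-exposedness measure, and blend the best-exposed versions. The gamma curves and the Gaussian well-exposedness weight are illustrative assumptions; the paper's perceptual measures, temporal consistency, and denoising are omitted.

```python
# Illustrative sketch only: gamma curves and weight function are assumptions.
import numpy as np

def enhance_frame(frame, gammas=(1.0, 0.7, 0.5, 0.35), sigma=0.2):
    """frame: underexposed float image in [0, 1]."""
    versions = [np.power(frame, g) for g in gammas]    # tentative tone curves
    # Well-exposedness: pixels near mid-gray get the highest weight.
    weights = [np.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2)) for v in versions]
    wsum = np.sum(weights, axis=0) + 1e-8
    fused = sum(w * v for w, v in zip(weights, versions)) / wsum
    return np.clip(fused, 0.0, 1.0)

# Usage: a synthetic dark frame becomes brighter where detail is recoverable.
dark = np.random.rand(64, 64).astype(np.float32) * 0.2
print(dark.mean(), enhance_frame(dark).mean())
```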


The Visual Computer | 2015

Content-aware model resizing with symmetry-preservation

Chunxia Xiao; Liqiang Jin; Yongwei Nie; Renfang Wang; Hanqiu Sun; Kwan-Liu Ma

This paper presents a new model resizing approach that preserves the important geometric content and geometric symmetries of the model being resized. We first extract high-level symmetric regions and other low-level salient regions from the input model. Then, we map the extracted low-level and high-level geometry information to a protective tetrahedral mesh around the model, and we define a symmetry-preserving, content-aware resizing function on the tetrahedral mesh using 3D mean value coordinates space deformation. By interpolating within the resized tetrahedral mesh, we obtain the final resizing result for the embedded model. Interactively defined user constraints can also be incorporated into our framework to produce more desirable results. Our results show that the resized models preserve the geometric features of important regions as well as the symmetric aspects of the original model, while eliminating undesirable distortion in other, less important regions.
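A minimal sketch of the interpolation step: an embedded point is expressed in coordinates of its enclosing tetrahedron, and its resized position is recovered from the deformed tetrahedron. The full method uses 3D mean value coordinates over a protective tet mesh; plain barycentric coordinates serve here as a simpler illustrative stand-in.

```python
# Illustrative sketch only: barycentric coordinates stand in for 3D MVC.
import numpy as np

def barycentric(p, tet):
    """Barycentric coordinates of point p inside tetrahedron tet (4 x 3)."""
    T = np.column_stack([tet[0] - tet[3], tet[1] - tet[3], tet[2] - tet[3]])
    w = np.linalg.solve(T, p - tet[3])
    return np.append(w, 1.0 - w.sum())   # weights for the four vertices

def deform_point(p, tet, tet_resized):
    """Carry p through the deformation of its enclosing tetrahedron."""
    return barycentric(p, tet) @ tet_resized

# Usage: stretching the tetrahedron 2x along x carries the point with it.
tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
tet2 = tet * np.array([2.0, 1.0, 1.0])
print(deform_point(np.array([0.25, 0.25, 0.25]), tet, tet2))
```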


International Conference on Computer Graphics and Interactive Techniques | 2016

Video stitching for handheld inputs via combined video stabilization

Tan Su; Yongwei Nie; Zhensong Zhang; Hanqiu Sun; Guiqing Li

Stitching videos captured by handheld devices is very useful but also very challenging due to the heavy and independent shakiness of the videos. In this paper, we propose a handheld video stitching method that combines video stitching and video stabilization in a unified optimization framework. In this way, our method computes stabilization and stitching results that are optimal with respect to each other, outperforming previous methods that treat stabilization and stitching as separate operations. Our method is based on the framework of bundled camera paths [Liu et al. 2013]. We present a novel unified camera path optimization formulation consisting of two stabilization terms and one stitching term, together with a corresponding iterative solver that numerically finds the best stitching and stabilization solutions. We compare our method with previous methods, and the experiments demonstrate its effectiveness.
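A minimal 1D sketch of the unified camera-path optimization: two fidelity-plus-smoothness (stabilization) terms, one per video, and a stitching term pulling the two stabilized paths toward a consistent relative offset. Real bundled camera paths are grids of homographies; this scalar least-squares form is an illustrative assumption.

```python
# Illustrative sketch only: scalar paths and weights are assumptions.
import numpy as np

def joint_stabilize_stitch(ca, cb, lam=50.0, mu=5.0):
    """ca, cb: per-frame 1D camera positions of the two shaky videos."""
    n = len(ca)
    d = float(np.mean(ca - cb))   # target relative offset for stitching
    rows, rhs = [], []

    def eq(coeffs, b, w):         # one weighted least-squares equation
        r = np.zeros(2 * n)
        for idx, c in coeffs:
            r[idx] = c
        rows.append(w * r)
        rhs.append(w * b)

    for t in range(n):
        eq([(t, 1.0)], ca[t], 1.0)                       # fidelity, path A
        eq([(n + t, 1.0)], cb[t], 1.0)                   # fidelity, path B
        eq([(t, 1.0), (n + t, -1.0)], d, np.sqrt(mu))    # stitching term
    for t in range(n - 1):
        eq([(t + 1, 1.0), (t, -1.0)], 0.0, np.sqrt(lam))          # smooth A
        eq([(n + t + 1, 1.0), (n + t, -1.0)], 0.0, np.sqrt(lam))  # smooth B
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x[:n], x[n:]   # stabilized, mutually aligned paths

# Usage: two shaky drifting paths become smooth and consistently offset.
t = np.arange(100, dtype=float)
pa, pb = joint_stabilize_stitch(t + np.random.randn(100) * 3,
                                t - 10 + np.random.randn(100) * 3)
```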


IEEE Transactions on Visualization and Computer Graphics | 2017

Homography Propagation and Optimization for Wide-Baseline Street Image Interpolation

Yongwei Nie; Zhensong Zhang; Hanqiu Sun; Tan Su; Guiqing Li

Wide-baseline street image interpolation is useful but very challenging. Existing approaches rely either on heavyweight 3D reconstruction or on computationally intensive deep networks. We present a lightweight and efficient method that uses simple homography computing and refining operators to estimate piecewise smooth homographies between input views. To achieve this, we show how to combine homography fitting with homography propagation, based on discriminating reliable from unreliable superpixels. Such a combination, rather than homography fitting alone, dramatically increases the accuracy and robustness of the estimated homographies. Then, we integrate the concepts of homography and mesh warping, and propose a novel homography-constrained warping formulation that enforces smoothness between neighboring homographies by exploiting the first-order continuity of the warped mesh. This further eliminates small artifacts such as overlapping and stretching. The proposed method is lightweight and flexible, and allows wide-baseline interpolation. It improves the state of the art and demonstrates that homography computation suffices for interpolation. Experiments on city and rural datasets validate the efficiency and effectiveness of our method.
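A minimal sketch of the fitting + propagation idea: regions with enough matched feature points are treated as reliable and get a homography by direct linear transform (DLT); unreliable regions inherit the homography of the nearest reliable region. The reliability test and propagation rule are illustrative assumptions; the paper's superpixel graph, refinement, and warping are omitted.

```python
# Illustrative sketch only: reliability test and propagation are assumptions.
import numpy as np

def fit_homography(src, dst):
    """DLT from >= 4 point correspondences; src, dst: (n, 2) arrays."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(A))
    H = vt[-1].reshape(3, 3)       # null-space solution
    return H / H[2, 2]

def propagate(regions, min_pts=8):
    """regions: list of dicts with 'center' and matched 'src'/'dst' points."""
    reliable = [r for r in regions if len(r['src']) >= min_pts]
    for r in reliable:
        r['H'] = fit_homography(r['src'], r['dst'])
    for r in regions:
        if 'H' not in r:   # unreliable: inherit nearest reliable homography
            nearest = min(reliable, key=lambda q: np.sum(
                (np.asarray(q['center']) - np.asarray(r['center'])) ** 2))
            r['H'] = nearest['H']
    return regions
```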

Collaboration


Dive into Yongwei Nie's collaborations.

Top Co-Authors

Hanqiu Sun, The Chinese University of Hong Kong
Zhensong Zhang, The Chinese University of Hong Kong
Guiqing Li, South China University of Technology
Ping Li, Macau University of Science and Technology
Tan Su, The Chinese University of Hong Kong
Jianbing Shen, Beijing Institute of Technology
Renfang Wang, Zhejiang Wanli University
Chen Huang, The Chinese University of Hong Kong