Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Taoran Lu is active.

Publication


Featured research published by Taoran Lu.


International Conference on Image Processing | 2010

Video retargeting with nonlinear spatial-temporal saliency fusion

Taoran Lu; Zheng Yuan; Yu Huang; Dapeng Wu; Heather Yu

Video retargeting (resolution adaptation) is a challenging problem because of its highly subjective nature. In this paper, we propose a nonlinear saliency-fusion approach that considers human perceptual characteristics for automatic video retargeting. First, we combine features from the phase spectrum of the quaternion Fourier transform (PQFT) in the spatial domain with the global motion residual, computed from feature points matched by the Kanade-Lucas-Tomasi (KLT) tracker, in the temporal domain. In addition, under a cropping-and-scaling retargeting framework, we propose content-aware information-loss metrics and a hierarchical search to find optimal cropping-window parameters. Results show that our approach successfully detects salient regions and retargets both images and videos.


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Addressing Visual Consistency in Video Retargeting: A Refined Homogeneous Approach

Zheng Yuan; Taoran Lu; Yu Huang; Dapeng Wu; Heather Yu

For the video retargeting problem, which adjusts video content to a smaller display device, it is not clear how to balance three conflicting design objectives: 1) visual interestingness preservation; 2) temporal retargeting consistency; and 3) nondeformation. To understand their perceptual importance, we first identify that the latter two play the dominant role in making retargeting results appealing. A statistical study of human responses to the retargeting scale is then carried out, suggesting that the global preservation of content pursued by most existing approaches is not necessary. Based on the newly prioritized objectives and the statistical findings, we design a video retargeting system which, as a refined homogeneous approach, addresses the temporal consistency issue holistically while still preserving a high degree of visual interestingness. In particular, we propose a volume retargeting cost metric that jointly considers the retargeting objectives and formulate video retargeting as an optimization problem in a graph representation, for which a dynamic programming solution is given. In addition, we introduce a nonlinear-fusion-based attention model to measure the distribution of visual interestingness. Experimental results from both image rendering and subjective tests indicate that our attention model and video retargeting system each outperform conventional methods.


Mobile and Ubiquitous Multimedia | 2011

Video summarization with semantic concept preservation

Zheng Yuan; Taoran Lu; Dapeng Wu; Yu Huang; Heather Yu

A compelling video summary should allow viewers to understand the summary content and correctly recover the original plot. To this end, we materialize the abstract elements that are cognitively informative for viewers as concepts; they implicitly convey the semantic structure and are instantiated by semantically redundant instances. We argue that a good summary should i) keep the various concepts complete and balanced, so as to give viewers comparable cognitive clues from a complete perspective, and ii) maximize saliency, so that the rendered summary is attractive to human perception. We then formulate video summarization as an integer programming problem and give a ranking-based solution. We also propose a novel method to discover the latent concepts by spectral clustering of bag-of-words features. Experimental results based on human evaluation scores demonstrate that our summarization approach performs well in terms of informativeness, enjoyability and scalability.
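The concept-discovery step can be sketched with standard spectral clustering machinery. The snippet below is a hypothetical two-way illustration (the paper's method is more general): it builds a cosine-similarity graph over shot-level bag-of-words histograms, forms the normalized graph Laplacian, and partitions by the sign of the Fiedler (second-smallest) eigenvector.

```python
import numpy as np

def spectral_bipartition(bow):
    """Split shot-level bag-of-words histograms into two latent 'concepts'.

    Hypothetical sketch: cosine-similarity graph -> normalized Laplacian ->
    sign of the second-smallest eigenvector (Fiedler vector) as the label.
    """
    X = bow / (np.linalg.norm(bow, axis=1, keepdims=True) + 1e-12)
    W = X @ X.T                                  # cosine similarity graph
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d + 1e-12))
    L = np.eye(len(W)) - Dinv @ W @ Dinv         # normalized graph Laplacian
    _, vecs = np.linalg.eigh(L)                  # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)          # sign of the Fiedler vector

# Two obvious word-usage patterns should fall into two concepts.
bow = np.array([[5, 0, 1], [4, 1, 0], [0, 5, 1], [1, 4, 0]], dtype=float)
labels = spectral_bipartition(bow)
```

For k concepts one would instead embed into the k smallest eigenvectors and run k-means, which is the usual normalized-spectral-clustering recipe.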


Proceedings of SPIE | 2009

Tracking of Multiple Objects under Partial Occlusion

Bing Han; Christopher Paulson; Taoran Lu; Dapeng Wu; Jian Li

The goal of multiple object tracking is to find the trajectory of the target objects through a number of frames from an image sequence. Generally, multi-object tracking is a challenging problem due to illumination variation, object occlusion, abrupt object motion and camera motion. In this paper, we propose a multi-object tracking scheme based on a new weighted Kanade-Lucas-Tomasi (KLT) tracker. The original KLT tracking algorithm tracks global feature points instead of a target object, and the features can hardly be tracked through a long sequence because some features may easily get lost after multiple frames. Our tracking method consists of three steps: the first step is to detect moving objects; the second step is to track the features within the moving object mask, where we use a consistency weighted function; and the last step is to identify the trajectory of the object. With an appropriately chosen weighting function, we are able to identify the trajectories of moving objects with high accuracy. In addition, our scheme is able to handle partial object occlusion.
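The consistency-weighting idea can be illustrated independently of the KLT tracker itself. The function below is a hypothetical sketch of the aggregation step only: given per-feature tracks inside an object mask, features whose frame-to-frame motion agrees with the object's robust (median) motion get high weight, and outliers are suppressed before averaging positions into one object trajectory.

```python
import numpy as np

def object_trajectory(tracks, sigma=2.0):
    """Aggregate per-feature tracks into a single object trajectory.

    tracks: array of shape (n_features, n_frames, 2) holding (x, y) positions.
    Features whose motion deviates from the per-frame median motion are
    down-weighted with a Gaussian consistency weight before averaging.
    """
    motion = np.diff(tracks, axis=1)                 # per-feature motion vectors
    median = np.median(motion, axis=0)               # robust object motion
    err = np.linalg.norm(motion - median, axis=2).mean(axis=1)
    w = np.exp(-(err / sigma) ** 2)                  # consistency weights
    w = w / w.sum()
    return (w[:, None, None] * tracks).sum(axis=0)   # weighted mean path

# Three features move right by 1 px/frame; one outlier drifts diagonally.
t = np.arange(5, dtype=float)
good = np.stack([np.stack([t + dx, np.zeros(5)], axis=1)
                 for dx in (0.0, 0.1, -0.1)])
outlier = np.stack([t * 4.0, t * 4.0], axis=1)[None]
tracks = np.concatenate([good, outlier], axis=0)
path = object_trajectory(tracks)
```

With the outlier's weight driven to nearly zero, the recovered path follows the consistent features, which is the behavior that lets the scheme survive partial occlusion of individual features.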


Data Compression Conference | 2016

Compression Efficiency Improvement over HEVC Main 10 Profile for HDR and WCG Content

Taoran Lu; Fangjun Pu; Peng Yin; Yuwen He; Louis Joseph Kerofsky; Yan Ye; Zhouye Gu; David M. Baylon

This paper presents the joint proposal by Arris, Dolby and InterDigital in response to the MPEG Call for Evidence on High Dynamic Range and Wide Color Gamut (HDR/WCG) video compression. The joint proposal introduces a set of new HDR coding technologies, including the IPT-PQ color space, an adaptive reshaping process, color enhancement filters, and an adaptive transfer function. These technologies are applied to the decoded output of an HEVC decoder; hence, no changes to the lower-level logic of the HEVC decoder are required to implement the proposal. Formal subjective tests conducted by MPEG confirmed that the proposal achieves significant subjective quality improvements over the HEVC Main10 anchors at similar bit rates for HDR/WCG video content.
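The PQ half of the IPT-PQ color space refers to the perceptual quantizer transfer function standardized in SMPTE ST 2084, whose constants are public. The sketch below implements that standard curve (the IPT transform and the rest of the proposal's pipeline are not reproduced here); luminance is normalized so that 1.0 corresponds to 10000 cd/m².

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(y):
    """Linear luminance (1.0 = 10000 cd/m^2) -> PQ codeword (inverse EOTF)."""
    yp = np.asarray(y, dtype=np.float64) ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

def pq_decode(e):
    """PQ codeword -> linear luminance (EOTF)."""
    ep = np.asarray(e, dtype=np.float64) ** (1 / M2)
    return (np.maximum(ep - C1, 0) / (C2 - C3 * ep)) ** (1 / M1)

signal = pq_encode(0.01)   # 100 cd/m^2 on the normalized scale, ~0.51
```

The strongly nonuniform codeword spacing of this curve is precisely what makes dark regions of HDR content sensitive to quantization, motivating the adaptive reshaping process described in the abstract.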


International Conference on Multimedia and Expo | 2013

Orthogonal Muxing Frame Compatible Full Resolution technology for multi-resolution frame-compatible stereo coding

Taoran Lu; Hariharan Ganapathy; Gopi Lakshminarayanan; Tao Chen; Walt Husak; Peng Yin

This paper describes the Orthogonal Muxing Frame Compatible Full Resolution (OM-FCFR) technology, which provides efficient compression and reconstruction of full-resolution stereoscopic video while maintaining compatibility with stereoscopic AVC Frame Compatible (FC) bitstreams. OM-FCFR is a dual-layer coding system built upon the MVC Stereo High Profile of AVC. Orthogonal muxing is used to design the enhancement-layer video signal, and a “Reference Process Unit (RPU)” creates the inter-layer reference picture, which serves as a good prediction for an enhancement-layer picture. Simulation results show significant improvement over the frame-compatible solution in both subjective and objective evaluations. This technology was submitted in response to the call for proposals (CfP) on MPEG Frame-Compatible (MFC) enhancement technology issued by ISO/IEC JTC1/SC29/WG11 (MPEG), and the specification was adopted as the working draft for the MFC standard and AVC Amendment 5.
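The core complementary-sampling idea can be shown with a toy example. This is a deliberately simplified illustration, not the actual OM-FCFR signal design (which additionally involves RPU-based inter-layer prediction and filtering): the base layer carries a side-by-side packing of the even columns of each view, and the enhancement layer carries the orthogonal (odd-column) samples, so the two layers together reconstruct both full-resolution views losslessly.

```python
import numpy as np

def fc_mux(left, right):
    """Side-by-side frame-compatible base layer plus a complementary
    ('orthogonal') enhancement layer.

    Base layer: even columns of each view, packed side by side.
    Enhancement layer: the remaining odd columns, packed the same way.
    """
    base = np.concatenate([left[:, 0::2], right[:, 0::2]], axis=1)
    enh = np.concatenate([left[:, 1::2], right[:, 1::2]], axis=1)
    return base, enh

def fc_demux(base, enh, width):
    """Reinterleave the two layers' samples into full-resolution views."""
    half = width // 2
    def rebuild(even, odd):
        out = np.empty((even.shape[0], width), dtype=even.dtype)
        out[:, 0::2], out[:, 1::2] = even, odd
        return out
    left = rebuild(base[:, :half], enh[:, :half])
    right = rebuild(base[:, half:], enh[:, half:])
    return left, right

left = np.arange(32, dtype=float).reshape(4, 8)
right = left + 100
base, enh = fc_mux(left, right)
l2, r2 = fc_demux(base, enh, 8)
```

A legacy FC decoder can use the base layer alone (half horizontal resolution per view), while an FCFR decoder combines both layers, which is the backward-compatibility property the abstract highlights.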


International Conference on Image Processing | 2010

Video retargeting: A visual-friendly dynamic programming approach

Zheng Yuan; Taoran Lu; Yu Huang; Dapeng Wu; Heather Yu

Video retargeting is the task of fitting standard-sized video onto an arbitrary screen. A compelling retargeting attempts to preserve most of the visual information of the original video while delivering a temporally consistent retargeted view. To handle long video sequences, we perform the task on a shot/subshot basis. For each frame, a crop pane is determined that optimally selects a region of interest as the retargeted frame, in two stages: minimizing visual information loss (the intra-frame consideration) to yield source and destination crop-pane parameters at boundary frames, and minimizing accumulated visual information loss under visual-inertness (inter-frame) constraints to find a smooth transition of the crop pane across interior frames. The second minimization is remodeled as a shortest-path problem in graph theory, and the parametric transition of crop panes is solved by dynamic programming. Experiments demonstrate that our approach preserves the salient regions of the original video while offering eye-friendly visual consistency.
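The shortest-path formulation can be sketched as a Viterbi-style dynamic program. In the sketch below the per-frame cost table `loss[t, x]` (visual-information loss of placing the crop window at horizontal offset x in frame t) and the quadratic motion penalty are hypothetical stand-ins for the paper's metrics; only the graph/DP structure is the point.

```python
import numpy as np

def smooth_crop_path(loss, motion_penalty=1.0):
    """Viterbi-style dynamic program for a temporally smooth crop-pane path.

    Minimizes sum_t loss[t, x_t] + motion_penalty * (x_t - x_{t-1})^2,
    i.e. per-frame information loss plus a pane-jitter penalty.
    """
    n_frames, n_pos = loss.shape
    cost = loss[0].copy()
    back = np.zeros((n_frames, n_pos), dtype=int)
    positions = np.arange(n_pos)
    for t in range(1, n_frames):
        # transition cost from every previous offset to every current offset
        trans = motion_penalty * (positions[None, :] - positions[:, None]) ** 2
        total = cost[:, None] + trans               # (prev, cur)
        back[t] = np.argmin(total, axis=0)          # best predecessor per offset
        cost = total[back[t], positions] + loss[t]
    path = np.zeros(n_frames, dtype=int)
    path[-1] = int(np.argmin(cost))
    for t in range(n_frames - 1, 0, -1):            # backtrack
        path[t - 1] = back[t, path[t]]
    return path

# Salient content sits at offset 2 except in one noisy frame; the smoothness
# term keeps the pane from chasing the outlier.
loss = np.full((5, 5), 1.0)
loss[:, 2] = 0.0
loss[2] = [0.0, 1.0, 1.0, 1.0, 1.0]   # frame 2 momentarily prefers offset 0
path = smooth_crop_path(loss, motion_penalty=1.0)
```

The DP runs in O(frames × positions²), which is what makes the per-shot optimization tractable.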


Proceedings of SPIE | 2016

Adaptive Reshaper for High Dynamic Range and Wide Color Gamut Video Compression

Taoran Lu; Fangjun Pu; Peng Yin; Jaclyn Pytlarz; Tao Chen; Walt Husak

High Dynamic Range (HDR) and Wide Color Gamut (WCG) content represents a greater range of luminance levels and a more complete reproduction of the colors found in real-world scenes. The characteristics of HDR/WCG content are very different from those of SDR content, which poses a challenge to compression systems originally designed for SDR content. Recently in MPEG/VCEG, two directions have been taken to improve compression performance for HDR/WCG video using the HEVC Main10 codec. The first is to improve HDR-10 through encoder optimization. The second is to modify the video signal in pre/post-processing to better fit the compression system; this process is outside the coding loop and does not involve changes to the HEVC specification. Among the many proposals in the second direction, the reshaper has been identified as the key component. In this paper, a novel luma reshaper is presented which reallocates codewords to help the codec improve subjective quality. In addition, encoder optimization can be performed jointly with reshaping. Experiments are conducted with the ICtCp color-difference signal. Simulation results show that when joint optimization of the reshaper and the encoder is carried out, there is evidence that improvement over the HDR-10 anchor can be achieved.
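Codeword reallocation can be illustrated as building a monotonic forward lookup table from a target codeword-density profile. The sketch below is a generic illustration of that idea, not the paper's reshaper: the `density` profile (how many output codewords each bin of the input range deserves) is a hypothetical input that a real reshaper would derive from content statistics.

```python
import numpy as np

def build_reshaper(density, n_codewords=1024):
    """Forward luma reshaping LUT from a per-bin codeword-density profile.

    `density[b]` is the relative codeword budget for bin b of the input
    range; the normalized cumulative sum becomes a monotonic forward LUT
    over the 10-bit output range.
    """
    density = np.asarray(density, dtype=np.float64)
    cdf = np.concatenate([[0.0], np.cumsum(density)])
    cdf = cdf / cdf[-1] * (n_codewords - 1)
    x = np.linspace(0, 1, n_codewords)             # normalized input codewords
    bins = np.linspace(0, 1, len(density) + 1)     # bin boundaries
    return np.interp(x, bins, cdf).round().astype(int)

# Give the dark half of the signal twice the codeword budget.
fwd = build_reshaper([2.0, 2.0, 1.0, 1.0])
mid = fwd[512]   # input mid-gray lands well above output mid-gray
```

At the decoder side the inverse LUT undoes the mapping, so the reshaper changes only how codewords are spent inside the codec, not the reconstructed signal range.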


International Conference on Consumer Electronics | 2011

A video coding analyzer for next-generation compression standards

Taoran Lu; Xiaoan Lu; Qian Xu; Yunfei Zheng; Joel Sole; Peng Yin

A video coding analyzer enables users to quickly and easily check the codec performance through a graphical user interface. To assist the standardization effort of High Efficiency Video Coding (HEVC), we design an analyzer that takes the statistical data as the input and is independent of the syntax. We illustrate our design principles using the Test Model under Consideration (TMuC) for HEVC.


Applications of Digital Image Processing XLI | 2018

Adaptive reshaping for next generation video codec

Taoran Lu; Peng Yin; Fangjun Pu; Tao Chen; Walt Husak

Earlier work in MPEG/JCTVC has shown that out-of-loop reshaping, which modifies the video signal in pre-processing before encoding and post-processing after decoding in an end-to-end video compression workflow, can improve the subjective quality of High Dynamic Range (HDR) and Wide Color Gamut (WCG) content compressed with HEVC. However, the requirement of making no normative changes to the HEVC specification has significantly constrained the design and optimization of the reshaper. In April 2018, the Joint Video Experts Team (JVET) launched a project to develop a new video coding standard known as Versatile Video Coding (VVC). This opens the door to reshaper designs inside the core video codec. In this paper, an in-loop reshaping architecture is presented. Preliminary results suggest that the in-loop architecture retains the functionality of the out-of-loop reshaper. In addition, the in-loop design resolves many limitations of the out-of-loop design and can be used as a general coding tool for video content not limited to HDR.

Collaboration


Dive into Taoran Lu's collaboration.

Top Co-Authors


Dapeng Wu

Henan Normal University
