Lizhuang Ma
Shanghai Jiao Tong University
Publications
Featured research published by Lizhuang Ma.
Computer Graphics Forum | 2009
Xuezhong Xiao; Lizhuang Ma
Color transfer is an image processing technique that produces a new image combining one source image's content with another image's color style. While able to produce convincing results, Reinhard et al.'s pioneering work has two problems: the mixing of colors from different regions, and the fidelity problem. Many local color transfer algorithms have been proposed to resolve the first problem, but the second has received little attention.
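The global statistics-matching idea behind this line of work can be sketched as follows. This is an illustrative reduction: Reinhard et al. perform the matching in the decorrelated lαβ color space, whereas this sketch applies it per channel directly.

```python
import numpy as np

def global_color_transfer(source, target):
    """Match each channel of `source` to the mean/std of `target`.

    Minimal sketch of Reinhard-style global color transfer, applied per
    channel directly; the original method first converts the images to
    the decorrelated l-alpha-beta color space.
    """
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        # shift/scale the source statistics toward the target's
        out[..., c] = (src[..., c] - s_mean) * (t_std / (s_std + 1e-8)) + t_mean
    return out
```

Because the mapping is global, colors from different regions are transformed identically, which is exactly the first problem the abstract describes.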
Pattern Recognition Letters | 2009
Canlin Li; Lizhuang Ma
The description of interest points is a critical aspect of point correspondence, which is vital in many computer vision and pattern recognition tasks. The SIFT descriptor has been shown to be more distinctive and robust than other local descriptors. However, SIFT does not incorporate the color and global information of a feature point, which provide powerful discriminative signals in feature description and matching, so many mismatches may occur. This paper improves the SIFT descriptor and presents a new feature-descriptor framework based on SIFT that integrates color and global information with it. The proposed framework consists of the improved SIFT, color-invariance components, and a global component. We use log-polar histograms to build the three color-invariance components and the global component of the proposed framework. In addition, an elliptical neighboring region around every interest point is used to make the framework fully invariant to common affine transformations. Experimental comparisons with three related feature descriptors, carried out in two groups of experiments, validate the proposed framework.
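The log-polar binning used to build these components can be sketched as follows. The bin counts and the radius are illustrative choices, not the paper's parameters:

```python
import numpy as np

def log_polar_histogram(points, center, values, n_radial=3, n_angular=8, r_max=16.0):
    """Accumulate `values` of neighborhood `points` into a log-polar
    spatial histogram around `center`.

    Sketch of log-polar binning: radial bins grow logarithmically, so
    resolution is finer near the interest point and coarser far away.
    """
    pts = np.asarray(points, dtype=np.float64) - np.asarray(center, dtype=np.float64)
    r = np.hypot(pts[:, 0], pts[:, 1])
    theta = np.mod(np.arctan2(pts[:, 1], pts[:, 0]), 2 * np.pi)
    r = np.clip(r, 1e-6, r_max)
    # logarithmic radial bin index, linear angular bin index
    r_bin = np.minimum((np.log1p(r) / np.log1p(r_max) * n_radial).astype(int), n_radial - 1)
    a_bin = np.minimum((theta / (2 * np.pi) * n_angular).astype(int), n_angular - 1)
    hist = np.zeros((n_radial, n_angular))
    np.add.at(hist, (r_bin, a_bin), values)  # unbuffered accumulation
    return hist
```

Feeding color-invariant pixel values through such a histogram yields one fixed-length component per channel, which can then be concatenated with the SIFT vector.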
Journal of Visualization and Computer Animation | 2000
Janis P. Y. Wong; Rynson W. H. Lau; Lizhuang Ma
This paper presents a virtual sculpting method for interactive 3D object deformation. The method is based on the use of an electronic glove. A parametric control hand surface, defined by an open-uniform B-spline tensor-product surface, is first created to model the hand gesture. The geometric attributes of the object in Euclidean 3D space are then mapped to the parametric domain of the control hand surface through a ray-projection method. By maintaining the distance between the mapped pairs, changes in hand gesture can be efficiently transferred to control the deformation of the object.
The Visual Computer | 2009
Zhong Li; Lizhuang Ma; Xiaogang Jin; Zuoyong Zheng
We present a novel mesh denoising and smoothing method in this paper. Our approach starts by estimating the principal curvatures and mesh saliency value for each vertex. We then calculate the uniform principal curvature of each vertex as the weighted average of local principal curvatures. After that, we fit a weighted bi-quadratic Bézier surface to the neighborhood of each vertex using the least-squares method and obtain the new vertex position by adjusting the parameters of the fitting surface. Experiments show that our smoothing method efficiently preserves the geometric features of the original mesh model. Our approach also prevents volume shrinkage of the input mesh and produces smooth boundaries for non-closed mesh models.
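The per-vertex surface-fitting step can be sketched as below. This is a simplified stand-in: it fits a quadratic height function by least squares in an assumed local (x, y, z) frame, whereas the paper uses a weighted bi-quadratic Bézier surface; both are quadratic surfaces estimated from the neighborhood.

```python
import numpy as np

def fit_quadratic_height(neighbors, vertex, weights=None):
    """Least-squares fit z = a + bx + cy + dx^2 + exy + fy^2 to a
    vertex neighborhood, returning the fitted height at `vertex`.

    Simplified sketch of the vertex-repositioning idea: the denoised
    vertex is moved onto a smooth surface fitted to its neighbors.
    """
    nb = np.asarray(neighbors, dtype=np.float64)
    x, y, z = nb[:, 0], nb[:, 1], nb[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    if weights is not None:
        w = np.sqrt(np.asarray(weights, dtype=np.float64))
        A, z = A * w[:, None], z * w   # weighted least squares
    coeff, *_ = np.linalg.lstsq(A, z, rcond=None)
    vx, vy = vertex
    basis = np.array([1.0, vx, vy, vx * vx, vx * vy, vy * vy])
    return basis @ coeff  # denoised height at the vertex
```

Weighting the neighbors (e.g., by the curvature-based similarity the abstract describes) biases the fit toward the vertex's own smooth region.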
IEEE Transactions on Image Processing | 2015
Hao Du; Shengfeng He; Bin Sheng; Lizhuang Ma; Rynson W. H. Lau
Image decolorization is a fundamental problem for many real-world applications, including monochrome printing and photograph rendering. In this paper, we propose a new color-to-gray conversion method that is based on a region-based saliency model. First, we construct a parametric color-to-gray mapping function based on global color information as well as local contrast. Second, we propose a region-based saliency model that computes visual contrast among pixel regions. Third, we minimize the saliency difference between the original color image and the output grayscale image in order to preserve contrast discrimination. To evaluate the performance of the proposed method in preserving contrast in complex scenarios, we have constructed a new decolorization data set with 22 images, each of which contains abundant colors and patterns. Extensive experimental evaluations on the existing and the new data sets show that the proposed method outperforms the state-of-the-art methods quantitatively and qualitatively.
The Visual Computer | 2010
Zhifeng Xie; Yang Shen; Lizhuang Ma; Zhihua Chen
Seamless video composition, the process of pasting a source video patch into a target video sequence, is an important and useful video editing operation. Recently, a novel composition approach based on mean-value coordinates was presented; however, its results are often degraded by smudging and discoloration artifacts. We therefore propose optimized mean-value cloning, which eliminates these artifacts by means of a matting technique and an interpolation constraint. On the basis of this optimized approach, we further present a new framework for seamless video composition. In the framework, we first develop a propagation model based on contour flow to yield a trimap that provides each frame with a pre-segmentation into definite foreground, definite background, and unknown regions. This propagation model constructs the inter-frame contour flow by minimizing a cost function and employs it to relabel the trimap. Moreover, for cases where the trimap propagation model fails due to abrupt feature changes or complex scene patterns, our framework also provides a convenient interactive tool to create and modify trimaps. We can then use the high-quality trimap to perform the optimized mean-value cloning. Experimental results demonstrate the effectiveness of our seamless video composition framework.
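The interpolant at the heart of mean-value cloning can be sketched as follows: Floater's mean-value coordinates smoothly spread boundary values (in cloning, the target-minus-source boundary differences) to every interior point of the patch.

```python
import numpy as np

def mean_value_interpolate(x, boundary, values):
    """Interpolate boundary `values` at interior point `x` using
    Floater's mean-value coordinates on a closed polygon.

    Sketch of the interpolant behind mean-value cloning: in cloning,
    `values` are the target-minus-source color differences on the patch
    boundary, and the interpolated field is added to the source patch.
    """
    p = np.asarray(boundary, dtype=np.float64) - np.asarray(x, dtype=np.float64)
    r = np.linalg.norm(p, axis=1)
    n = len(p)
    # angle at x subtended by each boundary edge (p_i, p_{i+1})
    alpha = np.empty(n)
    for i in range(n):
        j = (i + 1) % n
        cos_a = np.clip(p[i] @ p[j] / (r[i] * r[j]), -1.0, 1.0)
        alpha[i] = np.arccos(cos_a)
    # mean-value weight: w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |p_i - x|
    w = np.empty(n)
    for i in range(n):
        w[i] = (np.tan(alpha[i - 1] / 2.0) + np.tan(alpha[i] / 2.0)) / r[i]
    w /= w.sum()
    return w @ np.asarray(values, dtype=np.float64)
```

Because the weights vary smoothly with x, the cloned membrane has no visible seams; the artifacts the abstract targets arise from what the membrane does inside the patch, not from the interpolation itself.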
The Visual Computer | 2006
Zhihong Mao; Lizhuang Ma; Mingxi Zhao; Xuezhong Xiao
Motivated by the impressive results of the SUSAN operator in low-level image processing and its simplicity of use, we extend it to denoise 3D meshes. We use the angle between surface normals to determine the SUSAN area: each point is associated with the SUSAN area that has a continuity feature similar to the point's own. The SUSAN area effectively prevents features from being treated as noise, so the SUSAN operator yields the maximal number of suitable neighbors over which to average, while no neighbors from unrelated regions are involved. Thus, the entire structure can be preserved. We also extend the SUSAN operator to two-ring neighbors via a squared umbrella operator to improve surface smoothness with little loss of detailed features. Details of the SUSAN structure-preserving noise reduction algorithm are discussed in this paper, along with test results.
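The normal-angle selection step can be sketched per vertex as below; the threshold value is an illustrative choice:

```python
import numpy as np

def susan_smooth_vertex(vertex, normal, nbr_pts, nbr_normals, angle_thresh=0.5):
    """Average a vertex with only those neighbors whose normals are
    within `angle_thresh` radians of its own normal.

    Sketch of the SUSAN idea carried to meshes: the normal-angle test
    selects the same-continuity area, so neighbors across a sharp
    feature are excluded and the feature is not smoothed away.
    """
    v = np.asarray(vertex, dtype=np.float64)
    n = np.asarray(normal, dtype=np.float64)
    n = n / np.linalg.norm(n)
    keep = [v]
    for p, pn in zip(np.asarray(nbr_pts, np.float64), np.asarray(nbr_normals, np.float64)):
        pn = pn / np.linalg.norm(pn)
        angle = np.arccos(np.clip(n @ pn, -1.0, 1.0))
        if angle <= angle_thresh:   # same smooth region: include in average
            keep.append(p)
    return np.mean(keep, axis=0)
```

A neighbor lying across an edge has a very different normal, so it fails the test and never drags the vertex toward the other side, which is how the structure survives the averaging.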
Virtual Reality Software and Technology | 2005
Yan Gao; Lizhuang Ma; Zhihua Chen; Xiaomao Wu
In this paper, we present an online algorithm to normalize all motion data in a database to a common skeleton length. Our algorithm is simple and efficient. The input motion stream is processed sequentially, and the computation for a single frame at each step requires only the results from the previous step over a neighborhood of nearby backward frames. In contrast to previous motion retargeting approaches, we simplify the constraint conditions of the retargeting problem, which leads to simpler solutions. Moreover, we improve Shin et al.'s algorithm [10], adopted by Kovar's widely used footskate cleanup algorithm [6], by adding one case it misses.
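The normalization itself can be sketched as a uniform rescaling; this shows only the length-matching step, not the paper's online processing or constraint handling:

```python
import numpy as np

def normalize_skeleton(offsets, root_positions, target_length):
    """Uniformly rescale a skeleton's bone offsets (and root trajectory)
    so its total bone length equals `target_length`.

    Sketch of the normalization step only: joint angles are untouched,
    and scaling offsets together with the root translation preserves the
    pose while changing the skeleton size. The paper's contribution is
    doing this online while keeping constraints such as foot contacts.
    """
    offs = np.asarray(offsets, dtype=np.float64)
    total = np.linalg.norm(offs, axis=1).sum()  # current skeleton length
    s = target_length / total
    return offs * s, np.asarray(root_positions, dtype=np.float64) * s
```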
IEEE Transactions on Circuits and Systems for Video Technology | 2013
Yong Li; Bin Sheng; Lizhuang Ma; Wen Wu; Zhifeng Xie
Saliency detection for images and videos has become increasingly popular due to its wide applicability. In this paper, we present a new method that takes advantage of region-based visual dynamic contrast to generate temporally coherent video saliency maps. The concept of visual dynamics is formulated to represent both the visual and motion variability of video content. Moreover, regions are used as the primitives for saliency computation via spatiotemporal appearance contrasts. Region matching is then performed across successive video frames to form temporally coherent regions, computed on the basis of spatiotemporal similarity in the visual dynamics of the different regions along the optical flow in the video. The region matching can effectively eliminate saliency discontinuities, particularly in areas of oversegmentation that are otherwise highly problematic. The proposed approach is tested on a challenging set of video sequences and is compared with contemporary methods to demonstrate its superior performance in terms of computational efficiency and ability to detect salient video content.
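The region-contrast primitive can be sketched as follows; this shows only the spatial contrast term, while the paper's visual-dynamics model also folds in motion variability and matches regions across frames for temporal coherence:

```python
import numpy as np

def region_saliency(region_features, region_sizes):
    """Saliency of each region as its size-weighted feature contrast
    against all other regions.

    Sketch of region-level contrast: a region is salient when its
    appearance differs from large portions of the rest of the frame.
    """
    f = np.asarray(region_features, dtype=np.float64)
    sz = np.asarray(region_sizes, dtype=np.float64)
    n = len(f)
    sal = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(f - f[i], axis=1)  # contrast to every region
        sal[i] = (sz * d).sum()               # larger regions weigh more
    return sal / sal.max()                    # normalize to [0, 1]
```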
Pattern Recognition Letters | 2007
Dongdong Nie; Qinyong Ma; Lizhuang Ma; Shuangjiu Xiao
This paper presents an optimization-based interactive grayscale-image colorization method. It is interactive in that the only thing the user needs to do is provide some color hints via scribbles or seed pixels. The main contribution of this paper is that the colorization method greatly reduces computation time, with equally good image quality, through quadtree-decomposition-based non-uniform sampling. Moreover, by introducing a new, simple weighting function to represent intensity similarity in the cost function, annoying color diffusion among different regions is alleviated. Experiments show that this method gives colorized images of the same good quality as the method of Levin et al. at a fraction of the computational cost.
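The underlying Levin-style optimization can be sketched in one dimension: color is unknown everywhere except at the hints, and each pixel is constrained to be a weighted average of its neighbors, with weights from intensity similarity. This dense 1-D version is illustrative; the paper accelerates the 2-D problem with quadtree-based non-uniform sampling and a simpler weighting function.

```python
import numpy as np

def colorize_1d(intensity, hints, sigma=0.1):
    """Propagate sparse color `hints` {index: value} along a 1-D
    intensity signal by solving (I - W) U = 0 away from the hints,
    where W holds intensity-similarity weights, and hints are clamped.
    """
    y = np.asarray(intensity, dtype=np.float64)
    n = len(y)
    W = np.zeros((n, n))
    for r in range(n):
        for s in (r - 1, r + 1):       # 1-D neighborhood
            if 0 <= s < n:
                # similar intensities -> large weight -> color flows
                W[r, s] = np.exp(-(y[r] - y[s]) ** 2 / (2 * sigma ** 2))
        W[r] /= W[r].sum()             # normalize weights per pixel
    A = np.eye(n) - W
    b = np.zeros(n)
    for idx, val in hints.items():     # clamp hinted pixels
        A[idx] = 0.0
        A[idx, idx] = 1.0
        b[idx] = val
    return np.linalg.solve(A, b)
```

On a constant-intensity signal the weights are uniform, so the color interpolates linearly between the hints; an intensity edge shrinks the weight across it, which is what keeps color from diffusing between regions.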