Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Changqing Zou is active.

Publication


Featured research published by Changqing Zou.


Eurographics | 2014

Extended large scale sketch-based 3D shape retrieval

Bo Li; Yijuan Lu; C. Li; Afzal Godil; Tobias Schreck; Masaki Aono; Martin Burtscher; Hongbo Fu; Takahiko Furuya; Henry Johan; Jianzhuang Liu; Ryutarou Ohbuchi; Atsushi Tatsuma; Changqing Zou

Large scale sketch-based 3D shape retrieval has received increasing attention in the content-based 3D object retrieval community. The objective of this track is to evaluate the performance of different sketch-based 3D model retrieval algorithms using a large scale hand-drawn sketch query dataset against a comprehensive 3D model dataset. The benchmark contains 12,680 sketches and 8,987 3D models, divided into 171 distinct classes. In this track, 12 runs were submitted by 4 groups, and their retrieval performance was evaluated using 7 commonly used retrieval performance metrics. We hope that this benchmark, the comparative evaluation results, and the corresponding evaluation code will further promote progress in this research direction within the 3D model retrieval community.
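
The abstract does not list the seven metrics; as a point of reference, the following is a minimal Python sketch of three measures that are standard in SHREC-style retrieval evaluations (Nearest Neighbor, First Tier, Second Tier). It is not the track's official evaluation code, and the function names and toy ranked list are purely illustrative.

```python
# Minimal sketch (not the track's official evaluation code) of three retrieval
# metrics commonly used in SHREC-style benchmarks. `ranked_labels` is the class
# label of each retrieved model in rank order, `query_label` is the query
# sketch's class, and `class_size` is the number of relevant models for that class.

def nearest_neighbor(ranked_labels, query_label):
    """1 if the top-ranked model matches the query class, else 0."""
    return 1.0 if ranked_labels and ranked_labels[0] == query_label else 0.0

def tier_recall(ranked_labels, query_label, class_size, tiers=1):
    """Fraction of relevant models found in the top (tiers * class_size) results."""
    cutoff = tiers * class_size
    hits = sum(1 for lbl in ranked_labels[:cutoff] if lbl == query_label)
    return hits / class_size

# Toy example with a ranked retrieval list:
ranked = ["chair", "table", "chair", "chair", "lamp", "chair"]
print(nearest_neighbor(ranked, "chair"))          # NN = 1.0
print(tier_recall(ranked, "chair", 4, tiers=1))   # First Tier
print(tier_recall(ranked, "chair", 4, tiers=2))   # Second Tier
```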


IEEE Transactions on Visualization and Computer Graphics | 2015

Progressive 3D Reconstruction of Planar-Faced Manifold Objects with DRF-Based Line Drawing Decomposition

Changqing Zou; Shifeng Chen; Hongbo Fu; Jianzhuang Liu

This paper presents an approach for reconstructing polyhedral objects from single-view line drawings. Our approach separates a complex line drawing representing a manifold object into a series of simpler line drawings, based on the degree of reconstruction freedom (DRF). We then progressively reconstruct a complete 3D model from these simpler line drawings. Our experiments show that our decomposition algorithm is able to handle complex drawings which are challenging for the state of the art. The advantages of the presented progressive 3D reconstruction method over existing reconstruction methods in terms of both robustness and efficiency are also demonstrated.


IEEE Signal Processing Letters | 2014

Face Sketch Landmarks Localization in the Wild

Heng Yang; Changqing Zou; Ioannis Patras

In this letter, we propose a method for facial landmark localization in face sketch images. As recent approaches and the corresponding datasets are designed for ordinary face photos, the performance of such models drops significantly when they are applied to face sketch images. We first propose a scheme to synthesize face sketches from face photos based on random-forest edge detection and local face region enhancement. We then jointly train a Cascaded Pose Regression based method for facial landmark localization on both face photos and sketches. We build an evaluation dataset, called Face Sketches in the Wild (FSW), with 450 face sketch images collected from the Internet, each manually annotated with 68 facial landmark locations. The proposed multi-modality facial landmark localization method shows competitive performance on both face sketch images (the FSW dataset) and face photo images (the Labeled Face Parts in the Wild dataset), despite the fact that we do not use extra annotation of face sketches for model building.
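
To illustrate only the sketch-synthesis idea, the hedged Python sketch below substitutes a generic Canny edge detector for the paper's random-forest edge detection and local face-region enhancement, and shows how photo landmark annotations can be reused for the synthesized pseudo-sketches. Function names and parameters are assumptions, not from the paper.

```python
import cv2
import numpy as np

# Illustrative stand-in only: the paper uses random-forest edge detection with
# local face-region enhancement; Canny is used here purely to show the idea of
# turning annotated face photos into sketch-like training images.

def photo_to_pseudo_sketch(photo_bgr):
    gray = cv2.cvtColor(photo_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    return 255 - edges  # dark strokes on a white background, like a sketch

def build_multimodal_training_set(photos, landmarks):
    """Pool original photos and synthesized pseudo-sketches, sharing landmarks."""
    images, targets = [], []
    for img, pts in zip(photos, landmarks):
        images.append(img)
        images.append(cv2.cvtColor(photo_to_pseudo_sketch(img), cv2.COLOR_GRAY2BGR))
        targets.extend([pts, pts])  # landmark annotations transfer unchanged
    return images, targets
```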


International Conference on Computer Graphics and Interactive Techniques | 2016

Action-driven 3D indoor scene evolution

Rui Ma; Honghua Li; Changqing Zou; Zicheng Liao; Xin Tong; Hao Zhang

We introduce a framework for action-driven evolution of 3D indoor scenes, where the goal is to simulate how scenes are altered by human actions, and specifically, by object placements necessitated by the actions. To this end, we develop an action model in which each type of action combines information about one or more human poses, one or more object categories, and spatial configurations of objects belonging to these categories, which together summarize the object-object and object-human relations for the action. Importantly, all these pieces of information are learned from annotated photos. Correlations between the learned actions are analyzed to guide the construction of an action graph. Starting with an initial 3D scene, we probabilistically sample a sequence of actions from the action graph to drive progressive scene evolution. Each action triggers appropriate object placements, based on object co-occurrences and spatial configurations learned for the action model. We show results of our scene evolution that lead to realistic, messy 3D scenes, as well as quantitative evaluations via user studies that compare our method to manual scene creation and state-of-the-art, data-driven methods, in terms of scene plausibility and naturalness.
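
To make the sampling step concrete, here is a minimal Python sketch of a probabilistic walk over a toy action graph. The actions and edge weights are invented for illustration; they are not the correlations learned in the paper.

```python
import random

# Toy action graph: nodes are actions, edge weights stand in for learned
# correlations between actions; the values below are made up for illustration.
action_graph = {
    "work_at_desk": {"drink_coffee": 0.6, "read_book": 0.4},
    "drink_coffee": {"work_at_desk": 0.7, "read_book": 0.3},
    "read_book":    {"work_at_desk": 0.5, "drink_coffee": 0.5},
}

def sample_action_sequence(graph, start, length, seed=None):
    """Random walk on the action graph, choosing successors by edge weight."""
    rng = random.Random(seed)
    sequence, current = [start], start
    for _ in range(length - 1):
        successors, weights = zip(*graph[current].items())
        current = rng.choices(successors, weights=weights, k=1)[0]
        sequence.append(current)
    return sequence

# Each sampled action would then trigger object placements in the scene.
print(sample_action_sequence(action_graph, "work_at_desk", 5, seed=0))
```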


Computers & Graphics | 2015

Sketch-based 3-D modeling for piecewise planar objects in single images

Changqing Zou; Xiaojiang Peng; Hao Lv; Shifeng Chen; Hongbo Fu; Jianzhuang Liu

3-D object modeling from single images has many applications in computer graphics and multimedia. Most previous 3-D modeling methods which directly recover 3-D geometry from single images require user interactions during the whole modeling process. In this paper, we propose a semi-automatic 3-D modeling approach to recover accurate 3-D geometry from a single image of a piecewise planar object with less user interaction. Our approach concentrates on three aspects: (1) requiring only rough sketch input, (2) accurate modeling for a large class of objects, and (3) automatically recovering the invisible part of an object to provide a complete 3-D model. Experimental results on various objects show that the proposed approach provides a good solution to these three problems.

Graphical abstract: We propose a sketch-based planar object modeling system. The flow chart of our approach proceeds through the following steps: (1) the original image; (2) roughly sketched vertices and edges for the visible part of the desired object; (3) the automatically generated 2-D initial wireframe W_2di, in which some vertices have no precise locations (highlighted with red stars); (4) precise reconstruction of the visible 3-D wireframe M^v_3dw from W_2di; (5) inference of the hidden 3-D geometry (marked with dotted lines); (6) the complete 3-D object.

Highlights: We reconstruct a complete object from a user-drawn sketch depicting its visible part. The input sketch consists of two elements: sketch vertices and sketch lines. The 3-D reconstruction process in our system is semi-automatic. The system can produce a precise planar object.


Computer Vision and Pattern Recognition | 2014

Separation of Line Drawings Based on Split Faces for 3D Object Reconstruction

Changqing Zou; Heng Yang; Jianzhuang Liu

Reconstructing 3D objects from single line drawings is often desirable in computer vision and graphics applications. If the line drawing of a complex 3D object is decomposed into primitives of simple shape, the object can be easily reconstructed. We propose an effective method to perform this line drawing separation and turn a complex line drawing into parametric 3D models. This is achieved by recursively separating the line drawing using two types of split faces. Our experiments show that the proposed separation method generates more basic and simpler line drawings, and that its combination with example-based reconstruction can robustly recover a wider range of complex parametric 3D objects than previous methods.


International Conference on Computer Graphics and Interactive Techniques | 2016

Legible compact calligrams

Changqing Zou; Junjie Cao; Warunika Ranaweera; Ibraheem Alhashim; Ping Tan; Alla Sheffer; Hao Zhang

A calligram is an arrangement of words or letters that creates a visual image, and a compact calligram fits one word into a 2D shape. We introduce a fully automatic method for the generation of legible compact calligrams which provides a balance between conveying the input shape, legibility, and aesthetics. Our method has three key elements: a path generation step which computes a global layout path suitable for embedding the input word; an alignment step to place the letters so as to achieve feature alignment between letter and shape protrusions while maintaining word legibility; and a final deformation step which deforms the letters to fit the shape while balancing fit against letter legibility. As letter legibility is critical to the quality of compact calligrams, we conduct a large-scale crowd-sourced study on the impact of different letter deformations on legibility and use the results to train a letter legibility measure which guides the letter deformation. We show automatically generated calligrams on an extensive set of word-image combinations. The legibility and overall quality of the calligrams are evaluated and compared, via user studies, to those produced by human creators, including a professional artist, and existing works.


The Visual Computer | 2016

Mesh saliency detection via double absorbing Markov chain in feature space

Xiuping Liu; Pingping Tao; Junjie Cao; He Chen; Changqing Zou

We propose a mesh saliency detection approach using an absorbing Markov chain. Unlike most existing methods based on a center-surround operator, our method employs feature variance to obtain insignificant regions and considers both background and foreground cues. First, we partition an input mesh into a set of segments using the Ncuts algorithm, and each segment is then over-segmented into patches based on Zernike coefficients; some background patches are selected by computing feature variance within the segments. Second, the absorbed time of each node is calculated via an absorbing Markov chain with the background patches as absorbing nodes, which gives a preliminary saliency measure. Third, a refined saliency result is generated in a similar way but with foreground nodes extracted from the preliminary saliency map as absorbing nodes, which inhibits the background and efficiently enhances salient foreground regions. Finally, a Laplacian-based smoothing procedure is used to spread the patch saliency to each vertex. Experimental results demonstrate that our scheme performs competitively against state-of-the-art approaches.
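
The absorbed time the abstract refers to has a standard closed form: with Q the transient-to-transient block of the chain's transition matrix, the expected number of steps before absorption is t = (I - Q)^{-1} 1. Below is a minimal numpy sketch under that standard formulation, not the authors' code; the transition matrix and patch labels are toy placeholders.

```python
import numpy as np

# Minimal sketch of the absorbed-time computation for an absorbing Markov chain.
# P is a row-stochastic transition matrix over all patches; `absorbing` marks the
# background (absorbing) patches. Larger absorbed time means the patch is harder
# to reach from the background, i.e., more salient in this formulation.

def absorbed_time(P, absorbing):
    absorbing = np.asarray(absorbing, dtype=bool)
    transient = ~absorbing
    Q = P[np.ix_(transient, transient)]         # transitions among transient patches
    N = np.linalg.inv(np.eye(Q.shape[0]) - Q)   # fundamental matrix (I - Q)^{-1}
    t = np.zeros(P.shape[0])
    t[transient] = N @ np.ones(Q.shape[0])      # expected steps to absorption
    return t

# Tiny example: 4 patches, patch 3 is a background (absorbing) patch.
P = np.array([
    [0.1, 0.6, 0.2, 0.1],
    [0.5, 0.1, 0.3, 0.1],
    [0.2, 0.3, 0.1, 0.4],
    [0.0, 0.0, 0.0, 1.0],
])
print(absorbed_time(P, [False, False, False, True]))
```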


Pattern Recognition | 2018

Multi-modal feature fusion for geographic image annotation

Ke Li; Changqing Zou; Shuhui Bu; Yun Liang; Jian Zhang; Minglun Gong

This paper presents a multi-modal feature fusion based framework to improve geographic image annotation. To achieve effective representations of geographic images, the method leverages a low-to-high learning flow for both the deep and shallow modality features. It first extracts low-level features for each input image pixel, such as shallow modality features (SIFT, Color, and LBP) and deep modality features (CNNs). It then constructs mid-level features for each superpixel from the low-level features. Finally, it harvests high-level features from the mid-level features using deep belief networks (DBNs). A restricted Boltzmann machine (RBM) is used to mine deep correlations between the high-level features of the shallow and deep modalities to achieve a final representation for geographic images. Comprehensive experiments show that this feature fusion based method achieves much better performance than traditional methods.
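
A minimal sketch of the fusion step might look like the following, with scikit-learn's BernoulliRBM standing in for the joint RBM. The per-modality DBNs and feature extractors that would produce the shallow and deep features are not reproduced; the arrays here are random placeholders with assumed dimensions.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import MinMaxScaler

# Sketch of the fusion step only: per-superpixel high-level features from the
# shallow modality (derived from SIFT/Color/LBP) and the deep modality (CNN)
# are concatenated and fed to an RBM that learns a joint representation.

rng = np.random.default_rng(0)
shallow_feats = rng.random((500, 64))   # placeholder: 500 superpixels x 64 dims
deep_feats = rng.random((500, 128))     # placeholder: 500 superpixels x 128 dims

fused_input = MinMaxScaler().fit_transform(
    np.hstack([shallow_feats, deep_feats])
)
rbm = BernoulliRBM(n_components=96, learning_rate=0.05, n_iter=20, random_state=0)
joint_representation = rbm.fit_transform(fused_input)  # features for the annotator
print(joint_representation.shape)  # (500, 96)
```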


International Conference on Computer Graphics and Interactive Techniques | 2017

Learning to group discrete graphical patterns

Zhaoliang Lun; Changqing Zou; Haibin Huang; Evangelos Kalogerakis; Ping Tan; Marie-Paule Cani; Hao Zhang

We introduce a deep learning approach for grouping discrete patterns common in graphical designs. Our approach is based on a convolutional neural network architecture that learns a grouping measure defined over a pair of pattern elements. Motivated by perceptual grouping principles, the key feature of our network is the encoding of element shape, context, symmetries, and structural arrangements. These element properties are all jointly considered and appropriately weighted in our grouping measure. To better align our measure with human perceptions for grouping, we train our network on a large, human-annotated dataset of pattern groupings consisting of patterns at varying granularity levels, with rich element relations and varieties, and tempered with noise and other data imperfections. Experimental results demonstrate that our deep-learned measure leads to robust grouping results.
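
As a rough illustration of a grouping measure defined over a pair of elements, the PyTorch sketch below scores whether two rasterized pattern elements belong together using a shared encoder and a small prediction head. The architecture, input sizes, and layer widths are assumptions for illustration, not the paper's network.

```python
import torch
import torch.nn as nn

# Illustrative pairwise grouping measure: a shared CNN encodes each rasterized
# pattern element, and an MLP maps the concatenated codes to a grouping
# probability. Context, symmetry, and arrangement cues from the paper could be
# supplied as extra input channels; they are omitted here.

class PairGroupingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),     # -> 32-dim code per element
        )
        self.head = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),            # grouping probability
        )

    def forward(self, elem_a, elem_b):
        code = torch.cat([self.encoder(elem_a), self.encoder(elem_b)], dim=1)
        return self.head(code)

net = PairGroupingNet()
a, b = torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64)
print(net(a, b).shape)  # torch.Size([8, 1])
```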

Collaboration


Dive into Changqing Zou's collaborations.

Top Co-Authors

Hongbo Fu, City University of Hong Kong
Hao Zhang, Simon Fraser University
Shifeng Chen, Chinese Academy of Sciences
Ping Tan, Simon Fraser University
Junjie Cao, Dalian University of Technology
He Chen, Dalian University of Technology
Honghua Li, National University of Defense Technology