Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Cheng-Chieh Chiang is active.

Publication


Featured research published by Cheng-Chieh Chiang.


Journal of Visual Communication and Image Representation | 2009

Region-based image retrieval using color-size features of watershed regions

Cheng-Chieh Chiang; Yi-Ping Hung; Hsuan Yang; Greg C. Lee

This paper presents a region-based image retrieval system that provides a user interface for specifying the watershed regions of interest within a query image. We first propose a new type of visual feature, called the color-size feature, which includes the color-size histogram and color-size moments, to integrate the color and region-size information of watershed regions. Next, we design a region-filtering scheme based on the color-size histogram to quickly screen out the most irrelevant regions and images as a preprocessing step for retrieval. Our region-based image retrieval system applies the Earth Mover's Distance in the design of the similarity measure for image ranking and matching. Finally, we present experiments on the color-size feature, region filtering, and retrieval results that demonstrate the efficiency of the proposed system.
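As a concrete illustration of the distance mentioned above, the sketch below compares two toy 1-D colour histograms with the Earth Mover's Distance via SciPy's `wasserstein_distance`. The bins, the histograms, and the 1-D simplification are illustrative assumptions, not the paper's actual color-size features.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical 8-bin colour histograms for two watershed regions.
bins = np.arange(8)  # bin centres along a single colour axis
hist_a = np.array([4, 3, 1, 0, 0, 0, 0, 0], dtype=float)
hist_b = np.array([0, 0, 0, 0, 1, 3, 4, 0], dtype=float)

# Earth Mover's Distance: the minimal "work" (mass times distance)
# needed to morph one normalised histogram into the other.
emd = wasserstein_distance(bins, bins,
                           hist_a / hist_a.sum(),
                           hist_b / hist_b.sum())
print(emd)
```

Because the two toy histograms occupy disjoint bins, the distance here reduces to the gap between their mean bin positions; in the paper this kind of measure is applied to richer region features.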


Multimedia Tools and Applications | 2015

Quick browsing and retrieval for surveillance videos

Cheng-Chieh Chiang; Huei-Fang Yang

Searching for specific targets in surveillance videos requires enormous manual effort because a surveillance system usually generates huge volumes of video data. To alleviate the burden of human analysis, a system that helps users quickly look for targets of interest is in high demand. In this paper, we propose a browsing and retrieval system that lets users quickly locate desired targets in surveillance videos. Our basic idea is to collect all moving objects, which carry the most significant information in surveillance videos, and construct a corresponding compact video. The temporal coordinates of the moving objects are rearranged in the compact video, increasing its compactness, while the order in which the objects appear is kept to preserve the essential activities of the original surveillance video. Using our system, users spend only a few minutes watching the compact video instead of hours monitoring a long surveillance video. We conducted experiments demonstrating that the proposed system helps users quickly find specific targets in surveillance videos.
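A minimal sketch of the temporal-rearrangement idea, under the simplifying assumption that each object track is simply restarted as early as possible with a fixed stagger while the original appearance order is preserved. The function `compact_schedule` and its parameters are hypothetical illustrations, not the paper's algorithm.

```python
def compact_schedule(tracks, stagger=5):
    """Greedy rearrangement sketch.

    tracks: list of (original_start, duration) tuples, sorted by
    original appearance time. Each object is restarted near t = 0,
    staggered so that the original appearance order is preserved
    while objects that were minutes apart now play back together.
    Returns the new start time of each track.
    """
    new_starts, t = [], 0
    for _, duration in tracks:
        new_starts.append(t)
        t += stagger  # keep appearance order; playback overlaps heavily
    return new_starts

# Three objects originally appearing at t = 0 s, 600 s, and 3000 s:
tracks = [(0, 30), (600, 20), (3000, 40)]
starts = compact_schedule(tracks)
compact_length = max(s + d for s, (_, d) in zip(starts, tracks))
print(starts, compact_length)  # compact video is far shorter than 3040 s
```

The compact video's length is just the latest new end time, so an hour of sparse footage collapses to roughly the longest track plus the accumulated stagger.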


Computer Standards & Interfaces | 2013

Interactive tool for image annotation using a semi-supervised and hierarchical approach

Cheng-Chieh Chiang

This paper presents a semi-automatic tool, called IGAnn (Interactive image ANNotation), that assists users in annotating images with textual labels. IGAnn performs an interactive, retrieval-like procedure: the system presents the user with images that have higher confidence values, and the user then determines which images are actually relevant or irrelevant to a specified label. By collecting the relevant and irrelevant images across iterations, a hierarchical classifier associated with the specified label is built using our proposed semi-supervised approach to compute confidence values for unlabeled images. This paper describes the system interface of IGAnn and also reports quantitative experiments on our proposed approach.


Conference on Image and Video Retrieval | 2005

Region filtering using color and texture features for image retrieval

Cheng-Chieh Chiang; Ming Han Hsieh; Yi-Ping Hung; Greg C. Lee

This paper presents a region-based image retrieval (RBIR) system in which users can choose specific regions as the query. Our goal is to help the user formulate more precise queries so that the retrieval system can focus on the part the user is interested in. In this work, images are partitioned into a set of regions using watershed segmentation, and a color-size histogram and Gabor texture features are extracted from each watershed region. We propose a region-filtering scheme based on individual features, rather than on an integration of different features, to reduce the computational load of retrieval. This paper also defines the dissimilarity measure between images, so that relevance feedback can be used to improve retrieval. Finally, we describe experimental results of our RBIR system.


ACM Multimedia | 2008

Localization and mapping of surveillance cameras in city map

Wee Kheng Leow; Cheng-Chieh Chiang; Yi-Ping Hung

Many large cities have installed surveillance cameras to monitor human activities for security purposes. An important surveillance application is to track the motion of an object of interest, e.g., a car or a human, using one or more cameras, and to plot the motion path in a city map. To achieve this goal, it is necessary to localize the cameras in the city map and to determine the correspondence mappings between positions in the city map and the camera views. Since the view of the city map is roughly orthogonal to the camera views, there are very few common features between the two views for a computer vision algorithm to identify corresponding points automatically. This paper proposes a method for camera localization and position mapping that requires minimal user input. Given approximate corresponding points between the city map and a camera view identified by a user, the method computes the orientation and position of the camera in the city map and determines the mapping between positions in the city map and the camera view, obtaining best-fit solutions even when the user-specified correspondences are inaccurate. The performance of the method is assessed in both quantitative tests and a practical application. Quantitative results show that the method is accurate and robust in camera localization and position mapping, and the application results are very encouraging, showing the usefulness of the method in real applications.
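A common model for a map-to-camera-view ground-plane mapping is a planar homography fitted from user-supplied point correspondences; the sketch below fits one with the standard Direct Linear Transform. This model choice is an assumption for illustration, not necessarily the paper's exact formulation, and the toy points are synthetic.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: fit a 3x3 homography H with dst ~ H @ src
    from >= 4 point correspondences, via the SVD null vector."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)  # smallest singular vector, up to scale

def apply_h(h, pt):
    """Map a 2-D point through a homography (homogeneous normalisation)."""
    p = h @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Toy check: recover a known projective map from four correspondences,
# as if a user had clicked four matching points on the map and the view.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
true_h = np.array([[2.0, 0.1, 3.0],
                   [0.0, 1.5, 1.0],
                   [0.001, 0.0, 1.0]])
dst = [apply_h(true_h, p) for p in src]
h = fit_homography(src, dst)
mapped = apply_h(h, (0.5, 0.5))
```

In practice the user-specified correspondences are noisy, so a robust or least-squares fit over more than four points would be used rather than this exact four-point solve.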


EURASIP Journal on Advances in Signal Processing | 2007

Content-based object movie retrieval and relevance feedbacks

Cheng-Chieh Chiang; Li Wei Chan; Yi-Ping Hung; Greg C. Lee

An object movie is a set of images captured from different perspectives around a 3D object. It provides a good representation of a physical object because it offers a 3D interactive viewing effect without requiring 3D model reconstruction. In this paper, we propose an efficient approach to content-based object movie retrieval. To retrieve the desired object movie from the database, we first map an object movie to a sampling of a manifold in the feature space. Two layers of feature descriptors, dense and condensed, are designed to sample the manifold for representing object movies. Based on these descriptors, we define the dissimilarity measure between the query and the targets in the object movie database, where the query can be either an entire object movie or simply a subset of views. We further design a relevance feedback approach to improve the retrieved results. Finally, experimental results are presented to show the efficacy of our approach.


EURASIP Journal on Advances in Signal Processing | 2007

A learning state-space model for image retrieval

Cheng-Chieh Chiang; Yi-Ping Hung; Greg C. Lee

This paper proposes an approach based on a state-space model for learning user concepts in image retrieval. We first design a scheme of region-based image representation built on concept units, which are integrated with different types of feature spaces and with different region scales of image segmentation. The concept units aim to describe characteristics shared, from a given perspective, by relevant images. We present the details of our state-space model for interactive image retrieval, including the likelihood and transition models, and describe experiments that show the efficacy of the proposed model. This work demonstrates the feasibility of using a state-space model to estimate user intuition in image retrieval.


Ninth International Conference on Information Visualisation (IV'05) | 2005

Visualization for high-dimensional data: VisHD

Cheng Chih Yang; Cheng-Chieh Chiang; Yi-Ping Hung; Greg C. Lee

This paper presents a visualization tool, VisHD, that visualizes the spatial distribution of vector points in a high-dimensional feature space. Handling high-dimensional information is important in many areas of computer science. VisHD provides several dimension-reduction methods for mapping data from a high-dimensional space to a low-dimensional one. The system then builds intuitive visualizations for observing the characteristics of the data set, whether or not the data carry pre-defined labels. In addition, several useful functions have been implemented to facilitate the visualization. Finally, the paper presents experiments and discussion showing the abilities of VisHD for visualizing high-dimensional data.
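As one example of the kind of dimension reduction such a tool relies on, the sketch below projects 64-D points to 2-D with PCA computed via an SVD. VisHD offers several reduction methods; PCA here, along with the random toy data, is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 64))      # 200 points in a 64-D feature space

Xc = X - X.mean(axis=0)             # centre the data
# Principal axes = right singular vectors of the centred data matrix.
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ vt[:2].T                  # project onto the top-2 axes
print(X2.shape)                     # (200, 2) -- ready for a scatter plot
```

The 2-D coordinates in `X2` are what a tool like VisHD would hand to its plotting layer, optionally coloured by the points' pre-defined labels when those exist.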


Journal of Multimedia | 2014

Multi-Pose Face Detection and Tracking Using Condensation

Cheng-Chieh Chiang; Kai-Ming Wang; Greg C. Lee

Automatically locating face areas can advance applications in both images and videos. This paper proposes a video-based approach to face detection and tracking in an indoor environment that determines where face areas appear in video sequences. Our approach involves four main modules: an initialization module that sets all configurations, a Condensation module for face tracking, a template module that provides the observation measurements for Condensation, and a correction module that recovers the tracking when the tracked face has been lost. We adapted the Condensation algorithm to the face tracking problem and designed a checklist scheme for the template module that records the most significant templates of the tracked face poses. We also performed experiments demonstrating the performance and robustness of the proposed approach for face detection and tracking.
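Condensation is a particle filter that iterates resampling, prediction, and measurement. The sketch below runs that loop on a toy 2-D tracking problem; the Gaussian likelihood, random-walk motion model, and all parameters are illustrative assumptions, standing in for the paper's template-based observation model.

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, observe, motion_std=2.0):
    """One Condensation iteration over candidate face positions.

    particles : (N, 2) array of (x, y) hypotheses
    observe   : callable mapping positions to observation likelihoods
    """
    n = len(particles)
    # 1. Resample particles in proportion to their previous weights.
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # 2. Predict: propagate each particle through a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # 3. Measure: re-weight by the observation likelihood and normalise.
    weights = observe(particles)
    weights = weights / weights.sum()
    return particles, weights

# Toy run: a "face" sits at (50, 50); likelihood decays with distance,
# playing the role the template match plays in the actual system.
true_pos = np.array([50.0, 50.0])
obs = lambda p: np.exp(-np.linalg.norm(p - true_pos, axis=1) ** 2 / 200.0)

particles = rng.uniform(0, 100, size=(500, 2))
weights = np.ones(500) / 500
for _ in range(20):
    particles, weights = condensation_step(particles, weights, obs)

estimate = (particles * weights[:, None]).sum(axis=0)
print(estimate)  # weighted mean should lie close to (50, 50)
```

The paper's correction module would re-seed such a particle set when the weights collapse, i.e., when the tracked face is lost.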


Multimedia Tools and Applications | 2012

Probabilistic semantic component descriptor

Cheng-Chieh Chiang; Jia Wei Wu; Greg C. Lee

This paper proposes the probabilistic semantic component descriptor (PSCD) for automatically extracting semantic information from a set of images. The basic idea of the PSCD is first to identify what kinds of hidden semantic concepts are associated with regions in a set of images, and then to construct an image-level descriptor by integrating the hidden concepts of the regions in an image. First, the low-level features of the regions are quantized into a set of visual words. The visual words representing region features and the high-level concepts hidden in the images are then linked using probabilistic latent semantic analysis, an unsupervised method. Because this linkage is built over the entire image set, a set of hidden concepts describing each region is extracted. Next, regions with unreliable concepts are eliminated, and a PSCD for each image is constructed by propagating the probabilities of the hidden concepts in the remaining regions. We also present quantitative experiments demonstrating the performance of the proposed PSCD.
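Probabilistic latent semantic analysis links visual words and images through hidden topics learned by EM. The sketch below implements a minimal pLSA on a toy visual-word/image count matrix; it illustrates only the topic-extraction step, not the paper's descriptor construction, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def plsa(counts, n_topics, n_iter=100):
    """Minimal pLSA via EM on a word-by-document count matrix.

    counts : (n_words, n_docs) co-occurrences (visual word x image here)
    Returns p(word|topic) and p(topic|doc).
    """
    n_words, n_docs = counts.shape
    p_w_z = rng.random((n_words, n_topics))            # p(w|z)
    p_w_z /= p_w_z.sum(axis=0)
    p_z_d = rng.random((n_topics, n_docs))             # p(z|d)
    p_z_d /= p_z_d.sum(axis=0)
    for _ in range(n_iter):
        # E-step: posterior p(z|w,d) for every word-document pair.
        joint = p_w_z[:, :, None] * p_z_d[None, :, :]  # shape (w, z, d)
        post = joint / joint.sum(axis=1, keepdims=True)
        # M-step: re-estimate both factors from expected counts.
        expected = counts[:, None, :] * post           # shape (w, z, d)
        p_w_z = expected.sum(axis=2)
        p_w_z /= p_w_z.sum(axis=0)
        p_z_d = expected.sum(axis=0)
        p_z_d /= p_z_d.sum(axis=0)
    return p_w_z, p_z_d

# Toy corpus: four visual words, four images, two hidden concepts
# (the first two words co-occur in the first two images, and vice versa).
counts = np.array([[9, 8, 0, 1],
                   [8, 9, 1, 0],
                   [0, 1, 9, 8],
                   [1, 0, 8, 9]], dtype=float)
p_w_z, p_z_d = plsa(counts, n_topics=2)
print(np.round(p_z_d, 2))  # topic mixture per image
```

The columns of `p_z_d` are per-image concept mixtures; the PSCD builds on such region-level concept probabilities, keeping only the reliable ones.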

Collaboration


Dive into Cheng-Chieh Chiang's collaboration.

Top Co-Authors

Greg C. Lee, National Taiwan Normal University
Yi-Ping Hung, National Taiwan University
Jia Wei Wu, National Taiwan Normal University
Kai-Ming Wang, National Taiwan Normal University
Wee Kheng Leow, National University of Singapore
Cheng Chih Yang, National Taiwan University
Fu-Hao Yeh, National Taiwan Normal University
Hsuan Yang, National Taiwan University
Li Wei Chan, National Taiwan University