
Publication


Featured research published by Chih-Wen Su.


IEEE Transactions on Multimedia | 2007

Motion Flow-Based Video Retrieval

Chih-Wen Su; Hong-Yuan Mark Liao; Hsiao-Rong Tyan; Chia-Wen Lin; Duan-Yu Chen; Kuo-Chin Fan

In this paper, we propose the use of motion vectors embedded in MPEG bitstreams to generate so-called “motion flows”, which are applied to perform video retrieval. By using the motion vectors directly, we do not need to consider the shape of a moving object and its corresponding trajectory. Instead, we simply “link” the local motion vectors across consecutive video frames to form motion flows, which are then recorded and stored in a video database. In the video retrieval phase, we propose a new matching strategy to execute the retrieval task. Motions that do not belong to the mainstream motion flows are filtered out by our algorithm. The retrieval process can be triggered by query-by-sketch or query-by-example. Experimental results show that our method performs very well in video retrieval.
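The linking step described above can be sketched as follows. This is a simplified illustration, not the paper's implementation: the dense per-block vector fields, the function name, and the minimum-length threshold are all assumptions.

```python
import numpy as np

def link_motion_flows(vector_fields, min_length=3):
    """Chain each block's motion vector to the block it points to in the
    next frame, forming trajectories ("motion flows").

    vector_fields: list of (H, W, 2) arrays, one per frame transition,
    holding (dy, dx) block motion vectors.
    """
    h, w, _ = vector_fields[0].shape
    flows = []
    for y in range(h):
        for x in range(w):
            path = [(y, x)]
            cy, cx = y, x
            for field in vector_fields:
                dy, dx = field[cy, cx]
                cy, cx = cy + int(dy), cx + int(dx)
                if not (0 <= cy < h and 0 <= cx < w):
                    break  # flow leaves the frame
                path.append((cy, cx))
            if len(path) >= min_length:
                flows.append(path)
    return flows
```

With a uniform rightward motion field, every block near the left edge yields a flow that drifts right until it exits the frame.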


IEEE Transactions on Multimedia | 2011

Video Inpainting on Digitized Vintage Films via Maintaining Spatiotemporal Continuity

Nick C. Tang; Chiou-Ting Hsu; Chih-Wen Su; Timothy K. Shih; Hong-Yuan Mark Liao

Video inpainting is an important video enhancement technique used to facilitate the repair or editing of digital videos. It has been employed worldwide to transform cultural artifacts such as vintage videos/films into digital formats. However, the quality of such videos is usually very poor, and they often contain unstable luminance and damaged content. In this paper, we propose a video inpainting algorithm for repairing damaged content in digitized vintage films, focusing on maintaining good spatiotemporal continuity. The proposed algorithm utilizes two key techniques: motion completion, which recovers missing motion information in damaged areas to maintain good temporal continuity, and frame completion, which repairs damaged frames to produce a visually pleasing video with good spatial continuity and stabilized luminance. We demonstrate the efficacy of the algorithm on different types of video clips.
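One small aspect mentioned above, stabilizing the flickering luminance of digitized vintage film, can be illustrated with a minimal sketch. This is a hypothetical normalization step, not the authors' algorithm:

```python
import numpy as np

def stabilize_luminance(frames, target=None):
    """Scale each frame so its mean luminance matches a common target,
    damping the frame-to-frame flicker typical of digitized vintage film."""
    means = [f.mean() for f in frames]
    if target is None:
        target = float(np.median(means))  # robust common brightness level
    return [np.clip(f * (target / m), 0, 255) for f, m in zip(frames, means)]
```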


International Conference on Multimedia and Expo | 2002

A motion-tolerant dissolve detection algorithm

Chih-Wen Su; Hsiao-Rong Tyan; Hong-Yuan Mark Liao; Liang-Hua Chen

Gradual shot change detection is one of the most important research issues in the field of video indexing/retrieval. Among the numerous types of gradual transitions, the dissolve is considered the most common, but also the most difficult to detect. It is well known that an efficient dissolve detection algorithm that works on real videos is still lacking. In this paper, we present a novel dissolve detection algorithm that can efficiently detect dissolves of different durations. In addition, our algorithm discriminates a real dissolve from global motions caused by camera movement and local motions caused by object movement. Experimental results show that the new method is effective.
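A dissolve classically produces a dip-and-recover shape in the frame-intensity variance curve, which motivates detectors like the one above. A toy detector built on that observation might look like this (illustrative only; the threshold and function name are assumptions, and the paper's actual algorithm is more elaborate):

```python
def dissolve_candidates(variance, drop=0.7):
    """Return indices of local minima in the per-frame intensity variance
    curve that fall below `drop` times both flanking maxima -- candidate
    dissolve centers."""
    hits = []
    for i in range(1, len(variance) - 1):
        if variance[i] <= variance[i - 1] and variance[i] < variance[i + 1]:
            left = max(variance[:i])        # peak before the dip
            right = max(variance[i + 1:])   # peak after the dip
            if variance[i] < drop * left and variance[i] < drop * right:
                hits.append(i)
    return hits
```

Note that a real detector must also reject dips caused by camera or object motion, which is precisely the contribution claimed above.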


IEEE Transactions on Multimedia | 2011

Virtual Contour Guided Video Object Inpainting Using Posture Mapping and Retrieval

Chih-Hung Ling; Chia-Wen Lin; Chih-Wen Su; Yong-Sheng Chen; Hong-Yuan Mark Liao

This paper presents a novel framework for object completion in a video. To complete an occluded object, our method first samples a 3-D volume of the video into directional spatio-temporal slices, and performs patch-based image inpainting to complete the partially damaged object trajectories in the 2-D slices. The completed slices are then combined to obtain a sequence of virtual contours of the damaged object. Next, a posture sequence retrieval technique is applied to the virtual contours to retrieve the most similar sequence of object postures in the available non-occluded postures. Key-posture selection and indexing are used to reduce the complexity of posture sequence retrieval. We also propose a synthetic posture generation scheme that enriches the collection of postures so as to reduce the effect of insufficient postures. Our experimental results demonstrate that the proposed method can maintain the spatial consistency and temporal motion continuity of an object simultaneously.
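The posture-sequence retrieval step can be sketched as a sliding-window search over stored postures, scored by a contour distance. The symmetric nearest-point distance, function names, and window scheme below are assumptions for illustration, not the paper's actual matcher:

```python
import numpy as np

def contour_distance(a, b):
    """Symmetric mean nearest-point distance between two contours,
    each an (N, 2) array of points."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def retrieve_posture_sequence(virtual, database, length):
    """Slide a window of `length` postures over the database and return
    the start index whose window best matches the virtual contours."""
    best, best_cost = 0, float("inf")
    for s in range(len(database) - length + 1):
        cost = sum(contour_distance(v, database[s + i])
                   for i, v in enumerate(virtual))
        if cost < best_cost:
            best, best_cost = s, cost
    return best
```

Key-posture indexing, as described above, would prune this exhaustive window scan.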


Journal of Visual Communication and Image Representation | 2003

On the preview of digital movies

Liang-Hua Chen; Chih-Wen Su; Hong-Yuan Mark Liao; Chun-Chieh Shih

In this paper, a new technique is proposed for the automatic generation of a preview sequence of a feature film. The input video is decomposed into a number of basic components called shots. In this step, the proposed shot change detection algorithm is able to detect both abrupt and gradual transition boundaries. Then, shots are grouped into semantically related scenes by taking into account the visual characteristics and temporal dynamics of the video. Finally, by making use of an empirically motivated approach, the intense-interaction and action scenes are extracted to form the preview video. Compared with related works that integrate visual and audio information, our visual-based approach is computationally simple yet effective.
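The shot-to-scene grouping step can be sketched with colour-histogram intersection between consecutive shots. The representation, threshold, and function name here are hypothetical, chosen only to illustrate the idea of merging visually similar neighbours:

```python
import numpy as np

def group_shots_into_scenes(shot_histograms, sim_thresh=0.6):
    """Merge consecutive shots into one scene while their colour-histogram
    intersection stays above `sim_thresh`; start a new scene otherwise."""
    scenes, current = [], [0]
    for i in range(1, len(shot_histograms)):
        a, b = shot_histograms[i - 1], shot_histograms[i]
        sim = np.minimum(a, b).sum() / min(a.sum(), b.sum())
        if sim >= sim_thresh:
            current.append(i)
        else:
            scenes.append(current)
            current = [i]
    scenes.append(current)
    return scenes
```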


International Conference on Multimedia and Expo | 2009

An online people counting system for electronic advertising machines

Duan-Yu Chen; Chih-Wen Su; Yi-Chong Zeng; Shih-Wei Sun; Wei-Ru Lai; Hong-Yuan Mark Liao

This paper presents a novel people counting system for an environment in which a stationary camera can count the number of people watching a TV-wall advertisement or an electronic billboard without counting the repetitions in video streams in real time. The people actually watching an advertisement are identified via frontal face detection techniques. To count the number of people precisely, a complementary set of features is extracted from the torso of a human subject, as that part of the body contains relatively richer information than the face. In addition, for conducting robust people recognition, an online classifier trained by the Fisher's Linear Discriminant (FLD) strategy is developed. Our experimental results demonstrate the efficacy of the proposed system for the people counting task.
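The FLD strategy named above projects features onto the direction that best separates two classes. A minimal two-class sketch (standard textbook FLD, not the paper's online variant):

```python
import numpy as np

def fld_direction(X0, X1):
    """Two-class Fisher's Linear Discriminant direction:
    w proportional to Sw^{-1} (m1 - m0), where Sw is the pooled
    within-class scatter matrix."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0.T, bias=True) * len(X0) +
          np.cov(X1.T, bias=True) * len(X1))
    # Small ridge term keeps Sw invertible for degenerate samples.
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    return w / np.linalg.norm(w)
```

For clusters separated along the x-axis, the recovered direction is essentially the x-axis itself.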


International Conference on Image Processing | 2009

Video object inpainting using posture mapping

Chih-Hung Ling; Chia-Wen Lin; Chih-Wen Su; Hong-Yuan Mark Liao; Yong-Sheng Chen

This paper presents a novel framework for object-based video inpainting. To complete an occluded object, our method first samples a 3-D volume of the video into directional spatio-temporal slices, and then performs patch-based image inpainting to repair the partially damaged object trajectories in the 2-D slices. The completed slices are subsequently combined to obtain a sequence of virtual contours of the damaged object. The virtual contours and a posture sequence retrieval technique are then used to retrieve the most similar sequence of object postures in the available non-occluded postures. Key-posture selection and indexing are performed to reduce the complexity of posture sequence retrieval. We also propose a synthetic posture generation scheme that enriches the collection of key-postures so as to reduce the effect of insufficient key-postures. Our experimental results demonstrate that the proposed method can maintain the spatial consistency and temporal motion continuity of an object simultaneously.


Pattern Recognition Letters | 2004

Extraction of video object with complex motion

Liang-Hua Chen; Yu-Chun Lai; Chih-Wen Su; Hong-Yuan Mark Liao

Object segmentation is a problem within the scope of MPEG-4 standardization activities. This paper proposes a novel technique to extract the moving objects in video sequences. Our approach is based on the integration of mosaic-based temporal segmentation and color-based spatial segmentation. The mosaic representation of video allows us to fully exploit the spatio-temporal information in the video scene to achieve robust segmentation. Compared with related works that detect motion by the difference of two consecutive frames, our approach uses information aggregated over a group of frames. Thus, our system is more robust and is able to extract non-rigid objects with complex motion from one frame to the next.
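The benefit of aggregating over a group of frames rather than differencing two consecutive ones can be illustrated with a temporal-median sketch. The mosaic-based segmentation above is more elaborate than this; the function name and threshold are assumptions:

```python
import numpy as np

def accumulated_motion_mask(frames, thresh=15):
    """Estimate a background as the temporal median of a group of frames,
    then flag pixels whose mean absolute deviation from it is large --
    more stable than differencing two consecutive frames."""
    stack = np.stack(frames).astype(float)
    background = np.median(stack, axis=0)
    acc = np.abs(stack - background).mean(axis=0)
    return acc > thresh
```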


International Symposium on Circuits and Systems | 2009

A vision-based people counting approach based on the symmetry measure

Chih-Wen Su; Hong-Yuan Mark Liao; Hsiao-Rong Tyan

In this paper, we propose a vision-based people counting system. The symmetry property of the human torso is utilized to detect people. Using symmetry as a feature, one can overcome the incomplete extraction of human silhouettes that is commonly encountered in background subtraction. In addition, since the perspective projection effect may influence the orientation of the principal axis of a torso, we also propose a solution to address this issue. Experimental results show that the proposed system performs accurate people counting.
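A simple symmetry measure for a binary silhouette can be sketched as the fraction of foreground pixels that have a mirrored counterpart across the vertical axis through the centroid. This is a hypothetical illustration of the idea, not the paper's actual measure:

```python
import numpy as np

def symmetry_score(silhouette):
    """Fraction of foreground pixels whose mirror image across the
    vertical axis through the centroid is also foreground -- high for
    an upright torso, lower for asymmetric blobs."""
    ys, xs = np.nonzero(silhouette)
    cx = xs.mean()
    mirrored_x = np.rint(2 * cx - xs).astype(int)
    valid = (mirrored_x >= 0) & (mirrored_x < silhouette.shape[1])
    hits = silhouette[ys[valid], mirrored_x[valid]] > 0
    return hits.sum() / len(xs)
```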


International Conference on Pattern Recognition | 2010

An RST-Tolerant Shape Descriptor for Object Detection

Chih-Wen Su; Hong-Yuan Mark Liao; Yu-Ming Liang; Hsiao-Rong Tyan

In this paper, we propose a new object detection method that does not need a learning mechanism. Given a hand-drawn model as a query, we can detect and locate objects that are similar to the query model in cluttered images. To ensure invariance with respect to rotation, scaling, and translation (RST), high curvature points (HCPs) on edges are detected first. Each pair of HCPs is then used to determine a circular region, and all edge pixels covered by the circular region are transformed into a polar histogram. Finally, we use these local descriptors to detect and locate similar objects within any image. Experimental results show that the proposed method outperforms existing state-of-the-art methods.
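The polar-histogram construction can be sketched as follows: measuring radius relative to the HCP-pair distance and angle relative to the HCP-pair orientation is what buys RST tolerance. Bin counts and the function name are assumptions, not the paper's parameters:

```python
import numpy as np

def polar_descriptor(edge_points, p1, p2, r_bins=3, a_bins=8):
    """Polar histogram of edge points inside the circle whose diameter is
    the HCP pair (p1, p2). Radius is normalized by the circle radius and
    angle is measured from the p1->p2 direction, so the histogram is
    tolerant to rotation, scaling, and translation."""
    center = (p1 + p2) / 2.0
    radius = np.linalg.norm(p2 - p1) / 2.0
    dx, dy = p2 - p1
    base = np.arctan2(dy, dx)  # reference orientation from the HCP pair
    rel = edge_points - center
    r = np.linalg.norm(rel, axis=1)
    inside = r <= radius
    ang = (np.arctan2(rel[inside, 1], rel[inside, 0]) - base) % (2 * np.pi)
    hist, _, _ = np.histogram2d(r[inside] / radius, ang,
                                bins=[r_bins, a_bins],
                                range=[[0.0, 1.0], [0.0, 2 * np.pi]])
    return hist / max(hist.sum(), 1.0)
```

Rotating and scaling both the edge points and the HCP pair leaves the descriptor unchanged (away from bin boundaries), which is the invariance the method relies on.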

Collaboration


Dive into Chih-Wen Su's collaborations. Top co-authors:

Hsiao-Rong Tyan (Chung Yuan Christian University)

Liang-Hua Chen (Fu Jen Catholic University)

Chia-Wen Lin (National Tsing Hua University)

Chih-Hung Ling (National Chiao Tung University)

Kuo-Chin Fan (National Central University)

Yong-Sheng Chen (National Chiao Tung University)

Chiou-Ting Hsu (National Tsing Hua University)