William Chen
Columbia University
Publications
Featured research published by William Chen.
IEEE Transactions on Circuits and Systems for Video Technology | 1998
Shih-Fu Chang; William Chen; Horace J. Meng; Hari Sundaram; Di Zhong
The rapidity with which digital information, particularly video, is being generated has necessitated the development of tools for efficient search of these media. Content-based visual queries have been primarily focused on still image retrieval. In this paper, we propose a novel, interactive system on the Web, based on the visual paradigm, with spatiotemporal attributes playing a key role in video retrieval. We have developed innovative algorithms for automated video object segmentation and tracking, and use real-time video editing techniques while responding to user queries. The resulting system, called VideoQ, is the first on-line video search engine supporting automatic object-based indexing and spatiotemporal queries. The system performs well, with the user being able to retrieve complex video clips such as those of skiers and baseball players with ease.
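The spatio-temporal query idea above lends itself to a small illustration. The sketch below is a toy reconstruction under assumed inputs, not VideoQ's implementation: it ranks clips by how closely a stored object trajectory (here, a made-up list of points per clip) matches a user-sketched motion path, after resampling both to a common length and translating them to a common origin.

```python
# Toy spatio-temporal trajectory query (illustrative only, not VideoQ's code):
# rank clips by Euclidean distance between normalized motion paths.
import numpy as np

def resample(traj, n=32):
    """Resample a (T, 2) trajectory to n points by linear interpolation."""
    traj = np.asarray(traj, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, traj[:, k]) for k in range(2)], axis=1)

def normalize(traj):
    """Translate the path to start at the origin, for position invariance."""
    return traj - traj[0]

def rank_clips(query, database):
    """Return clip ids sorted by trajectory distance to the query sketch."""
    q = normalize(resample(query))
    scores = {cid: np.linalg.norm(q - normalize(resample(t)))
              for cid, t in database.items()}
    return sorted(scores, key=scores.get)

# Hypothetical database: a "skier" sliding down-right, a "baseball" arcing up.
db = {
    "skier":    [(0, 0), (2, -1), (4, -3), (6, -6)],
    "baseball": [(0, 0), (2, 3), (4, 4), (6, 3), (8, 0)],
}
print(rank_clips([(0, 0), (1, -1), (2, -2), (3, -3)], db))  # skier ranks first
```

A real engine would combine this motion score with other visual attributes and index the trajectories rather than scanning them linearly; the sketch only shows the matching core.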
ACM Multimedia | 1997
Shih-Fu Chang; William Chen; Horace J. Meng; Hari Sundaram; Di Zhong
The rapidity with which digital information, particularly video, is being generated has necessitated the development of tools for efficient search of these media. Content-based visual queries have been primarily focused on still image retrieval. In this paper, we propose a novel, real-time, interactive system on the Web, based on the visual paradigm, with spatio-temporal attributes playing a key role in video retrieval. We have developed algorithms for automated video object segmentation and tracking and use real-time video editing techniques while responding to user queries. The resulting system performs well, with the user being able to retrieve complex video clips, such as those of skiers and baseball players, with ease.
Storage and Retrieval for Image and Video Databases | 1999
William Chen; Shih-Fu Chang
Workshop on Applications of Computer Vision | 1998
Shih-Fu Chang; William Chen; Hari Sundaram
In this paper, we propose an efficient wavelet-based approach to achieve flexible and robust motion trajectory matching of video objects. By using the wavelet transform, our algorithm decomposes the raw object trajectory into components at different scales. We use the coarsest scale components to approximate the global motion information and the finer scale components to partition the global motion into subtrajectories. Each subtrajectory is then modeled by a set of spatial and temporal translation invariant attributes. Motion retrieval based on subtrajectory modeling has been tested and compared against other global trajectory matching schemes to show the advantages of our approach in achieving spatio-temporal invariance properties.
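As a rough sketch of that decomposition (using the PyWavelets library; the detail-energy cut rule and the mean-velocity attribute below are my own simplifications, not the paper's exact scheme):

```python
# Toy wavelet trajectory analysis: the coarse scale approximates the global
# motion, and fine-scale detail of the velocity marks the cuts between
# subtrajectories.
import numpy as np
import pywt  # PyWavelets

def trajectory_features(x, y, wavelet="haar", level=2):
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Coarsest-scale approximation of x(t): zero all detail bands, reconstruct.
    cx = pywt.wavedec(x, wavelet, level=level)
    global_x = pywt.waverec([cx[0]] + [np.zeros_like(c) for c in cx[1:]],
                            wavelet)[: len(x)]
    # Fine-scale detail of the velocity spikes where the motion changes.
    _, dvx = pywt.dwt(np.diff(x), wavelet)
    _, dvy = pywt.dwt(np.diff(y), wavelet)
    energy = dvx ** 2 + dvy ** 2
    cuts = np.where(energy > energy.mean() + 2 * energy.std())[0] * 2 + 1
    bounds = [0] + cuts.tolist() + [len(x) - 1]
    # Translation-invariant attribute per segment: mean velocity.
    feats = [(np.diff(x[a:b + 1]).mean(), np.diff(y[a:b + 1]).mean())
             for a, b in zip(bounds[:-1], bounds[1:]) if b > a]
    return global_x, feats

# Toy path: move right for 31 frames, then straight up.
t = np.arange(64.0)
x, y = np.minimum(t, 31.0), np.maximum(t - 31.0, 0.0)
print(trajectory_features(x, y)[1])  # ~[(1.0, 0.0), (0.0, 1.0)]: right, then up
```

Matching would then compare these per-segment attribute tuples instead of raw point sequences, which is what buys the spatial and temporal invariance the paper tests for.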
International Conference on Multimedia and Expo | 2000
William Chen; Shih-Fu Chang
The rapidity with which digital information, particularly video, is being generated has necessitated the development of tools for efficient search of these media. Content-based visual queries have been primarily focused on still image retrieval. In this paper, we propose a novel, interactive system on the Web, based on the visual paradigm, with spatio-temporal attributes playing a key role in video retrieval. The resulting system, VideoQ, is the first on-line video search engine supporting automatic object-based indexing and spatio-temporal queries.
International Conference on Image Processing | 1998
Shih-Fu Chang; William Chen; Hari Sundaram
We describe a system that generates semantic visual templates (SVTs) for video databases. From a single query sketch, new queries are automatically generated with each one representing a different view of the initial sketch. The combination of the original and new queries forms a large set of potential queries for a content-based video retrieval system. Through Bayesian relevance feedback, the user narrows the choices to an exemplar set. This exemplar set, or SVTs, represents personalized views of a concept and an effective set of queries to retrieve a general category of images and videos. We have generated SVTs for several classes of videos, including sunsets, high jumpers, and slalom skiers. Our experiments show that the user can quickly converge upon SVTs with optimal performance, achieving over 85% of the precision from icons chosen by exhaustive search.
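A toy version of that feedback loop may help fix ideas. Every number below (the per-template precisions and the 0.8/0.2 likelihoods) is an illustrative assumption, not the paper's model:

```python
# Toy Bayesian relevance feedback over candidate query templates: the user's
# relevant/irrelevant labels reweight the candidates until a small exemplar
# set (the SVT) dominates the posterior.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidates, each summarized by how often its retrievals truly
# match the user's concept (unknown to the system).
true_precision = np.array([0.9, 0.6, 0.3, 0.1])
posterior = np.full(len(true_precision), 0.25)   # uniform prior

for _ in range(10):
    feedback = rng.random(len(true_precision)) < true_precision  # user labels
    likelihood = np.where(feedback, 0.8, 0.2)    # a relevant hit supports a template
    posterior = posterior * likelihood
    posterior /= posterior.sum()                 # Bayes' rule, renormalized

exemplars = np.argsort(posterior)[::-1][:2]      # top-weighted views form the SVT
print(posterior.round(3), "-> exemplar set:", exemplars)
```

The point of the exercise is the convergence behavior the abstract reports: after a handful of labeled rounds, the weight concentrates on the few views that consistently retrieve the concept.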
Archive | 1998
Shih-Fu Chang; William Chen; Horace J. Meng; Hari Sundaram; Di Zhong
International Conference on Image Processing | 2001
William Chen; Shih-Fu Chang
Archive | 1999
Shih-Fu Chang; William Chen; Hari Sundaram