
Publication


Featured research published by William Chen.


IEEE Transactions on Circuits and Systems for Video Technology | 1998

A fully automated content-based video search engine supporting spatiotemporal queries

Shih-Fu Chang; William Chen; Horace J. Meng; Hari Sundaram; Di Zhong

The rapidity with which digital information, particularly video, is being generated has necessitated the development of tools for efficient search of these media. Content-based visual queries have been primarily focused on still image retrieval. In this paper, we propose a novel, interactive system on the Web, based on the visual paradigm, with spatiotemporal attributes playing a key role in video retrieval. We have developed innovative algorithms for automated video object segmentation and tracking, and use real-time video editing techniques while responding to user queries. The resulting system, called VideoQ, is the first on-line video search engine supporting automatic object-based indexing and spatiotemporal queries. The system performs well, with the user being able to retrieve complex video clips such as those of skiers and baseball players with ease.


acm multimedia | 1997

VideoQ: an automated content based video search system using visual cues

Shih-Fu Chang; William Chen; Horace J. Meng; Hari Sundaram; Di Zhong

The rapidity with which digital information, particularly video, is being generated has necessitated the development of tools for efficient search of these media. Content-based visual queries have been primarily focused on still image retrieval. In this paper, we propose a novel, real-time, interactive system on the Web, based on the visual paradigm, with spatio-temporal attributes playing a key role in video retrieval. We have developed algorithms for automated video object segmentation and tracking, and use real-time video editing techniques while responding to user queries. The resulting system performs well, with the user being able to retrieve complex video clips such as those of skiers and baseball players with ease.


Storage and Retrieval for Image and Video Databases | 1999

Motion trajectory matching of video objects

William Chen; Shih-Fu Chang

In this paper, we propose an efficient wavelet-based approach to achieve flexible and robust motion trajectory matching of video objects. By using the wavelet transform, our algorithm decomposes the raw object trajectory into components at different scales. We use the coarsest scale components to approximate the global motion information and the finer scale components to partition the global motion into subtrajectories. Each subtrajectory is then modeled by a set of spatial and temporal translation invariant attributes. Motion retrieval based on subtrajectory modeling has been tested and compared against other global trajectory matching schemes to show the advantages of our approach in achieving spatio-temporal invariance properties.


workshop on applications of computer vision | 1998

VideoQ: a fully automated video retrieval system using motion sketches

Shih-Fu Chang; William Chen; Hari Sundaram

The rapidity with which digital information, particularly video, is being generated has necessitated the development of tools for efficient search of these media. Content-based visual queries have been primarily focused on still image retrieval. In this paper, we propose a novel interactive system on the Web, based on the visual paradigm, with spatio-temporal attributes playing a key role in video retrieval. The resulting system, VideoQ, is the first on-line video search engine supporting automatic object-based indexing and spatio-temporal queries.


international conference on multimedia and expo | 2000

Generating semantic visual templates for video databases

William Chen; Shih-Fu Chang

We describe a system that generates semantic visual templates (SVTs) for video databases. From a single query sketch, new queries are automatically generated, each representing a different view of the initial sketch. The combination of the original and new queries forms a large set of potential queries for a content-based video retrieval system. Through Bayesian relevance feedback, the user narrows the choices to an exemplar set. This exemplar set, or SVT, represents personalized views of a concept and an effective set of queries to retrieve a general category of images and videos. We have generated SVTs for several classes of videos, including sunsets, high jumpers, and slalom skiers. Our experiments show that the user can quickly converge upon SVTs with optimal performance, achieving over 85% of the precision from icons chosen by exhaustive search.


international conference on image processing | 1998

Semantic visual templates: linking visual features to semantics

Shih-Fu Chang; William Chen; Hari Sundaram
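The wavelet-based trajectory decomposition described in the trajectory-matching abstract above can be illustrated with a minimal sketch: a Haar transform separates an object trajectory into a coarse approximation (global motion) and fine-scale details, whose energy spikes suggest break points between subtrajectories. This is an assumption-laden toy, not the paper's implementation; the function names, the Haar choice, and the `thresh` parameter are all illustrative.

```python
import numpy as np

def haar_decompose(signal, levels):
    """Multilevel Haar wavelet decomposition of a 1-D signal.

    Returns (coarsest-scale approximation,
             detail coefficients ordered coarsest -> finest).
    """
    approx = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        if approx.size % 2:                        # pad to even length
            approx = np.append(approx, approx[-1])
        pairs = approx.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0))
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    return approx, details[::-1]

def subtrajectory_breaks(xs, ys, levels=2, thresh=1.0):
    """Propose break points where finest-scale detail energy
    (an abrupt motion change) exceeds `thresh`."""
    _, dx = haar_decompose(xs, levels)
    _, dy = haar_decompose(ys, levels)
    energy = dx[-1] ** 2 + dy[-1] ** 2             # finest scale
    # each finest-scale coefficient spans 2 trajectory samples
    return [2 * i for i, e in enumerate(energy) if e > thresh]
```

For example, a trajectory that moves at one speed and then abruptly accelerates, such as `subtrajectory_breaks([0, 1, 2, 3, 7, 11, 15, 19], [0] * 8)`, yields break points only in the accelerated half, while a constant-velocity trajectory yields none.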


Archive | 1998

Algorithms and system for object-oriented content-based video search

Shih-Fu Chang; William Chen; Horace J. Meng; Hari Sundaram; Di Zhong


Journal of Social Psychology | 1936

Retention of the Effect of Oral Propaganda

William Chen


international conference on image processing | 2001

VISMap: an interactive image/video retrieval system using visualization and concept maps

William Chen; Shih-Fu Chang


Archive | 1999

METHOD AND SYSTEM FOR GENERATING SEMANTIC VISUAL TEMPLATES FOR IMAGE AND VIDEO RETRIEVAL

Shih-Fu Chang; William Chen; Hari Sundaram

Collaboration


Dive into William Chen's collaboration.

Top Co-Authors

Hari Sundaram

Arizona State University