Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Walter Chang is active.

Publication


Featured research published by Walter Chang.


Computer Vision and Pattern Recognition | 2017

Relationship Proposal Networks

Ji Zhang; Mohamed Elhoseiny; Scott Cohen; Walter Chang; Ahmed M. Elgammal

Image scene understanding requires learning the relationships between objects in a scene. A scene with many objects may have only a few interacting ones (e.g., in a party photo with many people, only a handful might be speaking with each other). Detecting all relationships by first detecting every individual object and then classifying all pairs would be inefficient: not only is the number of pairs quadratic in the number of objects, but classification presupposes a limited set of object categories, which is not scalable for real-world images. In this paper we address these challenges by using pairs of related regions in images to train a relationship proposer that, at test time, produces a manageable number of related regions. We name our model the Relationship Proposal Network (Rel-PN). Like object proposals, our Rel-PN is class-agnostic and thus scalable to an open vocabulary of objects. We demonstrate the ability of our Rel-PN to localize relationships with only a few thousand proposals, evaluate it on the Visual Genome dataset, and compare it to baselines of our own design. We also conduct experiments on a smaller subset of 5,000 images with over 37,000 related regions and show promising results.
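
The efficiency argument in the abstract is that scoring region pairs directly avoids both the quadratic pair-classification cost and the need for a fixed object vocabulary. The sketch below is a hypothetical, untrained stand-in for that idea, not the authors' Rel-PN architecture: random projection weights take the place of a learned pairwise scorer, and all function and variable names are invented. It only illustrates how pairwise scoring plus top-k selection yields a few thousand class-agnostic proposals instead of all ordered pairs.

    import numpy as np

    def propose_relationships(box_features, k=1000):
        """Score all ordered pairs of region features; keep the top-k pairs.

        box_features: (n, d) array, one feature vector per region proposal.
        Returns a (k, 2) array of (subject, object) region indices.
        """
        n, d = box_features.shape
        # Random projections stand in for a trained pairwise scorer.
        rng = np.random.default_rng(0)
        subj = box_features @ rng.standard_normal((d, 32))
        obj = box_features @ rng.standard_normal((d, 32))
        scores = subj @ obj.T                  # (n, n) pairwise scores
        np.fill_diagonal(scores, -np.inf)      # no self-relationships
        top = np.argsort(scores, axis=None)[::-1][:k]
        return np.column_stack(np.unravel_index(top, scores.shape))

    # 300 regions would give 89,700 ordered pairs; we keep only 1,000.
    pairs = propose_relationships(np.random.rand(300, 128), k=1000)
    print(pairs.shape)  # (1000, 2)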


Conference on Information and Knowledge Management | 2010

Generating advertising keywords from video content

Michael J. Welch; Junghoo Cho; Walter Chang

With the proliferation of online distribution methods for video, content owners require easier and more effective methods for monetization through advertising. Matching advertisements with related content has a significant impact on the effectiveness of the ads, but current methods for selecting relevant advertising keywords for videos are limited by their reliance on manually supplied metadata. In this paper we study the feasibility of using text available from video content to obtain high-quality keywords suitable for matching advertisements. In particular, we tap three sources of text for ad-keyword generation: production scripts, closed-captioning tracks, and speech-to-text transcripts. We address several challenges associated with using such data. To overcome the high error rates prevalent in automatic speech recognition and the lack of an explicit structure to hint at which keywords are most relevant, we use statistical and generative methods to identify dominant terms in the source text. To overcome the sparsity of the data and the resulting vocabulary mismatch between the source text and advertisers' chosen keywords, these terms are then expanded into a set of related keywords using related-term mining methods. Our evaluation presents a comprehensive analysis of the relative performance of these methods across a range of videos, including professionally produced films and popular videos from YouTube.
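
As a rough illustration of the "dominant term" stage described above, the sketch below ranks transcript terms with plain TF-IDF against a background corpus. This is an assumed stand-in, not the paper's statistical and generative models; the related-term expansion stage is omitted, and all names and example data are hypothetical.

    import math
    from collections import Counter

    def dominant_terms(doc_tokens, background_docs, top_n=10):
        """Rank terms in one transcript by TF-IDF against a background corpus."""
        tf = Counter(doc_tokens)
        n_docs = len(background_docs)
        scores = {}
        for term, count in tf.items():
            df = sum(1 for doc in background_docs if term in doc)
            idf = math.log((1 + n_docs) / (1 + df)) + 1.0  # smoothed IDF
            scores[term] = (count / len(doc_tokens)) * idf
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    # Noisy ASR-style transcript: filler words ("uh", "the") rank below
    # content words because "the" also appears in the background corpus.
    transcript = "uh the the race car race car engine engine roars".split()
    background = [set(d.split()) for d in ("the cat sat", "the dog ran home")]
    print(dominant_terms(transcript, background, top_n=3))
    # ['race', 'car', 'engine']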


Meeting of the Association for Computational Linguistics | 2016

Automatic Annotation of Structured Facts in Images

Mohamed Elhoseiny; Scott Cohen; Walter Chang; Brian L. Price; Ahmed M. Elgammal

Motivated by the application of fact-level image understanding, we present an automatic method for collecting structured visual facts from images with captions. Example structured facts include attributed objects, actions, interactions, and positional information. The collected annotations take the form of fact-image pairs: a structured fact together with an image region containing it. Using a language-based approach, the proposed method is able to collect hundreds of thousands of visual fact annotations with an accuracy of 83% according to human judgment. Our method automatically collected more than 380,000 visual fact annotations and more than 110,000 unique visual facts from captioned images, and localized them in the images, in less than one day of processing time on standard CPU platforms.
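
The abstract does not spell out the language pipeline, so the following is only a toy sketch of pulling subject-predicate-object style facts out of a caption with regular expressions. The patterns and names are hypothetical and far simpler than the paper's method; it just makes the notion of a "structured fact" concrete.

    import re

    def extract_facts(caption):
        """Pull <subject, predicate, object>-style tuples out of a caption."""
        facts = []
        # Interactions: "<noun> <verb>ing <noun>", e.g. "man riding a horse".
        for s, p, o in re.findall(r"(\w+) (\w+ing) (?:a |the )?(\w+)", caption):
            facts.append((s, p, o))
        # Actions: "<noun> is <verb>ing", e.g. "baby is smiling".
        for s, p in re.findall(r"(\w+) is (\w+ing)\b", caption):
            facts.append((s, p))
        return facts

    print(extract_facts("a man riding a horse while a baby is smiling"))
    # [('man', 'riding', 'horse'), ('baby', 'smiling')]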


Archive | 2006

System and method of efficiently representing and searching directed acyclic graph structures in databases

Walter Chang; Nadia Ghamrawi; Arun Swami


Archive | 2006

System and method of building and using hierarchical knowledge structures

Walter Chang; Nadia Ghamrawi


Archive | 2009

Accessing media data using metadata repository

Walter Chang; Michael J. Welch


Archive | 2009

Conversion of relational databases into triplestores

Walter Chang
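
As context for what such a conversion involves, here is a minimal sketch of one standard relational-to-RDF mapping (in the spirit of the W3C direct mapping): each row becomes a subject URI, each column a predicate, and each cell a literal. This illustrates the general problem only; it is not the patented method, and all names and URIs are illustrative.

    def rows_to_triples(table, pk, rows, base="http://example.org/"):
        """Map relational rows to (subject, predicate, object) triples."""
        triples = []
        for row in rows:
            subject = f"<{base}{table}/{row[pk]}>"       # one URI per row
            for col, val in row.items():
                predicate = f"<{base}{table}#{col}>"     # one URI per column
                triples.append((subject, predicate, f'"{val}"'))
        return triples

    for t in rows_to_triples("employee", "id",
                             [{"id": 7, "name": "Ada", "dept": "R&D"}]):
        print(" ".join(t), ".")
    # <http://example.org/employee/7> <http://example.org/employee#id> "7" .
    # <http://example.org/employee/7> <http://example.org/employee#name> "Ada" .
    # <http://example.org/employee/7> <http://example.org/employee#dept> "R&D" .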


Archive | 2008

Network visualization and navigation

Walter Chang; Nathan Sakunkoo


Archive | 2006

System and method of determining and recommending a document control policy for a document

Walter Chang; Larry Masinter


Archive | 2010

Semantic analysis of documents to rank terms

Walter Chang; Nadia Ghamrawi

Collaboration


Dive into Walter Chang's collaborations.
