
Publication


Featured research published by Eric Zavesky.


International Conference on Image Processing | 2008

Cross-domain learning methods for high-level visual concept classification

Wei Jiang; Eric Zavesky; Shih-Fu Chang; Alexander C. Loui

Exploding amounts of multimedia data increasingly require automatic indexing and classification, e.g., training classifiers to produce high-level features, or semantic concepts, chosen to represent image content, such as car, person, etc. When the application domain changes (e.g., from news video to consumer home video), classifiers trained in one domain often perform poorly in the other due to shifts in feature distributions. Additionally, classifiers trained on the new domain alone may suffer from too few positive training samples. Appropriately adapting data and models from an old domain to help classify data in a new domain is therefore an important issue. In this work, we develop a new cross-domain SVM (CDSVM) algorithm that adapts previously learned support vectors from one domain to help classification in another, obtaining better precision at almost no additional computational cost. We also give a comprehensive summary and comparative study of state-of-the-art SVM-based cross-domain learning methods. Evaluation over the latest large-scale TRECVID benchmark data set shows that our CDSVM method improves mean average precision over 36 concepts by 7.5%. For further performance gain, we also propose an intuitive selection criterion to determine which cross-domain learning method to use for each concept.
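The core idea of adapting old-domain support vectors can be sketched as follows. This is an illustrative simplification, not the paper's exact formulation: the toy data, the exponential distance weighting, and all thresholds are assumptions made for the example.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Source domain: many labeled 2-D points, label = sign of first coordinate.
Xs = rng.normal(size=(200, 2))
ys = (Xs[:, 0] > 0).astype(int)

# Target domain: same concept, shifted distribution, only a few samples.
Xt = np.array([[ 1.4, 1.0], [ 1.1, 0.8], [ 1.6, 1.2], [ 1.2, 1.5],
               [-0.2, 1.0], [-0.5, 0.7], [-0.1, 1.3], [-0.4, 0.9]])
yt = (Xt[:, 0] > 0.5).astype(int)

# Step 1: train on the source domain and keep only its support vectors.
src = SVC(kernel="linear").fit(Xs, ys)
sv, sv_y = src.support_vectors_, ys[src.support_]

# Step 2: weight each source support vector by proximity to the target
# data, so vectors far from the new domain contribute little.
dists = np.linalg.norm(sv[:, None, :] - Xt[None, :, :], axis=2).min(axis=1)
w_sv = np.exp(-dists)  # illustrative weighting choice

# Step 3: retrain on the target samples plus the weighted source SVs.
X = np.vstack([Xt, sv])
y = np.concatenate([yt, sv_y])
w = np.concatenate([np.ones(len(Xt)), w_sv])
adapted = SVC(kernel="linear").fit(X, y, sample_weight=w)
```

The weighted retraining step is what keeps the cost low: only the source support vectors, not the full source set, are carried into the new domain.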


International Conference on Multimedia and Expo | 2007

A Fast, Comprehensive Shot Boundary Determination System

Zhu Liu; David C. Gibbon; Eric Zavesky; Behzad Shahraray; Patrick Haffner

The proposed shot boundary determination (SBD) algorithm contains a set of finite state machine (FSM) based detectors for pure cut, fast dissolve, fade in, fade out, dissolve, and wipe. Support vector machines (SVM) are applied to the cut and dissolve detectors to further boost performance. Our SBD system was highly effective when evaluated in TRECVID 2006 (TREC video retrieval evaluation) and its performance was ranked highest overall.
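A minimal sketch of one FSM-based detector, here for pure cuts only: a single large inter-frame difference followed by a return to low differences fires a cut. The two states, the difference values, and the thresholds are hypothetical, chosen only to illustrate the state-machine style; the paper's detectors cover several transition types and add SVM verification.

```python
# States: "steady" and "candidate"; a cut fires when a single large
# inter-frame difference is followed by a return to low differences.
def detect_cuts(frame_diffs, high=0.5, low=0.1):
    state, cuts = "steady", []
    for i, d in enumerate(frame_diffs):
        if state == "steady" and d > high:
            state, mark = "candidate", i
        elif state == "candidate":
            # A low difference right after the spike confirms a pure cut;
            # sustained high differences suggest motion, not a cut.
            if d < low:
                cuts.append(mark)
            state = "steady"
    return cuts

diffs = [0.02, 0.03, 0.9, 0.02, 0.04, 0.6, 0.7, 0.05]
print(detect_cuts(diffs))  # [2]: the isolated spike at frame 2 is a cut
```

Note how the sustained high differences at frames 5-6 are rejected: requiring a return to low difference is what separates an instantaneous cut from ordinary motion.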


Conference on Image and Video Retrieval | 2008

Visual islands: intuitive browsing of visual search results

Eric Zavesky; Shih-Fu Chang; Cheng-Chih Yang

The amount of available digital multimedia has seen exponential growth in recent years. While advances have been made in the indexing and searching of images and videos, less focus has been given to aiding users in the interactive exploration of large datasets. In this paper a new framework, called visual islands, is proposed that reorganizes image query results from an initial search or even a general photo collection using a fast, non-global feature projection to compute 2D display coordinates. A prototype system is implemented and evaluated with three core goals: fast browsing, intuitive display, and non-linear exploration. Using the TRECVID 2005 [15] dataset, 10 users evaluated these goals over 24 topics. Experiments show that users experience improved comprehensibility and achieve a significant page-level precision improvement with the visual islands framework over traditional paged browsing.
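The non-global projection idea can be sketched roughly as below: project only the current page of results (not the whole corpus) onto two principal components, then snap to a display grid so visually similar items land in nearby cells. The use of SVD/PCA and the grid size are assumptions for illustration, not the paper's exact projection.

```python
import numpy as np

def island_layout(features, grid=4):
    # Project only the current page's feature vectors onto their top two
    # principal components -- a fast, local (non-global) 2-D map.
    X = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    xy = X @ vt[:2].T
    # Snap coordinates to a grid x grid display so similar images
    # occupy neighboring cells.
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    cells = ((xy - lo) / (hi - lo + 1e-9) * (grid - 1)).round().astype(int)
    return cells

feats = np.random.default_rng(1).normal(size=(16, 8))  # 16 results, 8-D features
cells = island_layout(feats)
```

Because the projection is computed per page, it stays fast enough for interactive browsing and adapts to whatever subset of results is currently displayed.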


Multimedia Information Retrieval | 2008

CuZero: embracing the frontier of interactive visual search for informed users

Eric Zavesky; Shih-Fu Chang

Users of most visual search systems suffer from two primary sources of frustration. First, a query must be formulated before any search is executed, and traditional keyword search systems offer only passive, non-interactive input, which frustrates users unfamiliar with the search topic or the target data set. Second, after query formulation, result inspection is often relegated to a tiresome, linear walk through results bound to a single query. In this paper, we reexamine the struggles users encounter with existing paradigms and present a prototype solution, CuZero. CuZero employs a unique query process that allows zero-latency query formulation for an informed human search: relevant visual concepts discovered by various strategies (lexical mapping, statistical occurrence, and search result mining) are automatically recommended in real time as users enter each word. CuZero also introduces a new, intuitive visualization that lets users navigate the concept space seamlessly and at will, while simultaneously displaying the results corresponding to arbitrary permutations of multiple concepts in real time. The result is an environment in which the user can rapidly scan many different query permutations without reformulating the query. Such navigation also allows efficient exploration of different query types, such as semantic concepts, visual descriptors, and example content, within one session rather than the repetitive trials of conventional systems.
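The lexical-mapping strategy for real-time concept recommendation might look like the toy sketch below. The `LEXICON` table and its entries are entirely hypothetical; a real system would also mine statistical co-occurrence and search results, as the abstract describes.

```python
# Hypothetical lexicon mapping query terms to visual concept detectors.
LEXICON = {
    "car": ["vehicle", "road", "urban"],
    "beach": ["waterscape", "sky", "person"],
    "soccer": ["sports", "crowd", "grass"],
}

def recommend(query):
    # Recommend concepts incrementally, preserving the order in which
    # words are typed and skipping duplicates.
    seen, out = set(), []
    for word in query.lower().split():
        for concept in LEXICON.get(word, []):
            if concept not in seen:
                seen.add(concept)
                out.append(concept)
    return out

print(recommend("car beach"))  # ['vehicle', 'road', 'urban', 'waterscape', 'sky', 'person']
```

Running the lookup after every word keeps the recommendation latency effectively zero, which is the property the query process is built around.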


Proceedings of the 2nd ACM TRECVid Video Summarization Workshop | 2008

Brief and high-interest video summary generation: evaluating the AT&T labs rushes summarizations

Zhu Liu; Eric Zavesky; Behzad Shahraray; David C. Gibbon; Andrea Basso

Video summarization is essential for the user to understand the main theme of video sequences in a short period, especially when the volume of the video is huge and the content is highly redundant. In this paper, we present a video summarization system, built for the rushes summarization task in TRECVID 2008. The goal is to create a video excerpt including objects and events in the video with minimum redundancy and duration (up to 2% of the original video). We first segment a video into shots and then apply a multi-stage clustering algorithm to eliminate similar shots. Frame importance values that depend on both the temporal content variation and the spatial image salience are used to select the most interesting video clips as part of the summarization. We test our system with two output configurations - a dynamic playback rate and the native playback rate - as a tradeoff between ground truth inclusion rate and ease of browsing. TRECVID evaluation results show that our system achieves a good inclusion rate and confirm that the created summaries are easy to understand.
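The final selection step, picking the most important clips under the 2% duration budget, can be sketched as a simple greedy pass. The greedy strategy and the example scores are assumptions for illustration; the paper's importance values come from temporal variation and spatial salience computed on the actual frames.

```python
def select_clips(importance, durations, total_duration, budget=0.02):
    # Greedily take the most important clips until the summary reaches
    # the duration budget (2% of the original video in this task).
    limit = total_duration * budget
    order = sorted(range(len(importance)), key=lambda i: -importance[i])
    chosen, used = [], 0.0
    for i in order:
        if used + durations[i] <= limit:
            chosen.append(i)
            used += durations[i]
    return sorted(chosen)  # keep clips in original temporal order

imp = [0.9, 0.1, 0.7, 0.4]       # per-clip importance scores (illustrative)
dur = [10.0, 10.0, 10.0, 10.0]   # seconds per candidate clip
print(select_clips(imp, dur, total_duration=1000.0))  # [0, 2]
```

Returning the chosen indices in temporal order matters for browsing: the summary should play events in the sequence they occur, even though selection ranks by importance.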


Conference on Image and Video Retrieval | 2007

Columbia University's semantic video search engine 2008

Shih-Fu Chang; Lyndon Kennedy; Eric Zavesky

We briefly describe CuVid, Columbia University's video search engine, a system that enables semantic multimodal search over video broadcast news collections. The system was developed and first evaluated for the NIST TRECVID 2005 benchmark and later expanded to include a large number (374) of visual concept detectors. Our focus is on comparative studies of the pros and cons of search methods built on individual modalities (keyword, image, near-duplicate, and semantic concept) and their combinations, without requiring advanced tools and interfaces for interactive search.


Academic Press Library in Signal Processing | 2014

Chapter 13 - Joint Audio-Visual Processing for Video Copy Detection

Zhu Liu; Eric Zavesky; David C. Gibbon; Behzad Shahraray

Video copy detection is essential for a spectrum of applications, including video search, monitoring, and copyright infringement tracking. With enormous growth in the volume of available content and the need to find it quickly, new applications demand robust and efficient underlying video copy detection algorithms. This chapter reviews recent progress in this area: audio-only and visual-only methods as well as joint audio-visual approaches. More detailed coverage is given to a video copy detection system built by the authors at AT&T Labs. The system is composed of audio- and visual-based copy detection submodules, where a hash-based indexing and search engine is employed for efficient content search. A late audio-visual fusion scheme combines the copy detection results from both modalities to achieve more robust and accurate results. This system was evaluated in recent TRECVID large-scale copy detection tasks as well as in consumer applications aiding personal library management and product search.
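The hash-based indexing and search idea can be sketched at a toy scale: index each frame fingerprint, then let query frames vote for (video, time-offset) pairs, so a copy shows up as one offset receiving many consistent votes. The integer fingerprints and the voting scheme here are illustrative assumptions, not the chapter's actual audio/visual features.

```python
from collections import defaultdict

def build_index(videos):
    # Map each frame fingerprint to the (video id, frame offset) pairs
    # where it occurs.
    index = defaultdict(list)
    for vid, prints in videos.items():
        for t, h in enumerate(prints):
            index[h].append((vid, t))
    return index

def match(index, query):
    # Each query frame votes for (video, time offset); a consistent
    # offset across many query frames indicates a copied segment.
    votes = defaultdict(int)
    for qt, h in enumerate(query):
        for vid, t in index.get(h, []):
            votes[(vid, t - qt)] += 1
    return max(votes, key=votes.get) if votes else None

videos = {"a": [11, 22, 33, 44, 55], "b": [66, 77, 88]}
hit = match(build_index(videos), [33, 44, 55])
print(hit)  # ('a', 2): the query aligns with video 'a' starting at frame 2
```

The temporal-offset voting is what makes the lookup robust: a few accidental hash collisions scatter across offsets, while a true copy concentrates all its votes on one.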


TRECVID | 2005

Columbia University TRECVID-2005 Video Search and High-Level Feature Extraction

Shih-Fu Chang; Winston H. Hsu; Lyndon Kennedy; Lexing Xie; Akira Yanagawa; Eric Zavesky; Dong-Qing Zhang


TRECVID | 2008

Columbia University/VIREO-CityU/IRIT TRECVID2008 High-Level Feature Extraction and Interactive Video Search

Shih-Fu Chang; Junfeng He; Yu-Gang Jiang; Elie El Khoury; Chong-Wah Ngo; Akira Yanagawa; Eric Zavesky


Archive | 2010

System and method for dynamically and interactively searching media data

Shih-Fu Chang; Eric Zavesky

Collaboration


Dive into Eric Zavesky's collaboration.

Top Co-Authors

Wei Jiang

Eastman Kodak Company
