
Publication


Featured research published by Emily Moxley.


IEEE Transactions on Multimedia | 2010

Video Annotation Through Search and Graph Reinforcement Mining

Emily Moxley; Tao Mei; B. S. Manjunath

Unlimited-vocabulary annotation of multimedia documents remains elusive despite progress in solving the problem for a small, fixed lexicon. Taking advantage of the repetitive nature of modern online media databases, in which documents are annotated independently, we present an approach to automatically annotating multimedia documents that uses mining techniques to discover new annotations from similar documents and to filter out existing incorrect annotations. The annotation set is not limited to words that have training data or prebuilt models; it is limited only by the collective annotation vocabulary of all the database documents. A graph reinforcement method driven by a particular modality (e.g., visual) determines the contribution of a similar document to the annotation target, and the graph supplies possible annotations of a different modality (e.g., text) that can be mined for annotations of the target. Experiments are performed on videos crawled from YouTube. A customized precision-recall metric shows that the annotations obtained using the proposed method are superior to those originally attached to the documents. These extended, filtered tags also outperform a state-of-the-art semi-supervised graph reinforcement learning technique applied to the initial user-supplied annotations.
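The mining step can be pictured as similarity-weighted voting: tags on visually similar videos reinforce matching tags on the target and supply new candidates, while weakly supported tags are filtered out. The sketch below is a loose Python illustration of that idea, assuming precomputed visual similarities; all names and thresholds are hypothetical, not the paper's implementation.

```python
# Hypothetical sketch: similarity-weighted tag voting with filtering.
from collections import defaultdict

def mine_tags(initial_tags, neighbors, keep_threshold=0.1, top_k=10):
    """Extend and filter a target video's tags.

    initial_tags: set of user-supplied tags on the target video
    neighbors:    list of (visual_similarity, tag_set) pairs from similar videos
    """
    votes = defaultdict(float)
    total_sim = sum(sim for sim, _ in neighbors) or 1.0
    for sim, tags in neighbors:
        for tag in tags:
            votes[tag] += sim / total_sim  # similarity-weighted textual evidence

    # Existing tags with little support from similar videos are filtered;
    # new annotations are mined from the strongest neighbor votes.
    kept = {t for t in initial_tags if votes[t] >= keep_threshold}
    mined = sorted(votes, key=votes.get, reverse=True)[:top_k]
    return kept | set(mined)
```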


International Conference on Multimedia and Expo | 2008

Automatic video annotation through search and mining

Emily Moxley; Tao Mei; Xian-Sheng Hua; Wei-Ying Ma; B. S. Manjunath

Conventional approaches to video annotation predominantly focus on supervised identification of a limited set of concepts, while unsupervised annotation with an unlimited vocabulary remains largely unexplored. This work exploits the overlap in content of news video to annotate automatically, by mining similar videos that reinforce, filter, and improve the original annotations. The algorithm employs a two-step process of search followed by mining. Given a query video consisting of visual content and speech-recognized transcripts, similar videos are first ranked by a multimodal search. The transcripts associated with these similar videos are then mined to extract keywords for the query. Extensive experiments over the TRECVID 2005 corpus show the superiority of the proposed approach over applying the mining process to the original video alone. This work represents a first attempt at unsupervised automatic video annotation that leverages overlapping video content.
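As a rough rendering of the two-step pipeline, the following sketch fuses two similarity scores to rank a corpus, then mines frequent transcript terms from the top results. The function names, fusion weight, and scoring hooks are illustrative assumptions, not the paper's actual components.

```python
# Hypothetical sketch of search-then-mine annotation.
from collections import Counter

def annotate(query, corpus, visual_sim, text_sim,
             w_visual=0.5, top_k=20, n_keywords=10):
    # Step 1: multimodal search -- fuse visual and transcript similarity.
    ranked = sorted(
        corpus,
        key=lambda v: w_visual * visual_sim(query, v)
                      + (1 - w_visual) * text_sim(query, v),
        reverse=True,
    )[:top_k]

    # Step 2: mine keywords from the transcripts of the most similar videos.
    counts = Counter(word
                     for v in ranked
                     for word in v["transcript"].lower().split())
    return [word for word, _ in counts.most_common(n_keywords)]
```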


International Conference on Multimedia and Expo | 2009

Not all tags are created equal: Learning Flickr tag semantics for global annotation

Emily Moxley; Jim Kleban; Jiejun Xu; B. S. Manjunath

Large collaborative datasets offer the challenging opportunity of creating systems capable of extracting knowledge in the presence of noisy data. In this work we explore the ability to automatically learn tag semantics by mining a global georeferenced image collection crawled from Flickr, with the aim of improving an automatic annotation system. We are able to categorize sets of tags as places, landmarks, and visual descriptors. By organizing our dataset of more than 1.69 million images in a quadtree, we can efficiently find geographic areas dense enough to yield useful results for place and landmark extraction. Precision-recall curves comparing our techniques against prior work on place-tag identification, and against manually ground-truthed landmark annotations, show the merit of our methods applied at world scale.
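A quadtree adaptively subdivides the globe so that photo-dense regions get fine cells while sparse regions stay coarse, making dense-area lookup cheap. Below is a minimal, self-contained quadtree sketch in the spirit of that organization; the capacity, depth, and density parameters are illustrative choices, not values from the paper.

```python
# Hypothetical sketch: a point quadtree over (lat, lon) photo coordinates.
class QuadTree:
    def __init__(self, bounds, capacity=500, depth=0, max_depth=12):
        self.bounds = bounds          # (min_lat, min_lon, max_lat, max_lon)
        self.capacity = capacity
        self.depth = depth
        self.max_depth = max_depth
        self.points = []
        self.children = None          # four subtrees after a split

    def insert(self, lat, lon):
        if self.children is not None:
            self._child_for(lat, lon).insert(lat, lon)
            return
        self.points.append((lat, lon))
        if len(self.points) > self.capacity and self.depth < self.max_depth:
            self._split()

    def _split(self):
        la0, lo0, la1, lo1 = self.bounds
        lam, lom = (la0 + la1) / 2, (lo0 + lo1) / 2
        self.children = [
            QuadTree(b, self.capacity, self.depth + 1, self.max_depth)
            for b in [(la0, lo0, lam, lom), (la0, lom, lam, lo1),
                      (lam, lo0, la1, lom), (lam, lom, la1, lo1)]
        ]
        for lat, lon in self.points:
            self._child_for(lat, lon).insert(lat, lon)
        self.points = []

    def _child_for(self, lat, lon):
        la0, lo0, la1, lo1 = self.bounds
        lam, lom = (la0 + la1) / 2, (lo0 + lo1) / 2
        return self.children[(2 if lat >= lam else 0) + (1 if lon >= lom else 0)]

    def dense_leaves(self, min_points):
        """Yield leaf cells with enough photos to mine places or landmarks."""
        if self.children is None:
            if len(self.points) >= min_points:
                yield self
        else:
            for child in self.children:
                yield from child.dense_leaves(min_points)
```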


Conference on Image and Video Retrieval | 2009

Global annotation on georeferenced photographs

Jim Kleban; Emily Moxley; Jiejun Xu; B. S. Manjunath

We present an efficient world-scale system that provides automatic annotations for collections of geo-referenced photos. As a user uploads a photograph, a place of origin is estimated from visual features; the user can then refine this estimate. Once the correct location is provided, tags are suggested based on geographic and image similarity, retrieved from a large database of 1.2 million images crawled from Flickr. The system mines geographically relevant terms and ranks candidate suggestions by their posterior probability given the observed visual and geocoordinate features. A series of experiments analyzes geocoordinate prediction accuracy and the precision-recall performance of the tag suggestions using information retrieval techniques. The system is novel in that it fuses geographic and visual information to annotate photographs taken anywhere in the world in a matter of seconds.
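If the geographic and visual evidence are assumed conditionally independent given the tag (a naive Bayes assumption not stated in the abstract), the posterior ranking reduces to summing log-likelihoods with a tag prior. The sketch below shows only that scoring skeleton; the probability estimators are hypothetical stand-ins supplied by the caller, not the paper's models.

```python
# Hypothetical sketch: naive-Bayes style posterior ranking of candidate tags,
#   log P(tag | geo, visual) ~ log P(geo|tag) + log P(visual|tag) + log P(tag)
import math

def rank_tags(candidates, p_tag, p_geo_given_tag, p_visual_given_tag):
    """Sort candidate tags by posterior probability given the observed
    geocoordinate and visual features of an uploaded photo."""
    eps = 1e-12  # floor keeps log() defined for zero-count estimates

    def log_posterior(tag):
        return (math.log(max(p_geo_given_tag(tag), eps))
                + math.log(max(p_visual_given_tag(tag), eps))
                + math.log(max(p_tag(tag), eps)))

    return sorted(candidates, key=log_posterior, reverse=True)
```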


Proceedings of the International Workshop on TRECVID Video Summarization | 2007

Feature fusion and redundancy pruning for rush video summarization

Jim Kleban; Anindya Sarkar; Emily Moxley; Stephen Mangiat; Swapna Joshi; Thomas Kuo; B. S. Manjunath

This paper presents a video summarization technique for rushes that employs high-level feature fusion to identify segments for inclusion. It aims to capture distinct video events using a variety of features: k-means based weighting, speech, camera motion, significant differences in HSV color space, and a dynamic time warping (DTW) based feature that suppresses repeated scenes. These feature functions drive a weighted k-means clustering that identifies visually distinct, important segments for the final summary. The optimal weights for the individual features are obtained with a gradient descent algorithm that maximizes the recall of ground-truth events from representative training videos. Analysis reveals a lengthy computation time but high-quality results (60% average recall over 42 test videos), based on manually judged inclusion of distinct shots. The summaries were judged relatively easy to view and showed an average amount of redundancy.
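Weighted k-means differs from the standard algorithm only in the distance computation, where each feature dimension is scaled by a weight. A compact numpy sketch follows; here the weights are passed in fixed, whereas the paper tunes them by gradient descent on training-video recall, and all names are illustrative.

```python
# Hypothetical sketch: k-means with per-feature distance weights.
import numpy as np

def weighted_kmeans(X, k, w, iters=50, seed=0):
    """Cluster segment features X (n, d) using per-feature weights w (d,)."""
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # (n, k) matrix of weighted squared distances to every center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2 * w).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```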


Multimedia Information Retrieval | 2008

SpiritTagger: a geo-aware tag suggestion tool mined from Flickr

Emily Moxley; Jim Kleban; B. S. Manjunath


Conference on Spatial Information Theory | 2009

Terabytes of Tobler: evaluating the first law in a massive, domain-neutral representation of world knowledge

Brent J. Hecht; Emily Moxley


Storage and Retrieval for Image and Video Databases | 2008

Video Fingerprinting: Features for Duplicate and Similar Video Detection and Query-based Video Retrieval

Anindya Sarkar; Pratim Ghosh; Emily Moxley; B. S. Manjunath


Archive | 2007

CORTINA: Searching a 10 Million+ Images Database

Elisa Drelie Gelasca; Pratim Ghosh; Emily Moxley; Joriz De Guzman; Jiejun Xu; Zhiqiang Bi; Steffen Gauglitz; Amir M. Rahimi; B. S. Manjunath



Collaboration


An overview of Emily Moxley's top co-authors.

Top Co-Authors

Anindya Sarkar, University of California

Jiejun Xu, University of California

Jim Kleban, University of California

Pratim Ghosh, University of California

Swapna Joshi, University of California

Amir M. Rahimi, University of California