Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Apostol Natsev is active.

Publication


Featured research published by Apostol Natsev.


Government Information Quarterly | 2012

Social media use by government: From the routine to the critical

Andrea L. Kavanaugh; Edward A. Fox; Steven D. Sheetz; Seungwon Yang; Lin Tzy Li; Donald J. Shoemaker; Apostol Natsev; Lexing Xie

Social media and online services with user-generated content (e.g., Twitter, Facebook, Flickr, YouTube) have made a staggering amount of information (and misinformation) available. Government officials seek to leverage these resources to improve services and communication with citizens. Significant potential exists to identify issues in real time, so emergency managers can monitor and respond to issues concerning public safety. Yet, the sheer volume of social data streams generates substantial noise that must be filtered in order to detect meaningful patterns and trends. Important events can then be identified as spikes in activity, while event meaning and consequences can be deciphered by tracking changes in content and public sentiment. This paper presents findings from an exploratory study we conducted between June and December 2010 with government officials in Arlington, VA (and the greater National Capital Region around Washington, D.C.), with the broad goal of understanding social media use by government officials as well as community organizations, businesses, and the public at large. A key objective was also to understand social media use specifically for managing crisis situations from the routine (e.g., traffic, weather crises) to the critical (e.g., earthquakes, floods).


European Conference on Computer Vision | 2012

Scene aligned pooling for complex video recognition

Liangliang Cao; Yadong Mu; Apostol Natsev; Shih-Fu Chang; Gang Hua; John R. Smith

Real-world videos often contain dynamic backgrounds and evolving people activities, especially web videos generated by users in unconstrained scenarios. This paper proposes a new visual representation, namely scene aligned pooling, for the task of event recognition in complex videos. Based on the observation that a video clip is often composed of shots of different scenes, the key idea of scene aligned pooling is to decompose any video features into concurrent scene components, and to construct classification models adaptive to different scenes. The experiments on two large scale real-world datasets, the TRECVID Multimedia Event Detection 2011 corpus and the Human Motion Recognition Databases (HMDB), show that our new visual representation can consistently improve various kinds of visual features, such as low-level color and texture features, mid-level histograms of local descriptors (e.g., SIFT or space-time interest points), and high-level semantic model features, by a significant margin. For example, we improve the state-of-the-art accuracy on the HMDB dataset by 20%.
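The pooling step itself is simple to sketch. A minimal illustration, assuming per-frame features and a precomputed scene label per frame (scene detection and the scene-adaptive classifiers described in the paper are omitted):

```python
import numpy as np

def scene_aligned_pooling(frame_features, scene_labels):
    """Average-pool per-frame features separately within each scene."""
    pooled = {}
    for scene in np.unique(scene_labels):
        pooled[scene] = frame_features[scene_labels == scene].mean(axis=0)
    return pooled

# Three frames, two scenes (hypothetical 2-D features for illustration).
feats = np.array([[1.0, 0.0], [3.0, 0.0], [0.0, 2.0]])
labels = np.array([0, 0, 1])
pools = scene_aligned_pooling(feats, labels)   # one pooled vector per scene
```

Each scene then gets its own pooled descriptor instead of a single global pool, which is what lets downstream classifiers adapt to scene changes within a clip.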


Proceedings of the IEEE | 2012

Multimedia Semantics: Interactions Between Content and Community

Hari Sundaram; Lexing Xie; Munmun De Choudhury; Yu-Ru Lin; Apostol Natsev

This paper reviews the state of the art and some emerging issues in research areas related to pattern analysis and monitoring of web-based social communities. This research area is important for several reasons. First, the presence of near-ubiquitous low-cost computing and communication technologies has enabled people to access and share information at an unprecedented scale. The scale of the data necessitates new research for making sense of such content. Furthermore, popular websites with sophisticated media sharing and notification features allow users to stay in touch with friends and loved ones; these sites also help to form explicit and implicit social groups. These social groups are an important source of information to organize and to manage multimedia data. In this article, we study how media-rich social networks provide additional insight into familiar multimedia research problems, including tagging and video ranking. In particular, we advance the idea that the contextual and social aspects of media are as important for successful multimedia applications as is the media content. We examine the inter-relationship between content and social context through the prism of three key questions. First, how do we extract the context in which social interactions occur? Second, does social interaction provide value to the media object? Finally, how do social media facilitate the repurposing of shared content and engender cultural memes? We present three case studies to examine these questions in detail. In the first case study, we show how to discover structure latent in the social media data, and use the discovered structure to organize Flickr photo streams. In the second case study, we discuss how to determine the interestingness of conversations—and of participants—around videos uploaded to YouTube. Finally, we show how the analysis of visual content, in particular tracing of content remixes, can help us understand the relationship among YouTube participants. 
For each case, we present an overview of recent work and review the state of the art. We also discuss two emerging issues related to the analysis of social networks—robust data sampling and scalable data analysis.


IEEE Transactions on Multimedia | 2013

Tracking Large-Scale Video Remix in Real-World Events

Lexing Xie; Apostol Natsev; Xuming He; John R. Kender; Matthew L. Hill; John R. Smith

Content sharing networks, such as YouTube, contain traces of both explicit online interactions (such as likes, comments, or subscriptions), as well as latent interactions (such as quoting, or remixing, parts of a video). We propose visual memes, or frequently re-posted short video segments, for detecting and monitoring such latent video interactions at scale. Visual memes are extracted with high accuracy by scalable detection algorithms that we develop. We further augment visual memes with text, via a statistical model of latent topics. We model content interactions on YouTube with visual memes, defining several measures of influence and building predictive models for meme popularity. Experiments are carried out with over 2 million video shots from more than 40,000 videos on two prominent news events in 2009: the election in Iran and the swine flu epidemic. In these two events, a high percentage of videos contain remixed content, and it is apparent that traditional news media and citizen journalists have different roles in disseminating remixed content. We perform two quantitative evaluations for annotating visual memes and predicting their popularity. The proposed joint statistical model of visual memes and words outperforms an alternative concurrence model, with an average error of 2% for predicting meme volume and 17% for predicting meme lifespan.
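The detection idea, finding shots that recur across videos, can be sketched with a toy fingerprint-and-index scheme. This is only an illustration of the indexing pattern; the fingerprint here is a crude hash, not the paper's detection algorithm:

```python
import numpy as np
from collections import defaultdict

def coarse_fingerprint(shot_frames, bits=16):
    """Hash a shot by thresholding its mean frame against its median value."""
    vals = shot_frames.mean(axis=0).ravel()[:bits]
    return tuple((vals > np.median(vals)).astype(int))

def find_memes(videos):
    """Index shots by fingerprint; fingerprints seen in >1 place are memes."""
    index = defaultdict(list)
    for vid, shots in videos.items():
        for i, shot in enumerate(shots):
            index[coarse_fingerprint(shot)].append((vid, i))
    return {fp: locs for fp, locs in index.items() if len(locs) > 1}

rng = np.random.default_rng(2)
shared = rng.random((5, 4, 4))   # a segment re-posted in both videos
videos = {"v1": [shared], "v2": [rng.random((5, 4, 4)), shared]}
memes = find_memes(videos)       # the shared segment surfaces as a meme
```

Grouping shots by a compact fingerprint is what makes the search scale: candidate matches are found by hash lookup rather than pairwise shot comparison.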


International Conference on Image Processing | 2003

Exploring semantic dependencies for scalable concept detection

Apostol Natsev; Milind R. Naphade; John R. Smith

Semantic concept detection from multimedia features enables high-level access to multimedia content. While constructing robust detectors is feasible for concepts with sufficient training samples, concepts with fewer training samples are hard to train effectively. Comparable performance may be possible if the dependence of these concepts on the ones that can be robustly modeled is exploited. In this paper we demonstrate this phenomenon using the TREC Video 2002 corpus as a test bed. Using a basic set of 12 semantic concepts modeled with support vector machines, we predict the presence of 4 other concepts. We then compare the performance of these predictors with direct SVM models for the same 4 concepts and observe improvements of up to 150% in average precision.
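Predicting a weakly-trained concept from the outputs of robust base detectors is essentially a stacking step. A toy illustration with synthetic detector scores and a logistic-regression stacker standing in for the paper's SVMs (the dependency structure here is invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic confidence scores from 12 robust base-concept detectors.
n_samples, n_base = 200, 12
scores = rng.random((n_samples, n_base))
# Hypothetical rare concept that depends on base concepts 0 and 3.
y = (scores[:, 0] + scores[:, 3] > 1.0).astype(float)

# Logistic-regression stacker trained on the detector outputs.
w, b = np.zeros(n_base), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(scores @ w + b)))   # predicted probabilities
    w -= 0.5 * scores.T @ (p - y) / n_samples      # gradient step on weights
    b -= 0.5 * (p - y).mean()                      # gradient step on bias

pred = (scores @ w + b) > 0.0
accuracy = (pred == y.astype(bool)).mean()
```

The stacker never sees raw multimedia features, only the 12 base-detector scores, which is why it can be trained even when the target concept has few labeled examples.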


International Conference on Multimedia and Expo | 2002

A study of image retrieval by anchoring

Apostol Natsev; John R. Smith

Anchoring is a technique for representing objects by their distances to a few well-chosen anchors, or vantage points. It can be used in content-based image retrieval for computing image similarity as a function of distances to a fixed set of representative images. Since the number of anchors is usually small, this leads to a reduced dimensionality for similarity searching, enables efficient indexing, and avoids potentially expensive similarity computations in the original feature domain, while guaranteeing no false dismissals. Anchoring is therefore surprisingly simple, yet effective, and flavors of it have seen application in speech recognition, audio classification, protein homology detection, and shape matching. In this paper, we describe the anchoring technique in some detail and study its properties, from both an empirical and an analytical standpoint. In particular, we investigate issues in baseline distance selection, anchor selection, and the number of anchors. We compare different approaches and evaluate the performance of different parameter settings. We also propose two new anchor selection heuristics which may overcome some of the drawbacks of the currently used greedy selection methods.
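The no-false-dismissals guarantee follows from the triangle inequality: for any anchor a, |d(q,a) − d(x,a)| ≤ d(q,x), so the Chebyshev distance in anchor space lower-bounds the true distance. A minimal sketch with synthetic data (the anchor choice here is arbitrary, not one of the paper's selection heuristics):

```python
import numpy as np

def anchor_embed(points, anchors):
    """Represent each point by its vector of distances to the anchors."""
    return np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=2)

rng = np.random.default_rng(1)
data = rng.random((100, 16))      # toy image features in the original space
anchors = data[:4]                # arbitrary anchors for illustration

embedded = anchor_embed(data, anchors)    # (100, 4): reduced 4-D space
query = anchor_embed(data[:1], anchors)   # embed the query the same way

# Chebyshev distance in anchor space lower-bounds the true distance,
# so thresholding it prunes candidates with no false dismissals.
lower_bounds = np.abs(embedded - query).max(axis=1)
candidates = np.nonzero(lower_bounds <= 0.5)[0]   # then verify these exactly
```

Only the surviving candidates need the (potentially expensive) exact distance computation in the original feature domain.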


International Conference on Multimedia and Expo | 2004

Multi-granular detection of regional semantic concepts [video annotation]

Milind R. Naphade; Apostol Natsev; Ching-Yung Lin; John R. Smith

A large number of interesting visual semantic concepts occur at a sub-frame granularity in images and occupy one or more regions at the sub-frame level. Detecting these concepts is challenging due to segmentation imperfections. We propose multi-granular detection of visual concepts that have regional support. We build a single set of support vector machine based binary concept models from a training set with manually marked-up regions. In this paper, we show that detection can be significantly improved by scoring these models over multiple granularities in the test set images, where the regions are automatically detected as a preprocessing step. Using 27 regional semantic concepts from the NIST TRECVID 2003 common annotation lexicon and corpus, we demonstrate that multi-granular scoring leads to improved detection.


Data Compression Conference | 1997

Text compression via alphabet re-representation

Philip M. Long; Apostol Natsev; Jeffrey Scott Vitter

We consider re-representing the alphabet so that a representation of a character reflects its properties as a predictor of future text. This enables us to use an estimator from a restricted class to map contexts to predictions of upcoming characters. We describe an algorithm that uses this idea in conjunction with neural networks. The performance of this implementation is compared to other compression methods, such as UNIX compress, gzip, PPMC, and an alternative neural network approach.


Knowledge Discovery and Data Mining | 2018

Collaborative Deep Metric Learning for Video Understanding

Joonseok Lee; Sami Abu-El-Haija; Balakrishnan Varadarajan; Apostol Natsev

The goal of video understanding is to develop algorithms that enable machines to understand videos at the level of human experts. Researchers have tackled various domains, including video classification, search, and personalized recommendation. However, there is a research gap in combining these domains in one unified learning framework. Toward that goal, we propose a deep network that embeds videos, using their audio-visual content, onto a metric space that preserves video-to-video relationships. We then use the trained embedding network to tackle various domains, including video classification and recommendation, showing significant improvements over state-of-the-art baselines. The proposed approach is highly scalable for deployment on large-scale video sharing platforms such as YouTube.
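Relationship-preserving embeddings of this kind are typically trained with a ranking objective. A toy sketch of a triplet hinge loss on hypothetical video embeddings (the paper's network architecture, training data, and exact loss are omitted):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss: pull related videos together, push unrelated ones apart."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings (stand-ins for the network's audio-visual embeddings).
a = np.array([1.0, 0.0])   # anchor video
p = np.array([0.9, 0.1])   # related video: already close, so zero loss
n = np.array([0.0, 1.0])   # unrelated video: already far
loss = triplet_loss(a, p, n)
```

Because the loss depends only on distances in the embedding space, the same trained network can serve classification, search, and recommendation via nearest-neighbor lookups.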


Very Large Data Bases | 2001

Supporting Incremental Join Queries on Ranked Inputs

Apostol Natsev; Yuan-Chi Chang; John R. Smith; Chung-Sheng Li; Jeffrey Scott Vitter

Collaboration


Dive into Apostol Natsev's collaborations.
