
Publications


Featured research published by Matthew L. Hill.


ACM Multimedia | 2011

Visual memes in social media: tracking real-world news in YouTube videos

Lexing Xie; Apostol Natsev; John R. Kender; Matthew L. Hill; John R. Smith

We propose visual memes, or frequently reposted short video segments, for tracking large-scale video remix in social media. Visual memes are extracted by novel and highly scalable detection algorithms that we develop, with over 96% precision and 80% recall. We monitor real-world events on YouTube, and we model interactions using a graph model over memes, with people and content as nodes, and meme postings as links. This allows us to define several measures of influence. These abstractions, using more than two million video shots from several large-scale event datasets, enable us to quantify and efficiently extract several important observations: over half of the videos contain remixed content, which appears rapidly; video view counts, particularly high ones, are poorly correlated with the virality of content; the influence of traditional news media versus citizen journalists varies from event to event; iconic single images of an event are easily extracted; and content that will have a long lifespan can be predicted within a day after it first appears. Visual memes can be applied to a number of social media scenarios: brand monitoring, social buzz tracking, and ranking content and users, among others.
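The graph abstraction above (people and content as nodes, meme postings as links) can be illustrated with a toy sketch: draw a diffusion edge from each earlier poster of a meme to each later poster of the same meme, and score an author's influence by out-degree. The postings data and the `diffusion_influence` helper are hypothetical; the paper's graph model and influence measures are considerably richer.

```python
from collections import defaultdict

def diffusion_influence(postings):
    """Toy influence score over a meme-diffusion graph.

    postings: list of (author, meme_id, timestamp) tuples.
    For each meme, every earlier poster gets a diffusion edge to every
    later poster of the same meme; influence is an author's out-degree.
    """
    by_meme = defaultdict(list)
    for author, meme, ts in postings:
        by_meme[meme].append((ts, author))

    influence = defaultdict(int)
    for posts in by_meme.values():
        posts.sort()  # chronological order within each meme
        for i, (_, src) in enumerate(posts):
            for _, dst in posts[i + 1:]:
                if src != dst:
                    influence[src] += 1
    return dict(influence)

posts = [
    ("news_channel", "m1", 0), ("alice", "m1", 5), ("bob", "m1", 9),
    ("alice", "m2", 2), ("bob", "m2", 7),
]
```

Here `news_channel` seeds meme `m1` and so accumulates edges to both later posters, while `bob` only reposts and earns no influence.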


IEEE Transactions on Multimedia | 2013

Tracking Large-Scale Video Remix in Real-World Events

Lexing Xie; Apostol Natsev; Xuming He; John R. Kender; Matthew L. Hill; John R. Smith

Content sharing networks, such as YouTube, contain traces of both explicit online interactions (such as likes, comments, or subscriptions), as well as latent interactions (such as quoting, or remixing, parts of a video). We propose visual memes, or frequently re-posted short video segments, for detecting and monitoring such latent video interactions at scale. Visual memes are extracted by scalable detection algorithms that we develop, with high accuracy. We further augment visual memes with text, via a statistical model of latent topics. We model content interactions on YouTube with visual memes, defining several measures of influence and building predictive models for meme popularity. Experiments are carried out with over 2 million video shots from more than 40,000 videos on two prominent news events in 2009: the election in Iran and the swine flu epidemic. In these two events, a high percentage of videos contain remixed content, and it is apparent that traditional news media and citizen journalists have different roles in disseminating remixed content. We perform two quantitative evaluations for annotating visual memes and predicting their popularity. The proposed joint statistical model of visual memes and words outperforms an alternative concurrence model, with an average error of 2% for predicting meme volume and 17% for predicting meme lifespan.


International Conference on Multimedia and Expo | 2012

Video Event Detection Using Temporal Pyramids of Visual Semantics with Kernel Optimization and Model Subspace Boosting

Noel C. F. Codella; Apostol Natsev; Gang Hua; Matthew L. Hill; Liangliang Cao; Leiguang Gong; John R. Smith

In this study, we present a system for video event classification that generates a temporal pyramid of static visual semantics using minimum-value, maximum-value, and average-value aggregation techniques. Kernel optimization and model subspace boosting are then applied to customize the pyramid for each event. SVM models are independently trained for each level in the pyramid using kernel selection according to 3-fold cross-validation. Kernels that both enforce static temporal order and permit temporal alignment are evaluated. Model subspace boosting is used to select the best combination of pyramid levels and aggregation techniques for each event. The NIST TRECVID Multimedia Event Detection (MED) 2011 dataset was used for evaluation. Results demonstrate that kernel optimizations using both temporally static and dynamic kernels together achieve better performance than any single method alone. In addition, model subspace boosting reduces the size of the model by 80%, while maintaining 96% of the performance gain.
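The temporal pyramid with min/max/average aggregation can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes per-frame concept scores are already available as a `(T, D)` array, and splits the timeline into 1, 2, 4, ... equal segments, pooling each segment three ways.

```python
import numpy as np

def temporal_pyramid(frame_scores, levels=3):
    """Aggregate per-frame semantic scores into a temporal pyramid.

    frame_scores: (T, D) array of D concept scores over T frames.
    Level l splits the timeline into 2**l equal segments; each segment
    is summarized with min, max, and mean pooling. Returns a flat
    feature vector of length (2**levels - 1) * 3 * D.
    """
    feats = []
    for level in range(levels):
        for seg in np.array_split(frame_scores, 2 ** level):
            feats.append(seg.min(axis=0))
            feats.append(seg.max(axis=0))
            feats.append(seg.mean(axis=0))
    return np.concatenate(feats)
```

Level 0 recovers plain whole-video pooling; the deeper levels add the coarse temporal ordering that the paper's kernels then exploit.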


International Conference on Multimedia and Expo | 2010

Design and evaluation of an effective and efficient video copy detection system

Apostol Natsev; Matthew L. Hill; John R. Smith

We consider the end-to-end system design and evaluation of an efficient and effective system for video copy detection that bridges the gap between computationally expensive methods and practical applications. We use a compact SIFT-based bag-of-words fingerprint (which we call a SIFTogram), requiring only 1000 bytes per second of video, and show that beyond the descriptor choice, many variables can affect performance. We also consider a complementary color-based descriptor, which contrary to popular recent belief, performs better than SIFTogram on some transforms. We emphasize robustness with respect to the most common transformations on content sharing sites, and report a 99.3% detection rate with 0 false alarms on one such transform category from a standardized evaluation. We perform an evaluation of the system using two TRECVID benchmark datasets, and examine the trade-off between speed and accuracy relative to other TRECVID submissions.
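The SIFTogram fingerprint is a compact bag-of-words descriptor. A generic sketch of the bag-of-words step, assuming local descriptors and a k-means codebook are already computed (the `siftogram` name and the tiny toy dimensions are illustrative only):

```python
import numpy as np

def siftogram(descriptors, codebook):
    """Bag-of-words fingerprint: quantize local descriptors against a
    visual codebook and return a normalized codeword histogram.

    descriptors: (N, d) local descriptors from a video segment
                 (d = 128 for SIFT).
    codebook:    (K, d) cluster centers, e.g. from k-means.
    """
    # nearest codeword per descriptor (squared Euclidean distance)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Matching segments then reduces to comparing short histograms rather than raw descriptor sets, which is what makes the fingerprint so compact.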


Distributed Event-Based Systems | 2008

Event detection in sensor networks for modern oil fields

Matthew L. Hill; Murray Campbell; Yuan-Chi Chang; Vijay S. Iyengar

We report the experience of implementing event detection analytics to monitor and forewarn of oil production failures in modern, digitized oil fields. Modern oil fields are equipped with thousands of sensors and gauges to measure various physical and chemical characteristics of oil and gas from underground reservoirs to distribution systems. Data from these massive sensor networks weave a picture depicting the state of oil production and potentially hinting at troubles ahead. Continuous streams of sensor readings can be tapped and fed into analytical algorithms in real time to estimate the likelihood of failure events and generate alerts for possible engineering actions. However, the large amount of main memory required to maintain algorithmic states on cumulative stream data poses challenges to today's web-centric, short-message-oriented IT infrastructure. Familiar techniques such as data aggregation, selective sampling and window truncation cannot be applied to some sophisticated algorithms. The paper details our end-to-end solution, points out mismatches with the prevalent transactional web model and suggests new research directions.
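The memory problem the abstract raises is about algorithmic state that grows with the stream. As a point of contrast, the constant-memory baseline looks like this: a running mean/variance (Welford's algorithm) plus a deviation alert, which never buffers raw readings. This is an illustrative baseline only, not the paper's analytics; the paper's point is precisely that some of its algorithms cannot be reduced to such aggregates.

```python
class RunningStats:
    """Constant-memory running mean/variance over a sensor stream
    (Welford's online algorithm): no raw readings are retained."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n else 0.0

def alert(stats, reading, k=3.0):
    """Flag a reading more than k standard deviations from the mean."""
    return abs(reading - stats.mean) > k * stats.variance ** 0.5
```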


Conference on Image and Video Retrieval | 2010

The accuracy and value of machine-generated image tags: design and user evaluation of an end-to-end image tagging system

Lexing Xie; Apostol Natsev; Matthew L. Hill; John R. Smith; Alex Phillips

Automated image tagging is a problem of great interest, due to the proliferation of photo sharing services. Researchers have achieved considerable advances in understanding motivations and usage of tags, recognizing relevant tags from image content, and leveraging community input to recommend more tags. In this work we address several important issues in building an end-to-end image tagging application, including tagging vocabulary design, taxonomy-based tag refinement, classifier score calibration for effective tag ranking, and selection of valuable tags, rather than just accurate ones. We surveyed users to quantify tag utility and error tolerance, and used this data both in calibrating scores from automatic classifiers and in taxonomy-based tag expansion. We also compute the relative importance among tags based on user input and statistics from Flickr. We present an end-to-end system evaluated on thousands of user-contributed photos using 60 popular tags. We can issue four tags per image with over 80% accuracy, up from 50% baseline performance, and we confirm through a comparative user study that value-ranked tags are preferable to accuracy-ranked tags.
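The core idea of value-ranked tags, scoring candidates by calibrated confidence times user-derived utility rather than raw classifier score, can be sketched as below. The sigmoid here merely stands in for per-classifier calibration (the paper fits calibration from survey data), and the tag names and weights are made up for illustration.

```python
import math

def rank_tags(raw_scores, utility, top_k=4):
    """Rank candidate tags by expected value, not raw score.

    raw_scores: {tag: uncalibrated classifier score}
    utility:    {tag: user-derived usefulness weight}
    """
    def calibrated(s):
        # placeholder calibration: map a raw score to (0, 1)
        return 1.0 / (1.0 + math.exp(-s))

    value = {t: calibrated(s) * utility.get(t, 1.0)
             for t, s in raw_scores.items()}
    return sorted(value, key=value.get, reverse=True)[:top_k]
```

Note how a confidently detected but low-utility tag (a watermark, say) can be out-ranked by a slightly less confident but more useful one.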


International Conference on Image Processing | 2001

Solarspire: querying temporal solar imagery by content

Matthew L. Hill; Vittorio Castelli; Chung-Sheng Li; Yuan-Chi Chang; Lawrence D. Bergman; John R. Smith; B. J. Thompson

In this paper, we describe a novel content-based retrieval application which permits astrophysicists to search large image sequence archives for solar phenomena, such as solar flares, based on the spatio-temporal behavior of those phenomena. Specifically, images are preprocessed to identify bright and dark spots based on their relative intensity with respect to their neighboring regions. Temporally persistent objects are then extracted from the collection of spots, and their spatio-temporal behavior is represented as intensity and size time series. Users define a query in terms of a model of spatio-temporal behaviors through a Web-based interface. The stored intensity and size time series are searched, and series segments that match the specified spatio-temporal behavior are returned. Benchmark results based on 2500 satellite images show that the proposed methodology achieves better than 85% accuracy on solar phenomena previously identified by astrophysicists.
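The bright/dark spot preprocessing step can be sketched as a comparison of each pixel against its local neighborhood mean. This is a crude stand-in for the paper's detector, with a hypothetical window size and threshold; it labels pixels well above the local mean as bright (+1) and well below as dark (-1).

```python
import numpy as np

def bright_dark_spots(img, win=3, k=2.0):
    """Label pixels much brighter/darker than their neighborhood.

    Compares each pixel with the mean of its win x win neighborhood;
    +1 = bright spot, -1 = dark spot, 0 = background.
    """
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")

    # local neighborhood mean via shifted sums
    local = np.zeros(img.shape, dtype=float)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            local += padded[pad + dy: pad + dy + img.shape[0],
                            pad + dx: pad + dx + img.shape[1]]
    local /= win * win

    sigma = img.std() or 1.0  # global scale for the threshold
    labels = np.zeros(img.shape, dtype=int)
    labels[img > local + k * sigma] = 1
    labels[img < local - k * sigma] = -1
    return labels
```

Persistent-object tracking would then link such spot labels across consecutive frames into the intensity and size time series described above.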


International Conference on Multimedia and Expo | 2004

Content transcoding middleware for pervasive geospatial intelligence access

Ching-Yung Lin; Apostol Natsev; Belle L. Tseng; Matthew L. Hill; John R. Smith; Chung-Sheng Li

We describe a novel content transcoding middleware for accessing military geospatial intelligence information in real time. Intelligence information, including maps and the location, category and properties of object targets, is adapted for various pervasive devices such as laptops, personal digital assistants (PDAs), and cellular phones. The middleware is deployed as proxies on the Web using the IBM WebSphere Transcoding Publisher (WTP) platform, which facilitates middleware management. We developed several Java-based plug-ins and Extensible Stylesheet Language (XSL) stylesheets for content transcoding. A prototype has been established, and experiments have demonstrated the effectiveness of this novel middleware.


International Conference on Image Processing | 2013

Large-scale video event classification using dynamic temporal pyramid matching of visual semantics

Noel C. F. Codella; Gang Hua; Liangliang Cao; Michele Merler; Leiguang Gong; Matthew L. Hill; John R. Smith

Video event classification and retrieval has recently emerged as a challenging research topic. In addition to the variation in appearance of visual content and the large scale of the collections to be analyzed, this domain presents new and unique challenges in the modeling of the explicit temporal structure and implicit temporal trends of content within the video events. In this study, we present a technique for video event classification that captures temporal information over semantics using a scalable and efficient modeling scheme. An architecture for partitioning videos into a linear temporal pyramid, using segments of equal length and segments determined by the patterns of the underlying data, is applied over a rich underlying semantic description at the frame level using a taxonomy of nearly 1000 concepts containing 500,000 training images. Forward model selection with data bagging is used to prune the space of temporal features and data for efficiency. The system is implemented in the Hadoop MapReduce environment for arbitrary scalability. Our method is applied to the TRECVID Multimedia Event Detection 2012 task. Results demonstrate a significant boost in performance of over 50%, in terms of mean average precision, compared to common max or average pooling, and 17.7% compared to more complex pooling strategies that ignore temporal content.


Proceedings of SPIE | 1999

SolarSPIRE: a content-based retrieval engine for temporal sequences of solar imagery

Lawrence D. Bergman; Vittorio Castelli; Chung-Sheng Li; E. A. Achuthan; Yuan-Chi Chang; Matthew L. Hill; John R. Smith; B. J. Thompson

In this paper, we present an application designed to permit specification of, and search for, spatio-temporal phenomena in image sequences of the solar surface acquired via satellite. The application is designed to permit space scientists to search archives of imagery for well-defined solar phenomena, including solar flares, search tasks that are not practical to perform manually due to the large data volumes.
