Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Suzanne Little is active.

Publication


Featured research published by Suzanne Little.


European Conference on Research and Advanced Technology for Digital Libraries | 2002

Dynamic Generation of Intelligent Multimedia Presentations through Semantic Inferencing

Suzanne Little; Joost Geurts; Jane Hunter

This paper first proposes a high-level architecture for semi-automatically generating multimedia presentations by combining semantic inferencing with multimedia presentation generation tools. It then describes a system, based on this architecture, which was developed as a service to run over OAI archives, but is applicable to any repositories containing mixed-media resources described using Dublin Core. By applying an iterative sequence of searches across the Dublin Core metadata published by the OAI data providers, semantic relationships can be inferred between the mixed-media objects which are retrieved. Using predefined mapping rules, these semantic relationships are then mapped to spatial and temporal relationships between the objects. The spatial and temporal relationships are expressed within SMIL files which can be replayed as multimedia presentations. Our underlying hypothesis is that by using automated computer processing of metadata to organize and combine semantically-related objects within multimedia presentations, the system may be able to generate new knowledge by exposing previously unrecognized connections. In addition, the use of multilayered, information-rich multimedia to present the results enables faster and easier information browsing, analysis, interpretation and deduction by the end-user.
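The core step the abstract describes, mapping inferred semantic relationships to SMIL spatial/temporal layout via predefined rules, can be sketched roughly as follows. The relationship names, rule table and media file names here are invented for illustration; the paper's actual rule set is not reproduced.

```python
# Hypothetical sketch: map an inferred semantic relationship between two
# mixed-media objects to a SMIL timing container. The rule names and the
# choice of containers are illustrative assumptions, not the paper's rules.

MAPPING_RULES = {
    "illustrates": "par",  # related objects shown side by side, simultaneously
    "elaborates": "seq",   # one object follows the other in time
}

def to_smil(relationship, src_a, src_b):
    """Wrap two media references in the SMIL container chosen by rule."""
    container = MAPPING_RULES.get(relationship, "seq")  # default: sequential
    return (f"<{container}>"
            f'<img src="{src_a}"/>'
            f'<img src="{src_b}"/>'
            f"</{container}>")

snippet = to_smil("illustrates", "painting.jpg", "sketch.jpg")
print(snippet)
```

A real implementation would emit complete SMIL documents with layout regions and durations; this only shows the rule-lookup idea.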


International Conference on Knowledge Capture | 2005

Evaluating the application of semantic inferencing rules to image annotation

Laura Hollink; Suzanne Little; Jane Hunter

Semantic annotation of digital objects within large multimedia collections is a difficult and challenging task. We describe a method for semi-automatic annotation of images and apply and evaluate it on images of pancreatic cells. By comparing the performance of this approach in the pancreatic cell domain with previous results in the fuel cell domain, we aim to determine characteristics of a domain which indicate that the method will or will not work in that domain. We conclude by describing the types of images and domains in which we can expect satisfactory results with this approach.


European Conference on Research and Advanced Technology for Digital Libraries | 2001

Building and Indexing a Distributed Multimedia Presentation Archive Using SMIL

Jane Hunter; Suzanne Little

This paper proposes an approach to the problem of generating metadata for composite mixed-media digital objects by appropriately combining and exploiting existing knowledge or metadata associated with the individual atomic components which comprise the composite object. Using a distributed collection of multimedia learning objects, we test this proposal by investigating mechanisms for capturing, indexing, searching and delivering digital online presentations using SMIL (Synchronized Multimedia Integration Language). A set of tools has been developed to automate and streamline the construction and fine-grained indexing of a distributed library of digital multimedia presentation objects by applying SMIL to lecture content from both the University of Queensland and Cornell University. Using temporal information which is captured automatically at the time of lecture delivery, the system can automatically synchronize the video of a lecture with the corresponding PowerPoint slides to generate a finely-indexed presentation at minimum cost and effort. This approach enables users to search and retrieve relevant streaming video segments of the lecture based on keyword or free text searches within the slide content. The underlying metadata schema, the metadata processing/generation tools, distributed archive, backend database and the search, browse and playback interfaces which comprise the system are also described in this paper. We believe that the relatively low cost and high speed of development of this apparently sophisticated multimedia archive with rich search capabilities provides evidence to support the validity of our initial proposal.
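The synchronisation idea above, using timestamps captured at lecture delivery to drive slide changes in parallel with the continuously playing video, can be sketched as a small SMIL generator. The file names, timestamps and durations are invented for the example; the paper's actual tooling and schema are not shown.

```python
# Hedged sketch: turn automatically captured slide-change timestamps into
# a SMIL fragment that plays the lecture video in parallel with a timed
# slide sequence. All file names and times here are made up.

slide_times = [(0, "slide01.png"), (95, "slide02.png"), (210, "slide03.png")]
video_len = 300  # total lecture length in seconds (assumed)

# Each slide lasts until the next timestamp (or the end of the video).
ends = [t for t, _ in slide_times[1:]] + [video_len]
slides = "".join(
    f'<img src="{src}" dur="{end - begin}s"/>'
    for (begin, src), end in zip(slide_times, ends)
)
smil = (f'<par><video src="lecture.mp4" dur="{video_len}s"/>'
        f"<seq>{slides}</seq></par>")
print(smil)
```

The `<par>` container keeps the video and the slide `<seq>` running together, which is the mechanism that makes slide-text searches resolvable to video time offsets.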


International Semantic Web Conference | 2004

Rules-by-example: a novel approach to semantic indexing and querying of images

Suzanne Little; Jane Hunter

Images represent a key source of information in many domains and the ability to exploit them through their discovery, analysis and integration by services and agents on the Semantic Web is a challenging and significant problem. To date the semantic indexing of images has concentrated on applying machine-learning techniques to a set of manually-annotated images in order to automatically label images with keywords. In this paper we propose a new hybrid, user-assisted approach, Rules-By-Example (RBE), which is based on a combination of RuleML and Query-By-Example. Our RBE user interface enables domain-experts to graphically define domain-specific rules that can infer high-level semantic descriptions of images from combinations of low-level visual features (e.g., color, texture, shape, size of regions) which have been specified through examples. Using these rules, the system is able to analyze the visual features of any given image from this domain and generate semantically meaningful labels, using terms defined in the domain-specific ontology. We believe that this approach, in combination with traditional solutions, will enable faster, more flexible, cost-effective and accurate semantic indexing of images and hence maximize their potential for discovery, re-use, integration and processing by Semantic Web services, tools and agents.
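The rules-by-example idea, where a domain expert's rule infers a high-level semantic label from combinations of low-level region features, can be illustrated with a toy rule. The feature thresholds and the label vocabulary below are invented; the paper's actual rules are expressed in RuleML against a domain ontology.

```python
# Toy illustration of a domain-expert rule mapping low-level region
# features to a semantic label. Thresholds, feature names and the label
# "membrane" are hypothetical stand-ins for ontology terms.

def infer_label(region):
    """Apply a hand-defined rule to low-level features of an image region."""
    if (region["color"] == "grey"
            and region["texture"] == "smooth"
            and region["area"] > 0.2):
        return "membrane"   # hypothetical domain-ontology term
    return "unknown"

region = {"color": "grey", "texture": "smooth", "area": 0.35}
print(infer_label(region))
```

The RBE interface described above lets experts specify such conditions graphically, by example, rather than writing them by hand.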


International Journal of Web Engineering and Technology | 2005

A framework to enable the semantic inferencing and querying of multimedia content

Jane Hunter; Suzanne Little

Cultural institutions, broadcasting companies, academic, scientific and defence organisations are producing vast quantities of digital multimedia content. With this growth in audiovisual material comes the need for standardised representations encapsulating the rich semantic meaning required to enable the automatic filtering, machine processing, interpretation and assimilation of multimedia resources. Although significant progress has been made in recent years on automatic segmentation and low-level feature recognition for multimedia, generating high-level descriptions automatically remains difficult and manual creation is expensive. Within this paper we describe the application of semantic web technologies to enable the generation of high-level, domain-specific, semantic descriptions of multimedia content from low-level, automatically-extracted features. By applying the knowledge reasoning capabilities provided by ontologies and inferencing rules to large, multimedia data sets generated by scientific research communities, we hope to expedite solutions to the complex scientific problems they face.


International Conference on Multimedia Retrieval | 2013

An information retrieval approach to identifying infrequent events in surveillance video

Suzanne Little; Iveel Jargalsaikhan; Kathy Clawson; Marcos Nieto; Hao Li; Cem Direkoglu; Noel E. O'Connor; Alan F. Smeaton; Bryan W. Scotney; Hui Wang; Jun Liu

This paper presents work on integrating multiple computer vision-based approaches to surveillance video analysis to support user retrieval of video segments showing human activities. Applied computer vision using real-world surveillance video data is an extremely challenging research problem, independently of any information retrieval (IR) issues. Here we describe the issues faced in developing both generic and specific analysis tools and how they were integrated for use in the new TRECVid interactive surveillance event detection task. We present an interaction paradigm and discuss the outcomes from face-to-face end user trials and the resulting feedback on the system from both professionals who manage surveillance video and computer vision or machine learning experts. We propose an information retrieval approach to finding events in surveillance video rather than solely relying on traditional annotation using specifically trained classifiers.


International Conference on Image Processing | 2016

Holistic features for real-time crowd behaviour anomaly detection

Mark Marsden; Kevin McGuinness; Suzanne Little; Noel E. O'Connor

This paper presents a new approach to crowd behaviour anomaly detection that uses a set of efficiently computed, easily interpretable, scene-level holistic features. This low-dimensional descriptor combines two features from the literature, crowd collectiveness and crowd conflict, with two newly developed crowd features: mean motion speed and a new formulation of crowd density. Two different anomaly detection approaches are investigated using these features. When only normal training data is available we use a Gaussian Mixture Model (GMM) for outlier detection. When both normal and abnormal training data are available we use a Support Vector Machine (SVM) for binary classification. We evaluate on two crowd behaviour anomaly detection datasets, achieving state-of-the-art classification performance on the violent-flows dataset as well as better-than-real-time processing performance (40 frames per second).
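The GMM outlier-detection setting described above, fit a mixture model on normal frames only and flag low-likelihood frames, can be sketched in a few lines. The 4-D descriptors here are synthetic stand-ins for the paper's holistic features (collectiveness, conflict, mean speed, density), and the threshold choice is an assumption for illustration.

```python
# Minimal sketch of GMM-based outlier detection on scene-level descriptors.
# Training data are synthetic "normal" frames; anomalies are flagged when
# their log-likelihood falls below a percentile of the training scores.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal_train = rng.normal(0.0, 1.0, size=(500, 4))  # stand-in 4-D descriptors

gmm = GaussianMixture(n_components=2, random_state=0).fit(normal_train)

# Threshold: 1st percentile of training log-likelihoods (an assumed choice).
threshold = np.percentile(gmm.score_samples(normal_train), 1)

test_frames = np.vstack([
    rng.normal(0.0, 1.0, size=(5, 4)),  # frames resembling the normal data
    rng.normal(8.0, 1.0, size=(5, 4)),  # frames far from the normal model
])
is_anomaly = gmm.score_samples(test_frames) < threshold
print(is_anomaly)
```

With labelled abnormal data available, the same descriptors would instead feed a binary SVM, as the abstract notes.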


International Conference on Computer Vision Theory and Applications | 2017

Fully convolutional crowd counting on highly congested scenes

Mark Marsden; Kevin McGuinness; Suzanne Little; Noel E. O'Connor

In this paper we advance the state-of-the-art for crowd counting in high density scenes by further exploring the idea of a fully convolutional crowd counting model introduced by (Zhang et al., 2016). Producing an accurate and robust crowd count estimator using computer vision techniques has attracted significant research interest in recent years. Applications for crowd counting systems exist in many diverse areas including city planning, retail, and, of course, general public safety. Developing a highly generalised counting model that can be deployed in any surveillance scenario with any camera perspective is the key objective for research in this area. Techniques developed in the past have generally performed poorly in highly congested scenes with several thousand people in frame (Rodriguez et al., 2011). Our approach, influenced by the work of (Zhang et al., 2016), consists of the following contributions: (1) A training set augmentation scheme that minimises redundancy among training samples to improve model generalisation and overall counting performance; (2) a deep, single column, fully convolutional network (FCN) architecture; (3) a multi-scale averaging step during inference. The developed technique can analyse images of any resolution or aspect ratio and achieves state-of-the-art counting performance on the Shanghaitech Part B and UCF CC 50 datasets as well as competitive performance on Shanghaitech Part A.
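Contribution (3), the multi-scale averaging step at inference, can be sketched as: run the counting model on the image at several scales and average the resulting count estimates. The "model" below is a deterministic stand-in returning a density map whose sum approximates the count; the scales and count value are invented.

```python
# Illustrative sketch of multi-scale averaging during inference. The real
# system runs a fully convolutional network; fake_density_map is a stand-in
# whose integral plays the role of the predicted count.
import numpy as np

def fake_density_map(image, scale):
    """Stand-in for an FCN forward pass at a given input scale."""
    h, w = int(image.shape[0] * scale), int(image.shape[1] * scale)
    # Constant density whose integral is (roughly) scale-independent.
    return np.full((h, w), 120.0 / (h * w))

image = np.zeros((240, 320))          # dummy input frame
scales = [0.8, 1.0, 1.2]              # assumed inference scales
counts = [fake_density_map(image, s).sum() for s in scales]
estimate = float(np.mean(counts))
print(round(estimate, 1))
```

Averaging over scales smooths out scale-dependent prediction error, which is why a fully convolutional architecture (no fixed input size) makes this step cheap to add.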


International Conference on Image Processing | 2013

Action recognition based on sparse motion trajectories

Iveel Jargalsaikhan; Suzanne Little; Cem Direkoglu; Noel E. O'Connor

We present a method that extracts effective features in videos for human action recognition. The proposed method analyses the 3D volumes along the sparse motion trajectories of a set of interest points from the video scene. To represent human actions, we generate a Bag-of-Features (BoF) model based on extracted features, and finally a support vector machine is used to classify human activities. Evaluation shows that the proposed features are discriminative and computationally efficient. Our method achieves state-of-the-art performance on the standard human action recognition benchmarks, namely the KTH and Weizmann datasets.
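The BoF-plus-SVM pipeline the abstract outlines, cluster local descriptors into a codebook, encode each video as a codeword histogram, then classify, can be sketched as below. The descriptors are random stand-ins rather than real trajectory features, and the class names, cluster count and data sizes are assumptions.

```python
# Hedged sketch of a Bag-of-Features action-recognition pipeline:
# codebook via k-means, histogram encoding per video, linear SVM classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

def video_descriptors(center):
    """Stand-in for trajectory-aligned local descriptors from one video."""
    return rng.normal(center, 0.3, size=(40, 8))

videos = [video_descriptors(c) for c in (0.0, 0.0, 2.0, 2.0)]
labels = [0, 0, 1, 1]  # e.g. hypothetical "walk" vs "run" classes

# Build the visual codebook from all descriptors pooled together.
codebook = KMeans(n_clusters=4, n_init=10, random_state=0).fit(np.vstack(videos))

def encode(desc):
    """Normalised histogram of codeword assignments: the BoF representation."""
    words = codebook.predict(desc)
    return np.bincount(words, minlength=4) / len(words)

X = np.array([encode(v) for v in videos])
clf = LinearSVC().fit(X, labels)
pred = clf.predict(X)
print(pred)
```

The paper's contribution lies in which descriptors feed this pipeline (3D volumes along sparse motion trajectories), not in the BoF machinery itself.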


Multimedia Signal Processing | 2009

Conservation of effort in feature selection for image annotation

Suzanne Little; Stefan M. Rüger

This paper describes an evaluation of a number of subsets of features for the purpose of image annotation using a non-parametric density estimation algorithm (described in [1]). By applying some general recommendations from the literature and through evaluating a range of low-level visual feature configurations and subsets, we achieve an improvement in performance, measured by the mean average precision, from 0.2861 to 0.3800. We demonstrate the significant impact that the choice of visual or low-level features can have on an automatic image annotation system. There is often a large set of possible features that may be used and a corresponding large number of variables that can be configured or tuned for each feature in addition to other options for the annotation approach. Judicious and effective selection of features for image annotation is required to achieve the best performance with the least user design effort. We discuss the performance of the chosen feature subsets in comparison with previous results and propose some general recommendations observed from the work so far.
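The evaluation metric quoted above, mean average precision, is easy to make concrete. The rankings and relevance judgements below are made up purely to show the computation; they are unrelated to the paper's reported 0.2861 and 0.3800 figures.

```python
# Small worked example of mean average precision (MAP) over a set of
# annotation queries. All rankings and relevance flags are invented.

def average_precision(relevant_flags):
    """AP for one ranked result list (True = relevant item at that rank)."""
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(relevant_flags, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at this relevant hit
    return precision_sum / hits if hits else 0.0

# Two hypothetical keyword queries with their ranked relevance judgements.
queries = [
    [True, False, True, False],   # AP = (1/1 + 2/3) / 2
    [False, True, True, False],   # AP = (1/2 + 2/3) / 2
]
mean_ap = sum(average_precision(q) for q in queries) / len(queries)
print(round(mean_ap, 4))  # → 0.7083
```

A feature subset that lifts MAP from 0.2861 to 0.3800, as reported above, is therefore improving the average ranking quality across all annotation keywords, not the accuracy of any single label.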

Collaboration


Dive into Suzanne Little's collaborations.

Top Co-Authors

Jane Hunter
University of Queensland

Ovidio Salvetti
Istituto di Scienza e Tecnologie dell'Informazione

Cem Direkoglu
University of Southampton