Publications

Featured research published by Uma Srinivasan.


ACM Multimedia | 2001

Automatic detection of 'Goal' segments in basketball videos

Surya Nepal; Uma Srinivasan; Graham J. Reynolds

Advances in the media and entertainment industries, for example streaming audio and digital TV, present new challenges for managing large audio-visual collections. Efficient and effective retrieval from large content collections forms an important component of the business models for content holders, and this is driving a need for research in audio-visual search and retrieval. Current content management systems support retrieval using low-level features, such as motion, colour, texture, beat and loudness. However, low-level features often have little meaning for the human users of these systems, who much prefer to identify content using high-level semantic descriptions or concepts. This creates a gap between the system and the user that must be bridged for these systems to be used effectively. The research presented in this paper describes our approach to bridging this gap in a specific content domain, sports video. Our approach is based on a number of automatic techniques for feature detection used in combination with heuristic rules determined through manual observations of sports footage. This has led to a set of models for interesting sporting events (goal segments) that have been implemented as part of an information retrieval system. The paper also presents results comparing the output of the system against manually identified goals.


ACM Multimedia | 2000

TV anytime as an application scenario for MPEG-7

Silvia Pfeiffer; Uma Srinivasan

The ISO/MPEG group has identified a wide range of application scenarios [1] for their emerging MPEG-7 standard on audio-visual metadata. TV Anytime with their vision of future digital TV services [2] encompasses a large number of them. As TV Anytime has also identified metadata as one of the key requirements to realize their vision, MPEG-7 is the natural candidate to fill that role. Here, we describe technically how metadata for the TV Anytime scenario can be created using MPEG-7.


Journal of the Association for Information Science and Technology | 2000

Managing heterogeneous information systems through discovery and retrieval of generic concepts

Uma Srinivasan; Anne H. H. Ngu; Tamas Gedeon

Autonomy of operations combined with decentralized management of data gives rise to a number of heterogeneous databases or information systems within an enterprise. These systems are often incompatible in structure as well as content and, hence, difficult to integrate. Despite this heterogeneity, the unity of overall purpose within a common application domain nevertheless provides a degree of semantic similarity that manifests itself in the form of similar data structures and common usage patterns of existing information systems. This article introduces a conceptual integration approach that exploits the similarity in metalevel information in existing systems and performs metadata mining on database objects to discover a set of concepts that serve as a domain abstraction and provide a conceptual layer above existing legacy systems. This conceptual layer is further utilized by an information reengineering framework that customizes and packages information to reflect the unique needs of different user groups within the application domain. The architecture of the information reengineering framework is based on an object-oriented model that represents the discovered concepts as customized application objects for each distinct user group.


Statistical and Scientific Database Management | 2004

A service oriented architecture for a health research data network

Kerry Taylor; Christine M. O'Keefe; John Colton; Rohan A. Baxter; Ross Sparks; Uma Srinivasan; Mark A. Cameron; Laurent Lefort

This paper reports on an architecture aimed at providing a technology platform for a new research facility, called the Health Research Data Network (HRDN). Two key features distinguish HRDN from other service oriented architectures for distributed data sharing and analysis: custodial control over access and use of resources, and confidentiality protection integrated into a secure end-to-end system for data sharing and analysis.


Multimedia Tools and Applications | 2005

A Survey of MPEG-1 Audio, Video and Semantic Analysis Techniques

Uma Srinivasan; Silvia Pfeiffer; Surya Nepal; Michael H. Lee; Lifang Gu; Stephen Barrass

Digital audio & video data have become an integral part of multimedia information systems. To reduce storage and bandwidth requirements, they are commonly stored in a compressed format, such as MPEG-1. Increasing amounts of MPEG encoded audio and video documents are available online and in proprietary collections. In order to effectively utilise them, we need tools and techniques to automatically analyse, segment, and classify MPEG video content. Several techniques have been developed both in the audio and visual domain to analyse videos. This paper presents a survey of audio and visual analysis techniques on MPEG-1 encoded media that are useful in supporting a variety of video applications. Although audio and visual feature analyses have been carried out extensively, they become useful to applications only when they convey a semantic meaning of the video content. Therefore, we also present a survey of works that provide semantic analysis on MPEG-1 encoded videos.


Published by IRM Press, Hershey | 2005

Managing Multimedia Semantics

Uma Srinivasan; Surya Nepal

Section 1: Semantic Indexing and Retrieval of Images
Section 2: Audio and Video Semantics: Models and Standards
Section 3: User-Centric Approach to Manage Semantics
Section 4: Managing Distributed Multimedia
Section 5: Emergent Semantics


Discovery Science | 1999

A Multi-Model Framework for Video Information Systems

Uma Srinivasan; Craig A. Lindley; Bill Simpson-Young

In order to develop Video Information Systems (VIS) such as digital video libraries, video-on-demand systems and video synthesis applications, we need to understand the semantics of video data and have appropriate schemes to store, retrieve and present this data. In this paper, we present an integrated multi-model framework for designing VIS applications that accommodates semantic representation and supports a variety of forms of content-based retrieval. The framework includes a functional component to represent video and audio analysis functions, a hypermedia component for video delivery and presentation, and a data management component to manage multi-modal queries for continuous media. A metamodel is described for representing video semantics at several levels. Finally, we describe a case study, the FRAMES project, which utilises the multi-model framework to develop specific VIS applications.


Database and Expert Systems Applications | 1998

Query semantics for content-based retrieval of video data: an empirical investigation

Craig A. Lindley; Uma Srinivasan

To facilitate content-based retrieval of video data, the FRAMES project has developed a comprehensive framework for the representation of video semantics based upon film semiotics. An empirical image interpretation experiment has been devised for investigating this model. The results of the experiment validate distinctions embodied in the model, as well as demonstrating preliminary effects of prompts on the levels of semantics that a non-specialist audience reads into a set of test images. The theoretical framework presented forms the basis of the FRAMES demonstrator for content based retrieval of video and dynamic virtual video synthesis.


Lecture Notes in Computer Science | 1999

Multi-modal Feature-map: An Approach to Represent Digital Video Sequences

Uma Srinivasan; Craig A. Lindley

Video sequences retrieved from a database need to be presented in a compact, meaningful way in order to enable users to understand and visualise the contents presented. In this paper we propose a visual representation that exploits the multi-modal content of video sequences by representing retrieved video sequences with a set of multi-modal feature-maps arranged in temporal order. The feature-map is a collage represented as a visual icon that shows: the perceptual content, such as a key-frame image; the cinematic content, such as the type of camera work; the type of auditory information present in the sequence; and temporal information showing the duration of the sequence and its offset within the video.


International Conference on Multimedia and Expo | 2003

Adaptive video highlights for wired and wireless platforms

Surya Nepal; Uma Srinivasan

Many new applications, such as pervasive healthcare, pervasive infotainment and pervasive training, require relevant video segments (i.e. video highlights) to be delivered to a variety of devices operating on both wired and wireless platforms. This brings in a requirement for (a) a way to describe the content as well as the environment, such as the device and network in which it is to be delivered, and (b) a method to adapt the content for the environment and the application. The main aim of the adaptation process is to maximize the user's viewing experience while minimizing the resources required. This involves identifying relevant segments within a video and scaling down the video in different dimensions so that it can be delivered to end users anytime, anywhere, on any device. Towards this, we have developed a prototype system for delivery of adaptive video (DAVE). In this paper, we explain how DAVE delivers video highlights on both wired and wireless platforms. Our focus is on the content and environment description and content adaptation aspects of DAVE.

Collaboration

Top co-authors of Uma Srinivasan:

Surya Nepal, Commonwealth Scientific and Industrial Research Organisation
Bill Simpson-Young, Commonwealth Scientific and Industrial Research Organisation
Craig A. Lindley, Commonwealth Scientific and Industrial Research Organisation
Graham J. Reynolds, Commonwealth Scientific and Industrial Research Organisation
Michael H. Lee, Commonwealth Scientific and Industrial Research Organisation
Silvia Pfeiffer, Commonwealth Scientific and Industrial Research Organisation
Christine M. O'Keefe, Commonwealth Scientific and Industrial Research Organisation
John Colton, Commonwealth Scientific and Industrial Research Organisation
Kerry Taylor, Australian National University