
Publication


Featured research published by Kevin Adistambha.


Multimedia Signal Processing | 2008

Motion classification using Dynamic Time Warping

Kevin Adistambha; Christian Ritz; Ian S. Burnett

Automatic generation of metadata is an important component of multimedia search-by-content systems, as it both avoids the need for manual annotation and minimises subjective descriptions and human errors. This paper explores the automatic attachment of basic descriptions (or 'Tags') to human motion held in a motion-capture database using a dynamic time warping (DTW) approach. The captured motion is held in the Acclaim ASF/AMC format commonly used in game and movie motion capture work, and the approach allows for the comparison and classification of motion from different subjects. The work analyses the bone rotations important to a small set of movements, and results indicate that only a small set of examples is required to perform reliable motion classification.
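The DTW-based matching described in the abstract can be sketched as follows. This is a minimal illustration of the general technique, not the paper's implementation: the one-dimensional "bone rotation" sequences and the nearest-neighbour labelling step are assumptions for demonstration.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D feature
    sequences (e.g. per-frame bone rotation angles). Returns the
    accumulated cost of the best temporal alignment."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify_motion(query, labelled_examples):
    """Nearest-neighbour classification: return the label of the
    example whose DTW distance to the query is smallest."""
    return min(labelled_examples,
               key=lambda ex: dtw_distance(query, ex[1]))[0]

# Toy example: a periodic 'wave' motion vs. a flat 'idle' motion.
wave = np.sin(np.linspace(0, 4 * np.pi, 40))
idle = np.zeros(35)
query = np.sin(np.linspace(0, 4 * np.pi, 50))  # same motion, different speed
print(classify_motion(query, [("wave", wave), ("idle", idle)]))  # -> wave
```

Because DTW aligns sequences of different lengths, the same motion performed at a different speed still matches its class, which is why only a few examples per class can suffice.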


International Symposium on Communications and Information Technologies | 2007

The MPEG-7 query format: a new standard in progress for multimedia query by content

Kevin Adistambha; Mario Doeller; Ruben Tous; Matthias Gruhne; Masanori Sano; Chrisa Tsinaraki; Stavros Christodoulakis; Kyoungro Yoon; Christian Ritz; Ian S. Burnett

In recent years, the amount of Internet-accessible digital audiovisual media has vastly increased, and with it the need to describe the media by way of metadata. MPEG-7 (finalized in 2001) provides a comprehensive and rich metadata standard for the description of multimedia content. Unfortunately, a standardized query format does not exist for MPEG-7 or other multimedia metadata. Such a standard would provide for communication between querying clients and databases, supporting cross-modal and cross-media retrieval. The ISO/IEC SC29/WG11 committee therefore decided to contribute to this application space by adding such functionality as a new part of the MPEG-7 series of standards. In response to a Call for Proposals, six proposals were submitted. This paper describes the strengths of each proposal as well as the resulting draft standard for the MPEG-7 query format.


Archive | 2010

Symbolic modelling of dynamic human motions

David Stirling; Amir S Hesami; Christian Ritz; Kevin Adistambha; Fazel Naghdy

Numerous psychological studies have shown that humans develop various stylistic patterns of motion behaviour, or dynamic signatures, which can be in general, or in some cases uniquely, associated with an individual. In a broad sense, such motion features provide a basis for non-verbal communication (NVC), or body language, and in more specific circumstances they combine to form a Dynamic Finger Print (DFP) of an individual, such as their gait, or walking pattern.

Human gait has been studied scientifically for over a century. Researchers such as Marey (1880) attached white tape to the limbs of a walker dressed in a black body stocking. Later, Braune and Fischer (1904) used a similar approach, but attached light rods to the limbs instead of white tape. Johansson (1973) used Moving Light Displays (MLDs; markers attached to joints or other points of interest) in psychophysical experiments to show that humans can recognize gaits representing different activities such as walking and stair climbing. Humans are able to derive rich and varied information from the different ways in which people walk and move; this study aims at automating that process.

The identification of an individual from his or her biometric information has long been desirable in various applications and remains a challenge. Various methods have been developed in response to this need, including fingerprint and pupil identification, but such methods have proved only partially reliable. Studies in psychology indicate that it is possible to identify an individual through non-verbal gestures, body movements and the way they walk.

A new modelling and classification approach for spatiotemporal human motions, and in particular the walking gait, is proposed. The movements are obtained through a full-body inertial motion capture suit, allowing unconstrained freedom of movement in natural environments. This involves a network of 16 miniature inertial sensors distributed around the body via a suit worn by the individual. Each inertial sensor wirelessly provides multiple streams of measurements of its spatial orientation, together with velocity, acceleration, angular velocity and angular acceleration. These are subsequently transformed and interpreted as features of a dynamic biomechanical model with 23 degrees of freedom (DOF). Compared to traditional optically-based motion capture technologies, this scheme provides an unparalleled array of ground-truth information with which to further model dynamic human motions. Using a subset of the available multidimensional features, several


International Conference on Multimedia and Expo | 2007

MQF: An XML Based Multimedia Query Format

Kevin Adistambha; Christian Ritz; Ian S. Burnett

MQF is a new XML-based format designed to facilitate communication between disparate systems in applications involving multimedia query by content. Currently, no standardized protocol exists that provides flexibility in formulating a query, such as combining any multimedia format (image, video, sound) as query terms with complex query conditions that can utilize a hierarchy of meta-search engines. In this work, we propose MQF as a flexible communication format between a client and server for use in content-based multimedia searching.
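An XML query of the kind described could be assembled as below. This is a hypothetical sketch only: the element names (`MultimediaQuery`, `QueryInput`, `Condition`, and so on) are illustrative and are not taken from the actual MQF specification.

```python
import xml.etree.ElementTree as ET

# Build a hypothetical content-based query: "find items similar to an
# example image, with duration under 60 seconds, return up to 10 results".
# All element and attribute names below are invented for illustration.
query = ET.Element("MultimediaQuery")

inp = ET.SubElement(query, "QueryInput")
ET.SubElement(inp, "MediaResource", {"type": "image", "uri": "example.jpg"})

cond = ET.SubElement(query, "Condition")
ET.SubElement(cond, "LessThan", {"field": "duration", "value": "60"})

ET.SubElement(query, "OutputDescription", {"maxResults": "10"})

print(ET.tostring(query, encoding="unicode"))
```

The point such a format addresses is that a client can mix example media, metadata conditions, and output preferences in one machine-readable document that any conforming server (or meta-search engine) can interpret.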


Archive | 2012

Enhancing multimedia search using human motion

Kevin Adistambha; Stephen J. Davis; Christian Ritz; Ian S. Burnett; David Stirling

Over the last few years, there has been an increase in the number of multimedia-enabled devices (e.g. cameras, smartphones, etc.), and that has led to a vast quantity of multimedia content being shared on the Internet. For example, in 2010 thirteen million hours of video were uploaded to YouTube (http://www.youtube.com). To usefully navigate this vast amount of information, users currently rely on search engines, social networks and dedicated multimedia websites (such as YouTube) to find relevant content. Efficient search of large collections of multimedia requires metadata that is human-meaningful, but multimedia sites currently tend to rely on metadata derived from user-entered tags and descriptions. These are often vague, ambiguous or left blank, which makes searching for video content unreliable or misleading. Furthermore, a large majority of videos contain people, and consequently human movement, which is often not described in the user-entered metadata.


International Conference on Signal Processing and Communication Systems | 2011

Limb-based feature description of human motion

Kevin Adistambha; Stephen J. Davis; Christian Ritz; David Stirling

This paper proposes a novel limb-based technique for semantic description of motion capture data. The goal is to create a motion segmentation and classification technique that is easily extensible by recognizing the actions of individual limbs instead of the whole body. This provides highly detailed metadata that can be extended as needed to include additional motion classes, either by adding a new limb submotion or by defining a new full-body motion class that combines existing known limb movements. Results of an initial implementation for annotating the leg movements (forward and backward) of walking and running show that such a system is feasible, with annotation accuracy of more than 98%.
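The extensibility argument, that full-body classes are composed from recognized limb submotions, could be sketched like this. The label names and the table-lookup composition are assumptions for illustration, not the paper's actual classifier.

```python
# Hypothetical sketch: compose full-body motion classes from per-limb
# submotion labels, so a new class can be added by extending the table
# rather than retraining a whole-body recognizer. Labels are invented.
FULL_BODY_CLASSES = {
    frozenset({"left_leg_forward", "right_leg_backward"}): "walking",
    frozenset({"left_leg_forward_fast", "right_leg_backward_fast"}): "running",
}

def classify_full_body(limb_labels):
    """Map a set of recognized per-limb submotions to a full-body
    motion class, or 'unknown' if no composite class is defined."""
    return FULL_BODY_CLASSES.get(frozenset(limb_labels), "unknown")

print(classify_full_body(["left_leg_forward", "right_leg_backward"]))
```

Adding, say, a "jumping" class would then require only new limb submotion labels and one new table entry, which is the kind of extensibility the abstract claims.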


Computers & Electrical Engineering | 2010

Efficient multimedia query-by-content from mobile devices

Kevin Adistambha; Stephen J. Davis; Christian Ritz; Ian S. Burnett


Archive | 2007

Query streaming for multimedia query by content from mobile devices

Kevin Adistambha; Stephen J. Davis; Christian Ritz; Ian S. Burnett


Archive | 2005

Embedded lossless audio coding using linear prediction and cascade coding

Kevin Adistambha; Christian Ritz; Jason Lukasiak


International Symposium on Communications and Information Technologies | 2012

Toward human motion search using fingerprinting

Kevin Adistambha; Stephen J. Davis; Christian Ritz; David Stirling; Ian S. Burnett

Collaboration


Dive into Kevin Adistambha's collaborations.

Top Co-Authors

Christian Ritz (University of Wollongong)

David Stirling (University of Wollongong)

Jason Lukasiak (University of Wollongong)

Amir S. Hesami (University of Wollongong)

Fazel Naghdy (University of Wollongong)

Mario Doeller (Information Technology University)

Ruben Tous (Polytechnic University of Catalonia)

Chrisa Tsinaraki (Technical University of Crete)