Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Harry W. Agius is active.

Publication


Featured research published by Harry W. Agius.


Journal of Visual Communication and Image Representation | 2008

Video summarisation: A conceptual framework and survey of the state of the art

Arthur G. Money; Harry W. Agius

Video summaries provide condensed and succinct representations of the content of a video stream through a combination of still images, video segments, graphical representations and textual descriptors. This paper presents a conceptual framework for video summarisation derived from the research literature and used as a means for surveying the research literature. The framework distinguishes between video summarisation techniques (the methods used to process content from a source video stream to achieve a summarisation of that stream) and video summaries (outputs of video summarisation techniques). Video summarisation techniques are considered within three broad categories: internal (analyse information sourced directly from the video stream), external (analyse information not sourced directly from the video stream) and hybrid (analyse a combination of internal and external information). Video summaries are considered as a function of the type of content they are derived from (object, event, perception or feature based) and the functionality offered to the user for their consumption (interactive or static, personalised or generic). It is argued that video summarisation would benefit from greater incorporation of external information, particularly user based information that is unobtrusively sourced, in order to overcome longstanding challenges such as the semantic gap and providing video summaries that have greater relevance to individual users.
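
As a rough illustration of the framework's taxonomy, the sketch below (Python, with class and field names chosen here for clarity rather than taken from the paper) tags summarisation techniques by their information source and summaries by their content basis and functionality:

```python
# Illustrative sketch of the survey's taxonomy; names are chosen for clarity
# and are not the paper's own terminology for a data model.
from dataclasses import dataclass
from enum import Enum

class TechniqueType(Enum):
    INTERNAL = "internal"   # analyses information sourced directly from the video stream
    EXTERNAL = "external"   # analyses information not sourced from the stream (e.g. user data)
    HYBRID = "hybrid"       # combines internal and external information

class ContentBasis(Enum):
    OBJECT = "object"
    EVENT = "event"
    PERCEPTION = "perception"
    FEATURE = "feature"

@dataclass
class VideoSummary:
    content_basis: ContentBasis
    interactive: bool        # interactive vs. static presentation
    personalised: bool       # personalised vs. generic
    produced_by: TechniqueType
```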


Displays | 2009

Analysing user physiological responses for affective video summarisation

Arthur G. Money; Harry W. Agius

Video summarisation techniques aim to abstract the most significant content from a video stream. This is typically achieved by processing low-level image, audio and text features which are still quite disparate from the high-level semantics that end users identify with (the ‘semantic gap’). Physiological responses are potentially rich indicators of memorable or emotionally engaging video content for a given user. Consequently, we investigate whether they may serve as a suitable basis for a video summarisation technique by analysing a range of user physiological response measures, specifically electro-dermal response (EDR), respiration amplitude (RA), respiration rate (RR), blood volume pulse (BVP) and heart rate (HR), in response to a range of video content in a variety of genres including horror, comedy, drama, sci-fi and action. We present an analysis framework for processing the user responses to specific sub-segments within a video stream based on percent rank value normalisation. The application of the analysis framework reveals that users respond significantly to the most entertaining video sub-segments in a range of content domains. Specifically, horror content seems to elicit significant EDR, RA, RR and BVP responses, and comedy content elicits comparatively lower levels of EDR, but does seem to elicit significant RA, RR, BVP and HR responses. Drama content seems to elicit less significant physiological responses in general, and both sci-fi and action content seem to elicit significant EDR responses. We discuss the implications this may have for future affective video summarisation approaches.
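
A minimal sketch of the kind of percent-rank normalisation described, written here as an illustration and not the authors' code: each sub-segment's mean response for a given measure is converted to its percent rank within the whole stream.

```python
# Illustrative sketch only: percent-rank normalisation of a physiological
# measure (e.g. EDR or HR) over video sub-segments. Not the paper's code.
import numpy as np

def percent_rank(values):
    """Map each value to its percent rank (0-100) within the sequence."""
    values = np.asarray(values, dtype=float)
    if len(values) < 2:
        return np.zeros_like(values)
    ranks = values.argsort().argsort()            # 0-based rank of each element
    return 100.0 * ranks / (len(values) - 1)

def segment_response(signal, boundaries):
    """Mean response per sub-segment, percent-rank normalised.

    signal:      1-D NumPy array of samples for one measure
    boundaries:  list of (start, end) sample indices for each sub-segment
    """
    means = np.array([signal[s:e].mean() for s, e in boundaries])
    return percent_rank(means)
```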


Multimedia Tools and Applications | 2001

Modeling Content for Semantic-Level Querying of Multimedia

Harry W. Agius; Marios C. Angelides

Many semantic content-based models have been developed for modeling video and audio in order to enable information retrieval based on semantic content. The level of querying of the media depends upon the semantic aspects modeled. This paper proposes a semantic content-based model for semantic-level querying that makes full use of the explicit media structure, objects, spatial relationships between objects, events and actions involving objects, temporal relationships between events and actions, and integration between syntactic and semantic information.
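
The sketch below gives a rough, illustrative picture of the aspects the model captures (objects, events and actions involving objects, and spatial and temporal relationships between them); the names are illustrative and do not come from the paper's schema.

```python
# Illustrative sketch of the modelled aspects; not the paper's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MediaObject:
    name: str                                   # e.g. "presenter", "car"

@dataclass
class Event:
    name: str                                   # e.g. "introduces", "overtakes"
    participants: List[MediaObject] = field(default_factory=list)
    start_frame: int = 0
    end_frame: int = 0

@dataclass
class SpatialRelation:
    subject: MediaObject
    relation: str                               # e.g. "left-of", "above"
    target: MediaObject

@dataclass
class TemporalRelation:
    first: Event
    relation: str                               # e.g. "before", "overlaps"
    second: Event
```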


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2006

An empirical investigation into user navigation of digital video using the VCR-like control set

Chris Crockford; Harry W. Agius

There has been an almost explosive growth in digital video in recent years. The convention for enabling users to navigate digital video is the Video Cassette Recorder-like (VCR-like) control set, which is dictated by the proliferation of media players that embody it, including Windows Media Player and QuickTime. However, there is a dearth of research seeking to understand how users relate to this control set and how useful it actually is in practice. This paper details our empirical investigation of the issue. A digital video navigation system with a VCR-like control set was developed and subsequently used by a large sample of users (n=200), who were required to complete a number of goal-directed navigational tasks. Each user's navigational activity was tracked and recorded automatically by the system. Analysis of the navigational data revealed a range of results concerning how the VCR-like control set both enhanced and limited the user's ability to locate sequences of interest, including a number of searching and browsing strategies that were exploited by the users.
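
As a hedged sketch of the kind of instrumentation such a study implies (illustrative only; the paper does not publish its logging code), each VCR-like control action can be time-stamped and recorded for later analysis of searching and browsing strategies:

```python
# Illustrative sketch of logging VCR-like control actions; not the study's system.
import time
from dataclasses import dataclass
from typing import List

@dataclass
class NavigationEvent:
    action: str            # "play", "pause", "fast_forward", "rewind", "seek", "stop"
    media_position: float  # position in the video stream (seconds)
    wall_clock: float      # when the user issued the action

class NavigationLogger:
    def __init__(self) -> None:
        self.events: List[NavigationEvent] = []

    def record(self, action: str, media_position: float) -> None:
        self.events.append(NavigationEvent(action, media_position, time.time()))
```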


ACM Transactions on Multimedia Computing, Communications, and Applications | 2010

ELVIS: Entertainment-led video summaries

Arthur G. Money; Harry W. Agius

Video summaries present the user with a condensed and succinct representation of the content of a video stream. Usually this is achieved by attaching degrees of importance to low-level image, audio and text features. However, video content elicits strong and measurable physiological responses in the user, which are potentially rich indicators of what video content is memorable to or emotionally engaging for an individual user. This article proposes a technique that exploits such physiological responses to a given video stream by a given user to produce Entertainment-Led VIdeo Summaries (ELVIS). ELVIS is made up of five analysis phases which correspond to the analyses of five physiological response measures: electro-dermal response (EDR), heart rate (HR), blood volume pulse (BVP), respiration rate (RR), and respiration amplitude (RA). Through these analyses, the temporal locations of the most entertaining video subsegments, as they occur within the video stream as a whole, are automatically identified. The effectiveness of the ELVIS technique is verified through a statistical analysis of data collected during a set of user trials. Our results show that ELVIS is more consistent than RANDOM, EDR, HR, BVP, RR and RA selections in identifying the most entertaining video subsegments for content in the comedy, horror/comedy, and horror genres. Subjective user reports also reveal that ELVIS video summaries are comparatively easy to understand, enjoyable, and informative.
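
The sketch below illustrates the general idea of combining per-measure scores to pick the most entertaining sub-segments; the equal weighting and combination rule are assumptions for illustration, not the published ELVIS algorithm.

```python
# Illustrative sketch only; weighting and combination are assumptions,
# not the published ELVIS algorithm.
import numpy as np

def select_entertaining_subsegments(measure_ranks, n_keep):
    """measure_ranks: dict mapping each measure (EDR, HR, BVP, RR, RA) to an
    array of per-sub-segment percent-rank scores; n_keep: summary length."""
    combined = np.vstack(list(measure_ranks.values())).mean(axis=0)  # equal weights (assumption)
    top = np.argsort(combined)[::-1][:n_keep]                        # highest combined scores
    return sorted(top.tolist())                                      # temporal order for playback
```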


Multimedia Systems | 2007

Closing the content-user gap in MPEG-7: the hanging basket model

Harry W. Agius; Marios C. Angelides

MPEG-7 is the leading standard for multimedia metadata creation, providing a vast array of description tools. Nevertheless, there exists a disparity between the content models and user models that may be created by the standard: metadata for content models is wide ranging and structured, but metadata for user models only support the specification of an extremely limited range of preferences where content metadata cannot be fully exploited. This results in a very incomplete mapping of user models to content models which we term the MPEG-7 content-user gap. We present a hanging basket model that closes the content-user gap by considering user models to be isomorphic to content models and thus enabling both models to be represented through MPEG-7 content description tools, specifically those concerned with events, objects and properties.
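
A minimal sketch of the isomorphism idea (illustrative Python, not MPEG-7 notation): the same structure used to describe content in terms of events, objects and properties is reused unchanged for the user model, so user preferences map one-to-one onto content descriptions.

```python
# Illustrative sketch of the content-user isomorphism; not MPEG-7 syntax.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SemanticEntity:
    kind: str                                    # "event", "object" or "property"
    name: str
    properties: Dict[str, str] = field(default_factory=dict)

@dataclass
class Model:
    """Used unchanged for both the content model and the user model."""
    entities: List[SemanticEntity] = field(default_factory=list)

# The user model is simply another Model whose entities describe preferred
# events, objects and properties, so it maps directly onto content models.
user_model = Model([SemanticEntity("event", "goal", {"team": "home"})])
content_model = Model([SemanticEntity("event", "goal", {"team": "home", "minute": "73"})])
```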


Multimedia Systems | 2006

An MPEG-7 scheme for semantic content modelling and filtering of digital video

Marios C. Angelides; Harry W. Agius

Part 5 of the MPEG-7 standard specifies Multimedia Description Schemes (MDS); that is, the format multimedia content models should conform to in order to ensure interoperability across multiple platforms and applications. However, the standard does not specify how the content or the associated model may be filtered. This paper proposes an MPEG-7 scheme which can be deployed for digital video content modelling and filtering. The proposed scheme, COSMOS-7, produces rich and multi-faceted semantic content models and supports a content-based filtering approach that only analyses content relating directly to the preferred content requirements of the user. We present details of the scheme, front-end systems used for content modelling and filtering and experiences with a number of users.
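
A rough sketch of content-based filtering in this spirit (an assumption for illustration, not the COSMOS-7 implementation): only the content-model entries that match the user's preferred semantics are retained for further analysis.

```python
# Illustrative content-based filtering sketch; not the COSMOS-7 implementation.
def filter_content(content_entities, preferred):
    """Keep only content entities whose kind and name match a user preference.

    content_entities, preferred: iterables of objects with .kind and .name
    attributes (e.g. the SemanticEntity sketch above).
    """
    wanted = {(p.kind, p.name) for p in preferred}
    return [e for e in content_entities if (e.kind, e.name) in wanted]
```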


acm symposium on applied computing | 2004

Modelling and filtering of MPEG-7-compliant meta-data for digital video

Harry W. Agius; Marios C. Angelides

The recent MPEG-7 standard specifies a semi-structured meta-data format for open interoperability of multimedia. However, the standard refrains from specifying how the meta-data is to be used or how meta-data inappropriate to user requirements may be filtered out. Consequently, we propose COSMOS-7, which produces structured MPEG-7-compliant meta-data for digital video and enables content-based hybrid filtering of that meta-data.


Archive | 2014

Handbook of Digital Games

Marios C. Angelides; Harry W. Agius

A thorough discussion of the present and future of digital gaming.

People play digital games for many reasons, from entertainment to professional training, but all games share the same basic characteristics. From those basic parameters, gaming professionals manage to create the enormous variety of games on the market today. The Handbook of Digital Games explores the many considerations and variables involved in game creation, including gaming techniques and tools, game play, and game design and development.

A team of recognized gaming experts from around the world shares their thoughts on the different aspects of game creation, providing readers with a deep understanding and insider perspective on the cross-disciplinary aspects of the industry. The fundamentals are discussed, but the emphasis is on emerging theory and technology, with topics including:

Player experience and immersion, including emotion
Automatic content generation and storytelling techniques
Collaboration and social information exchange
Game aesthetics
Simulation of game play and crowds
Collision detection
Networking issues such as synchronization

The book also includes retrospective and ontological examinations of gaming, as well as discussions about mobile game play, spatial game structures, and education-centric gaming. In-game advertising, gender stereotyping, and independent game production are also considered. The Handbook of Digital Games is a robust compilation of the latest information across the entire industry, and a major resource for any gaming professional.


Information & Management | 1997

Desktop video conferencing in the organisation

Harry W. Agius; Marios C. Angelides

While the potential advantages of video conferencing are appealing, the technology has not been implemented in more than a handful of organisations. This is likely to change in the future as video conferencing moves to the desktop. This paper examines the issues surrounding the area and, in particular, attempts to determine the impact of video conferencing on organisations. A framework is provided for understanding desktop video conferencing in the organisational context, and the relative benefits of and problems in the use of desktop video conferencing are discussed. Finally, a number of suggestions are made on how organisations may reconcile the implications of utilising desktop video conferencing technology.

Collaboration


Dive into Harry W. Agius's collaborations.

Top Co-Authors

Chris Creed, University of Birmingham
David Prytherch, Birmingham City University
Russell Beale, University of Birmingham
Shane Walker, Birmingham City University