Sorin Vasile Sav
Dublin City University
Publications
Featured research published by Sorin Vasile Sav.
conference on image and video retrieval | 2006
Sorin Vasile Sav; Gareth J. F. Jones; Hyowon Lee; Noel E. O'Connor; Alan F. Smeaton
Object-based retrieval is a modality for video retrieval based on segmenting objects from video and allowing end-users to use these objects as part of querying. In this paper we describe an empirical TRECVid-like evaluation of object-based search, and compare it with a standard image-based search in an interactive experiment with 24 search topics and 16 users, each performing 12 search tasks on 50 hours of rushes video. This experiment attempts to measure the impact of object-based search on a corpus of video where textual annotation is not available.
workshop on image analysis for multimedia interactive services | 2003
Noel E. O'Connor; Tomasz Adamek; Sorin Vasile Sav; Noel Murphy; Seán Marlow
In this paper we present an overview of an ongoing collaborative project in the field of video object segmentation and tracking. The objective of the project is to develop a flexible modular software architecture that can be used as a test-bed for segmentation algorithms. The background to the project is described, as is the first version of the software system itself. Some sample results for the first segmentation algorithm developed using the system are presented, and directions for future work are discussed.
international conference on acoustics, speech, and signal processing | 2002
Janko Calic; Sorin Vasile Sav; Ebroul Izquierdo; Seán Marlow; Noel Murphy; Noel E. O'Connor
The extensive amount of media coverage available today makes it difficult to identify and select desired information. Browsing and retrieval systems are increasingly necessary in order to support users with powerful and easy-to-use tools for searching, browsing and summarisation of information content. The starting point for these tasks in video browsing and retrieval systems is the low-level analysis of video content, especially the segmentation of video content into shots. This paper presents a fast and efficient way to detect shot changes using only the temporal distribution of macroblock types in MPEG compressed video. The notion of a dominant reference frame is introduced here. A dominant frame denotes the reference frame (I or P) used as prediction reference for most of the macroblocks of a subsequent B frame.
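The dominant-reference-frame idea can be sketched in a few lines. In this illustrative sketch (not the authors' implementation), each B frame is represented by the list of reference-frame indices its macroblocks were predicted from; a shot change is flagged when most macroblocks of a B frame predict from a future reference. All names and the input representation are assumptions.

```python
# Compressed-domain shot-change detection sketch based on macroblock
# prediction statistics. `b_frame_mb_refs[i]` is assumed to hold, for the
# B frame at display index i, the reference-frame index each of its
# macroblocks was predicted from.
from collections import Counter

def dominant_reference(mb_refs):
    """Return the reference-frame index used by most macroblocks of a B frame."""
    return Counter(mb_refs).most_common(1)[0][0]

def detect_shot_changes(b_frame_mb_refs):
    """Flag a shot change when the dominant reference lies in the future:
    most macroblocks then resemble the upcoming shot, not the current one."""
    changes = []
    for i, mb_refs in enumerate(b_frame_mb_refs):
        if dominant_reference(mb_refs) > i:
            changes.append(i)
    return changes
```

Because only macroblock-type statistics are inspected, no frame needs to be fully decoded, which is what makes this class of method fast on compressed video.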
international symposium on multimedia | 2009
Paul Ferguson; Cathal Gurrin; Hyowon Lee; Sorin Vasile Sav; Alan F. Smeaton; Noel E. O'Connor; Yoon-Hee Choi; Hee-seon Park
In this paper we describe how content-based analysis techniques can be used to provide much greater functionality to the users of an interactive TV (iTV) device. We describe several content-based multimedia analysis techniques and how some of these can be exploited in the iTV domain, resulting in the provision of a set of powerful functions for iTV users. To validate our ideas, we introduce an iTV application we developed which incorporates some of these techniques into a simple set of user features, in order to demonstrate the usefulness of content-based techniques for iTV. The contribution of this paper is not to provide an in-depth discussion on each of the individual content-based techniques, but rather to show how many of these powerful technologies can be incorporated into an interactive TV system.
IEEE MultiMedia | 2010
Marcin Grzegorzek; Sorin Vasile Sav; Noel E. O'Connor; Ebroul Izquierdo
This article presents a system for texture-based probabilistic classification and localization of 3D objects in 2D digital images and discusses selected applications.
international conference on image analysis and recognition | 2008
Michael Blighe; Sorin Vasile Sav; Hyowon Lee; Noel E. O'Connor
We describe the prototype of an interactive, web-based, museum artifact search and information service. Mo Musaem Fioruil clusters and indexes images of museum artifacts taken by visitors to the museum, where the images are captured using a passive capture device such as Microsoft's SenseCam [1]. The system also matches clustered artifacts to images of the same artifact from the museum's official photo collection and allows the user to view images of the same artifact taken by other visitors to the museum. This matching process potentially allows the system to provide more detailed information about a particular artifact to the user based on their inferred preferences, thereby greatly enhancing the user's overall museum experience. In this work, we introduce the system and describe, in broad terms, its overall functionality and use. Using different image sets of artificial museum objects, we also describe experiments and results carried out in relation to the artifact matching component of the system.
Proceedings of SPIE, the International Society for Optical Engineering | 2005
Sorin Vasile Sav; Hyowon Lee; Alan F. Smeaton; Noel E. O'Connor; Noel Murphy
Video retrieval is mostly based on using text from dialogue and this remains the most significant component, despite progress in other aspects. One problem with this is when a searcher wants to locate video based on what is appearing in the video rather than what is being spoken about. Alternatives such as automatically-detected features and image-based keyframe matching can be used, though these still need further improvement in quality. One other modality for video retrieval is based on segmenting objects from video and allowing end-users to use these as part of querying. This uses similarity between query objects and objects from video, and in theory allows retrieval based on what is actually appearing on-screen. The main hurdles to greater use of this are the overhead of object segmentation on large amounts of video and the issue of whether we can actually achieve effective object-based retrieval. We describe a system to support object-based video retrieval where a user selects example video objects as part of the query. During a search a user builds up a set of these which are matched against objects previously segmented from a video library. This match is based on MPEG-7 Dominant Colour, Shape Compaction and Texture Browsing descriptors. We use a user-driven semi-automated segmentation process to segment the video archive which is very accurate and is faster than conventional video annotation.
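The descriptor-based matching step can be sketched as follows. This is an illustrative outline only, not the authors' implementation: three plain feature vectors stand in for the MPEG-7 colour, shape and texture descriptors, and the distance weights are assumptions.

```python
# Hypothetical sketch of object matching over multiple descriptors.
# Each object is a dict with 'colour', 'shape' and 'texture' feature
# vectors; the overall distance is a weighted sum of per-descriptor
# Euclidean distances (weights are illustrative).
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def object_distance(query, candidate, weights=(0.5, 0.3, 0.2)):
    """Weighted combination of per-descriptor distances; lower = more similar."""
    keys = ("colour", "shape", "texture")
    return sum(w * euclidean(query[k], candidate[k])
               for w, k in zip(weights, keys))

def rank_objects(query, library):
    """Return library objects sorted by ascending distance to the query."""
    return sorted(library, key=lambda obj: object_distance(query, obj))
```

In a real MPEG-7 system each descriptor type defines its own (non-Euclidean) matching function, so the per-descriptor distance would be swapped accordingly.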
content based multimedia indexing | 2008
James Carmichael; Martha Larson; Jennifer Marlow; Eamonn Newman; Paul D. Clough; Johan Oomen; Sorin Vasile Sav
This paper describes a multimedia multimodal information access sub-system (MIAS) for digital audio-visual documents, typically presented in streaming media format. The system is designed to provide both professional and general users with entry points into video documents that are relevant to their information needs. In this work, we focus on the information needs of multimedia specialists at a Dutch cultural heritage institution with a large multimedia archive. A quantitative and qualitative assessment is made of the efficiency of search operations using our multimodal system and it is demonstrated that MIAS significantly facilitates information retrieval operations when searching within a video document.
european conference on information retrieval | 2006
Alan F. Smeaton; Gareth J. F. Jones; Hyowon Lee; Noel E. O'Connor; Sorin Vasile Sav
Recent years have seen the development of different modalities for video retrieval. The most common of these are (1) to use text from speech recognition or closed captions, (2) to match keyframes using image retrieval techniques like colour and texture [6] and (3) to use semantic features like “indoor”, “outdoor” or “persons”. Of these, text-based retrieval is the most mature and useful, while image-based retrieval using low-level image features usually depends on matching keyframes rather than whole-shots. Automatic detection of video concepts is receiving much attention and as progress is made in this area we will see consequent impact on the quality of video retrieval. In practice it is the combination of these techniques which realises the most useful, and effective, video retrieval as shown by us repeatedly in TRECVid [5].
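The combination of modalities mentioned above is often realised as a weighted linear fusion of per-modality result scores. The following is a minimal sketch under that assumption; the min-max normalisation, weights and names are illustrative, not taken from the paper.

```python
# CombSUM-style score fusion sketch: each modality (text, keyframe image,
# semantic concepts) contributes a {shot_id: score} dict, scores are
# min-max normalised, then combined with per-modality weights.

def normalise(scores):
    """Min-max normalise a {shot_id: score} dict into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = hi - lo or 1.0
    return {k: (v - lo) / span for k, v in scores.items()}

def fuse(modality_scores, weights):
    """Weighted sum of normalised scores across modalities.
    `modality_scores` maps modality name -> {shot_id: score};
    returns shot ids ranked by fused score, best first."""
    fused = {}
    for name, scores in modality_scores.items():
        for shot, s in normalise(scores).items():
            fused[shot] = fused.get(shot, 0.0) + weights[name] * s
    return sorted(fused, key=fused.get, reverse=True)
```

Giving the mature text modality the largest weight, as the passage suggests, typically dominates the final ranking while the image and concept scores act as tie-breakers.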
advanced concepts for intelligent vision systems | 2005
Sorin Vasile Sav; Hyowon Lee; Noel E. O’Connor; Alan F. Smeaton
In this paper we present an interactive, object-based video retrieval system which features a novel query formulation method that is used to iteratively refine an underlying model of the search object. As the user continues query composition and browsing of retrieval results, the system’s object modeling process, based on Gaussian probability distributions, becomes incrementally more accurate, leading to better search results. To make the interactive process understandable and easy to use, a custom user-interface has been designed and implemented that allows the user to interact with segmented objects in formulating a query, in browsing a search result, and in re-formulating a query by selecting an object in the search result.
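The incremental object-modelling idea can be sketched with a running per-dimension Gaussian: each example object the user selects updates a mean and variance estimate (here via Welford's online algorithm), so the model sharpens as relevance feedback accumulates. All identifiers are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of iterative Gaussian object modelling from user-selected
# examples. Feature dimensions are treated as independent Gaussians whose
# mean/variance are updated online with Welford's algorithm.
import math

class GaussianObjectModel:
    def __init__(self, dim):
        self.n = 0
        self.mean = [0.0] * dim
        self.m2 = [0.0] * dim  # running sum of squared deviations

    def update(self, features):
        """Fold one user-selected example object into the model."""
        self.n += 1
        for i, x in enumerate(features):
            delta = x - self.mean[i]
            self.mean[i] += delta / self.n
            self.m2[i] += delta * (x - self.mean[i])

    def log_likelihood(self, features, eps=1e-6):
        """Score a candidate object: per-dimension Gaussian log-likelihood."""
        ll = 0.0
        for i, x in enumerate(features):
            var = self.m2[i] / max(self.n - 1, 1) + eps
            ll += -0.5 * (math.log(2 * math.pi * var)
                          + (x - self.mean[i]) ** 2 / var)
        return ll
```

Candidates from the segmented library would then be ranked by `log_likelihood`, with each new query refinement tightening the variances and so the ranking.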