David Hannah
University of Glasgow
Publications
Featured research published by David Hannah.
International Conference on Multimedia and Expo | 2009
Ioannis Arapakis; Yashar Moshfeghi; Hideo Joho; Reede Ren; David Hannah; Joemon M. Jose
Over the years, recommender systems have been systematically applied in both industry and academia to assist users in dealing with information overload. One of the factors that determine the performance of a recommender system is user feedback, which has traditionally been communicated through explicit and implicit feedback techniques. In this paper, we propose a novel video search interface that predicts the topical relevance of a video by analysing affective aspects of user behaviour. We furthermore present a method for incorporating such affective features into user profiling, to facilitate the generation of meaningful recommendations of unseen videos. Our experiment shows that multimodal interaction features are a promising way to improve recommendation performance.
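The abstract does not describe the implementation, but the general idea can be sketched as folding an affect-derived relevance prediction into a simple term-weight user profile. Everything below (the profile representation, the learning rate, the scoring) is an illustrative assumption, not the authors' method.

# Hypothetical sketch: combining an affect-based relevance prediction with a
# term-weight user profile for recommending unseen videos.
from collections import defaultdict

def update_profile(profile, video_terms, affective_relevance, learning_rate=0.1):
    """Increase profile weights for a watched video's terms, scaled by the
    topical relevance predicted from affective user behaviour (a value in
    [0, 1] assumed to come from some behaviour classifier)."""
    for term in video_terms:
        profile[term] += learning_rate * affective_relevance
    return profile

def recommend(profile, candidates, top_k=5):
    """Rank unseen videos by the overlap of their terms with the profile."""
    scored = [(sum(profile.get(t, 0.0) for t in terms), video_id)
              for video_id, terms in candidates.items()]
    return [vid for _, vid in sorted(scored, reverse=True)[:top_k]]

profile = defaultdict(float)
update_profile(profile, {"football", "goal", "highlights"}, affective_relevance=0.8)
update_profile(profile, {"cooking", "pasta"}, affective_relevance=0.1)
print(recommend(profile, {"v1": {"football", "match"}, "v2": {"pasta", "recipe"}}))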
Information Interaction in Context | 2008
Hideo Joho; David Hannah; Joemon M. Jose
Search interfaces are mainly designed to support a single searcher at a time. We therefore have a limited understanding of how an interface can support search where more than one searcher concurrently pursues a shared information need. This paper investigates the performance and user behaviour of concurrent search. Based on a recall-oriented search task, a user study was carried out to compare an independent search condition with collaborative search conditions. The results show that the collaborative conditions helped searchers diversify their search vocabulary while reducing the number of redundant documents bookmarked within teams. However, these effects were insufficient to improve retrieval effectiveness. We discuss the implications of our findings for concurrent search support.
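The kind of team-level measures implied here, shared query vocabulary and duplicated bookmarks, can be illustrated with a rough sketch; the metric definitions and data below are assumptions for illustration, not the ones used in the study.

# Illustrative (assumed) team-level measures for two concurrent searchers.
def vocabulary_overlap(queries_a, queries_b):
    """Jaccard overlap of the query terms issued by two team members."""
    terms_a = {t for q in queries_a for t in q.lower().split()}
    terms_b = {t for q in queries_b for t in q.lower().split()}
    union = terms_a | terms_b
    return len(terms_a & terms_b) / len(union) if union else 0.0

def redundant_bookmarks(bookmarks_a, bookmarks_b):
    """Documents bookmarked by both members, i.e. duplicated team effort."""
    return set(bookmarks_a) & set(bookmarks_b)

print(vocabulary_overlap(["wildlife africa", "africa safari"],
                         ["safari animals", "lion habitat"]))
print(redundant_bookmarks({"d1", "d4", "d7"}, {"d4", "d9"}))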
Information Processing and Management | 2010
Martin Halvey; David Vallet; David Hannah; Yue Feng; Joemon M. Jose
There are a number of multimedia tasks and environments that can be collaborative in nature and involve contributions from more than one individual. Examples of such tasks include organising photographs or videos from multiple people at a large event, students working together to complete a class project, or artists and/or animators working on a production. Despite this, current state-of-the-art applications created to assist in multimedia search and organisation focus on a single user searching alone and do not take into consideration the collaborative nature of a large number of multimedia tasks. The limited work on collaborative search for multimedia applications has concentrated mostly on synchronous, and quite often co-located, collaboration between persons. However, these collaborative scenarios are not always practical or feasible. In order to overcome these shortcomings we have created an innovative system for online video search, which provides mechanisms for groups of users to collaborate both asynchronously and remotely on video search tasks. To evaluate our system, a user evaluation was conducted. This evaluation simulated multiple conditions and scenarios for collaboration, varying awareness, division of labour, sense-making and persistence. The outcome of this evaluation demonstrates the benefit and usability of our system for asynchronous and remote collaboration between users. In addition, the results of this evaluation provide a comparison between implicit and explicit collaboration in the same search system.
European Conference on Information Retrieval | 2009
Hideo Joho; David Hannah; Joemon M. Jose
This paper revisits some of the established Information Retrieval (IR) techniques to investigate effective collaborative search strategies. We devised eight search strategies that divided labour and shared knowledge in teams using relevance feedback and clustering. We evaluated the performance of the strategies with a user simulation enhanced by a query-pooling method. Our results show that relevance feedback is successful at formulating effective collaborative strategies, while further effort is needed for clustering. We also measured the extent to which additional team members improved performance, and the effect of search progress on that improvement.
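The abstract does not spell out the eight strategies, but a standard Rocchio-style relevance feedback step, one of the classic IR techniques being revisited, gives a feel for how judged documents could be shared within a team. The vector representation, weights, and the idea of pooling judgements across members are assumptions for illustration, not the paper's design.

# Sketch of Rocchio relevance feedback over simple term-weight vectors.
from collections import Counter

def rocchio(query_vec, relevant_docs, nonrelevant_docs,
            alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query towards relevant documents and away from non-relevant
    ones; in a collaborative setting the judged sets could pool every team
    member's feedback (an assumption, not the paper's method)."""
    new_query = Counter({t: alpha * w for t, w in query_vec.items()})
    for doc in relevant_docs:
        for t, w in doc.items():
            new_query[t] += beta * w / max(len(relevant_docs), 1)
    for doc in nonrelevant_docs:
        for t, w in doc.items():
            new_query[t] -= gamma * w / max(len(nonrelevant_docs), 1)
    return {t: w for t, w in new_query.items() if w > 0}

q = {"collaborative": 1.0, "search": 1.0}
rel = [{"collaborative": 0.5, "team": 0.8}, {"search": 0.4, "strategy": 0.6}]
nonrel = [{"engine": 0.9}]
print(rocchio(q, rel, nonrel))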
European Conference on Information Retrieval | 2009
Martin Halvey; P. Punitha; David Hannah; Robert Villa; Frank Hopfgartner; Anuj Goyal; Joemon M. Jose
In this paper we present a number of methods for re-ranking video search results in order to introduce diversity into the set of search results. The usefulness of these approaches is evaluated in comparison with similarity-based measures, for the TRECVID 2007 collection and tasks [11]. In terms of the MAP of the search results, we find that some of our approaches perform as well as similarity-based methods. We also find that some of these approaches can improve P@N for the lower values of N. The most successful of these approaches was then implemented in an interactive search system for the TRECVID 2008 interactive search tasks. The responses from the users indicate that they find the more diverse search results extremely useful.
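The paper's specific re-ranking methods are not detailed in this abstract; the sketch below uses a generic maximal-marginal-relevance style diversification purely to illustrate the idea of trading relevance against redundancy, together with the P@N measure mentioned above. The similarity function, the lambda weight, and the toy data are placeholders.

# Generic MMR-style diversification sketch (not necessarily the paper's methods).
def mmr_rerank(results, similarity, lam=0.7, top_k=10):
    """results: list of (doc_id, relevance_score); similarity(a, b) in [0, 1]."""
    remaining = dict(results)
    selected = []
    while remaining and len(selected) < top_k:
        def mmr_score(item):
            doc_id, rel = item
            max_sim = max((similarity(doc_id, s) for s in selected), default=0.0)
            return lam * rel - (1 - lam) * max_sim
        best_id, _ = max(remaining.items(), key=mmr_score)
        selected.append(best_id)
        del remaining[best_id]
    return selected

def precision_at_n(ranked, relevant, n):
    """P@N: fraction of the top n results that are relevant."""
    return sum(1 for d in ranked[:n] if d in relevant) / n

# Toy similarity: results sharing a first letter are treated as near-duplicates.
toy_sim = lambda a, b: 1.0 if a[0] == b[0] else 0.0
ranking = mmr_rerank([("a1", 0.9), ("a2", 0.85), ("b1", 0.8)], toy_sim, top_k=3)
print(ranking, precision_at_n(ranking, {"a1", "b1"}, 2))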
ACM/IEEE Joint Conference on Digital Libraries | 2009
Martin Halvey; David Vallet; David Hannah; Joemon M. Jose
In this paper, we present ViGOR (Video Grouping, Organisation and Retrieval), a video retrieval system that allows users to group videos in order to facilitate video retrieval tasks. In this way users are able to visualise and conceptualise many aspects of their search tasks and carry out a localised search in order to solve a more global search problem. The main objective of this work is to aid users while carrying out explorative video retrieval tasks; these tasks can often be ambiguous and multi-faceted. Two user evaluations were carried out in order to evaluate the usefulness of this grouping paradigm for assisting users. The first evaluation involved users carrying out broad tasks on YouTube, and gave insights into the application of our interface to a vast online video collection. The second evaluation involved users carrying out focused tasks on the TRECVID 2007 video collection, allowing a comparison over a local collection from which we could extract a number of content-based features. Our evaluations show that the ViGOR system increases user performance and satisfaction, demonstrating the potential of a grouping paradigm for video search across a range of tasks and diverse video collections.
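The abstract does not specify how a group drives a localised search; one plausible mechanism, sketched here purely as an assumption rather than ViGOR's actual design, is to treat a group as the centroid of its members' content features and rank the collection against that centroid.

# Illustrative sketch: using a user-defined group of videos as a query.
import math

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    dims = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dims)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_by_group(group_vectors, collection):
    """collection: dict of video_id -> feature vector, ranked against the group centroid."""
    centre = centroid(group_vectors)
    return sorted(collection, key=lambda vid: cosine(collection[vid], centre), reverse=True)

group = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
collection = {"v1": [0.85, 0.1, 0.05], "v2": [0.1, 0.9, 0.3]}
print(rank_by_group(group, collection))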
Human Factors in Computing Systems | 2012
Graham A. Wilson; David Hannah; Stephen A. Brewster; Martin Halvey
This paper presents initial results from the design and evaluation of one-handed squeezing of a mobile phone: the application of force by each individual digit, and combinations of digits, of one hand as a means of interacting with a mobile device. As part of the evaluation we also consider how to alter the size of the interaction space to best suit the number of digits being used. By identifying which digits can accurately apply force both individually and in combination with others, we can then design one-handed, multi-channel input for mobile interaction. The results suggest that not all digits are equally accurate, and that some are more accurate when used in combination with others. Further, increasing the size of the underlying interaction space to suit the number of digits used improves user performance.
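As a purely illustrative sketch of the idea of per-digit pressure input with an interaction space sized to the number of digits in use, the quantisation below maps a raw force reading to a discrete level; the thresholds, force range, and level counts are invented for illustration and are not taken from the study.

# Hypothetical sketch: quantising a per-digit force reading into discrete input
# levels, with fewer (coarser) levels assumed when more digits are used at once.
def levels_for_digits(num_digits, max_levels=6):
    """Assumed mapping from number of active digits to number of force levels."""
    return max(2, max_levels - (num_digits - 1))

def quantise_force(force_newtons, num_digits, max_force=4.0):
    """Map a raw force reading in [0, max_force] newtons to a level index."""
    n_levels = levels_for_digits(num_digits)
    clamped = min(max(force_newtons, 0.0), max_force)
    return min(int(clamped / max_force * n_levels), n_levels - 1)

print(quantise_force(2.5, num_digits=1))  # finer levels with a single digit
print(quantise_force(2.5, num_digits=3))  # coarser levels with three digits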
European Conference on Interactive TV | 2011
David Hannah; Martin Halvey; Graham A. Wilson; Stephen A. Brewster
We investigate the use of a mobile device to provide multifunctional input and output for a stereoscopic 3D television (TV) display. Through a number of example applications, we demonstrate how a combination of gestural and haptic input (touch and pressure) can be successfully deployed to allow the user to navigate a complex information space (multimedia and TV content), while at the same time visual and haptic (thermal and vibrotactile) feedback can be used to provide additional information to the user, enriching the experience. Finally, we discuss our future work exploring the potential of this idea to allow multi-device and multimodal browsing of 3D TV and multimedia.
International ACM SIGIR Conference on Research and Development in Information Retrieval | 2008
Hideo Joho; David Hannah; Joemon M. Jose
Generating query-biased summaries can take up a large part of the response time of interactive information retrieval (IIR) systems. This paper proposes to use document titles as an alternative to queries in the generation of summaries. The use of document titles allows us to pre-generate summaries statically, and thus, improve the response speed of IIR systems. Our experiments suggest that title-biased summaries are a promising alternative to query-biased summaries.
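The general idea can be sketched as scoring sentences against the document title instead of the (unknown) query, so that the summary can be generated once, offline. The tokenisation, sentence splitting, and overlap scoring below are assumed choices, not the paper's implementation.

# Sketch of pre-generating a title-biased summary.
import re

def title_biased_summary(title, text, max_sentences=2):
    """Pick the sentences that share the most terms with the document title,
    then return them in their original order."""
    title_terms = set(re.findall(r"\w+", title.lower()))
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    def score(sentence):
        return len(set(re.findall(r"\w+", sentence.lower())) & title_terms)
    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in ranked)

doc_title = "Collaborative video search"
doc_text = ("We study video search. Teams collaborate on shared tasks. "
            "The weather was fine.")
print(title_biased_summary(doc_title, doc_text))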
ACM/IEEE Joint Conference on Digital Libraries | 2010
Robert Villa; Martin Halvey; Hideo Joho; David Hannah; Joemon M. Jose
Developing methods for searching image databases is a challenging and ongoing area of research. A common approach is to use manual annotations, although generating annotations can be expensive in terms of time and money, and therefore may not be justified in many situations. Content-based search techniques which extract visual features from image data can be used, but users are typically forced to express their information need using example images or through sketching interfaces. This can be difficult if no visual example of the information need is available, or when the information need cannot easily be drawn. In this paper, we consider an alternative approach which allows a user to search for images through an intermediate database. In this approach, a user can search the intermediate database using text as a way of finding visual examples of their information need. The visual examples can then be used to search a database that lacks annotations. Three experiments are presented which investigate this process. The first experiment automatically selects the image queries from the intermediary database; the second instead uses images which have been hand-picked by users. A third experiment, an interactive study, is then presented; this study compares the intermediary interface to text search, where we consider text search an upper bound on performance. For this last study, an interface which supports the intermediary search process is described. Results show that while performance does not match that of manual annotations, users are able to find relevant material without requiring collection annotations.
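The intermediary pipeline described above can be sketched schematically: a text query selects example images from an annotated intermediate collection, and those examples' visual features then query an unannotated target collection. The feature representation, matching functions, and data below are placeholders, not the paper's implementation.

# Schematic sketch of intermediary image search.
def text_search(annotated_db, query, top_k=3):
    """annotated_db: dict image_id -> {'tags': set, 'features': vector}."""
    q_terms = set(query.lower().split())
    scored = sorted(annotated_db.items(),
                    key=lambda item: len(q_terms & item[1]["tags"]), reverse=True)
    return [img_id for img_id, _ in scored[:top_k]]

def visual_search(target_db, example_features, top_k=5):
    """Rank unannotated images by distance to the mean of the example features."""
    dims = len(example_features[0])
    query_vec = [sum(f[i] for f in example_features) / len(example_features)
                 for i in range(dims)]
    def dist(vec):
        return sum((a - b) ** 2 for a, b in zip(vec, query_vec))
    return sorted(target_db, key=lambda img_id: dist(target_db[img_id]))[:top_k]

annotated = {"i1": {"tags": {"beach", "sunset"}, "features": [0.9, 0.2]},
             "i2": {"tags": {"city", "night"}, "features": [0.1, 0.8]}}
target = {"t1": [0.85, 0.25], "t2": [0.2, 0.9]}
examples = [annotated[i]["features"]
            for i in text_search(annotated, "beach sunset", top_k=1)]
print(visual_search(target, examples))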