Philip J. McParlane
University of Glasgow
Publications
Featured research published by Philip J. McParlane.
international conference on multimedia retrieval | 2015
Soumyadeb Chowdhury; Philip J. McParlane; Md. Sadek Ferdous; Joemon M. Jose
Lifelogging devices, which seamlessly gather various data about a user as they go about their daily life, have resulted in users amassing large collections of noisy photographs (e.g. visual duplicates, image blur), which are difficult to navigate, especially if they want to review their day in photographs. Social media websites, such as Facebook, have faced a similar information overload problem, for which a number of summarization methods have been proposed (e.g. news story clustering, comment ranking etc.). In particular, Facebook's Year in Review received much user interest, where the objective for the model was to identify key moments in a user's year, offering an automatic visual summary based on their uploaded content. In this paper, we follow this notion by automatically creating a review of a user's day using lifelogging images. Specifically, we address the quality issues faced by the photographs taken on lifelogging devices and attempt to create visual summaries by promoting visual and temporal-spatial diversity in the top ranks. Conducting two crowdsourced evaluations based on 9k images, we show the merits of combining time, location and visual appearance for summarization purposes.
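The diversity-promoting ranking described above can be illustrated with a simple greedy selection that prefers photos far from the already-chosen ones in time, location and visual appearance. This is an illustrative sketch only: the distance functions, weights and photo fields (`ts`, `loc`, `feat`) are placeholder assumptions, not the paper's actual model.

```python
import math

def diverse_summary(photos, k=5, w_time=0.4, w_geo=0.3, w_vis=0.3):
    """Greedily pick k photos, scoring each candidate by its minimum
    weighted distance (time, geo, visual) to the photos already
    selected. Sketch under assumed data fields, not the paper's model."""
    selected = [photos[0]]          # seed with the first photo
    rest = list(photos[1:])
    while rest and len(selected) < k:
        def novelty(p):
            # distance to the *closest* already-selected photo
            return min(
                w_time * abs(p["ts"] - s["ts"]) / 3600.0
                + w_geo * math.dist(p["loc"], s["loc"])
                + w_vis * math.dist(p["feat"], s["feat"])
                for s in selected
            )
        best = max(rest, key=novelty)  # most novel candidate
        selected.append(best)
        rest.remove(best)
    return selected
```

Taking the per-candidate minimum over the selected set (rather than the average) is the usual way to avoid picking two photos that are each far from most of the summary but near-duplicates of each other.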
international acm sigir conference on research and development in information retrieval | 2013
Philip J. McParlane; Yashar Moshfeghi; Joemon M. Jose
Image tagging is a growing application on social media websites; however, the performance of many auto-tagging methods is often poor. Recent work has exploited an image's context (e.g. time and location) in the tag recommendation process, where tags which co-occur highly within a given time interval or geographical area are promoted. These models, however, fail to address how and when different image contexts can be combined. In this paper, we propose a weighted tag recommendation model, building on an existing state-of-the-art model, which varies the importance of time and location in the recommendation process based on a given set of input tags. By retrieving more temporally and geographically relevant tags, we achieve statistically significant improvements to recommendation accuracy when testing on 519k images collected from Flickr. The result of this paper is an important step towards more effective image annotation and retrieval systems.
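The weighted combination of temporal and geographic evidence can be sketched as follows. This is a minimal illustration, not the paper's model: the co-occurrence structures are hypothetical (each maps an input tag to counts of co-occurring tags), and `time_weight`, which the paper derives from the input tags themselves, is simply passed in here.

```python
from collections import Counter

def recommend_tags(input_tags, temporal_cooc, geo_cooc, time_weight, k=5):
    """Score candidate tags by a weighted mix of temporal and
    geographic co-occurrence with the user's input tags.
    Illustrative sketch; data structures are assumptions."""
    scores = Counter()
    for tag in input_tags:
        for cand, n in temporal_cooc.get(tag, {}).items():
            scores[cand] += time_weight * n
        for cand, n in geo_cooc.get(tag, {}).items():
            scores[cand] += (1.0 - time_weight) * n
    for tag in input_tags:          # never recommend the inputs back
        scores.pop(tag, None)
    return [t for t, _ in scores.most_common(k)]
```

Varying `time_weight` per query is the key idea: for a tag set like "christmas" time matters more, while for "eiffeltower" location should dominate.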
conference on multimedia modeling | 2014
Philip J. McParlane; Yashar Moshfeghi; Joemon M. Jose
This paper highlights a number of problems which exist in the evaluation of existing image annotation and tag recommendation methods. Crucially, the collections used by these state-of-the-art methods contain a number of biases which may be exploited or detrimental to their evaluation, resulting in misleading results. In total we highlight seven issues for three popular annotation evaluation collections, i.e. Corel5k, ESP Game and IAPR, as well as three issues with collections used in two state-of-the-art photo tag recommendation methods. The result of this paper is two freely available Flickr image collections designed for the fair evaluation of image annotation and tag recommendation methods, called Flickr-AIA and Flickr-PTR respectively. We show through experimentation and demonstration that these collections are ultimately fairer benchmarks than existing collections.
european conference on information retrieval | 2013
Philip J. McParlane; Joemon M. Jose
Existing automatic image annotation (AIA) models that depend solely on low-level image features often produce poor results, particularly when annotating real-life collections. Tag co-occurrence has been shown to improve image annotation by identifying additional keywords associated with user-provided keywords. However, existing approaches have treated tag co-occurrence as a static measure over time, thereby ignoring the temporal trends of many tags. The temporal distributions of tags, caused by events, seasons, memes, etc., however, provide a strong source of evidence beyond keywords for AIA. In this paper we propose a temporal tag co-occurrence approach to improve upon the current state-of-the-art automatic image annotation model. By replacing the annotated tags with more temporally significant tags, we achieve statistically significant increases to annotation accuracy on a real-life timestamped image collection from Flickr.
conference on multimedia modeling | 2013
Philip J. McParlane; Stewart Whiting; Joemon M. Jose
Existing automatic image annotation (AIA) systems that depend solely on low-level image features often produce poor results, particularly when annotating real-life collections. Tag co-occurrence has been shown to improve image annotation by identifying additional keywords associated with user-provided keywords. However, existing approaches have treated tag co-occurrence as a static measure over time, thereby ignoring the temporal trends of many tags. The temporal distributions of tags, caused by events, seasons, memes, etc., however, provide a strong source of evidence beyond keywords for AIA. In this paper we propose a temporal tag co-occurrence approach to improve AIA accuracy. By segmenting collection tags into multiple co-occurrence matrices, each covering an interval of time, we are able to give precedence to tags which not only co-occur with each other, but also have temporal significance. We evaluate our approach on a real-life timestamped image collection from Flickr by performing experiments over a number of temporal interval sizes. Results show statistically significant improvements to annotation accuracy compared to a non-temporal co-occurrence baseline.
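The segmentation of tag co-occurrence into per-interval matrices could be sketched as below. This is one illustrative reading of the approach under assumed inputs (each image as a `(timestamp, tags)` pair), not the authors' released code.

```python
from collections import defaultdict, Counter
from itertools import combinations

def build_interval_matrices(images, interval_seconds):
    """Build one tag co-occurrence matrix per time interval, so that
    later lookups can favour tag pairs that co-occur *within* the
    same interval. Images are (unix_timestamp, [tags]) pairs."""
    matrices = defaultdict(lambda: defaultdict(Counter))
    for ts, tags in images:
        bucket = ts // interval_seconds          # which interval this image falls in
        for a, b in combinations(sorted(set(tags)), 2):
            matrices[bucket][a][b] += 1          # symmetric counts
            matrices[bucket][b][a] += 1
    return matrices

def cooccurrence(matrices, bucket, tag, other):
    """Co-occurrence count of two tags within a given interval."""
    return matrices.get(bucket, {}).get(tag, {}).get(other, 0)
```

With daily intervals, for example, "fireworks" and "newyear" would co-occur strongly only in the matrices around 1 January, rather than being diluted across a single static collection-wide matrix.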
international acm sigir conference on research and development in information retrieval | 2014
Philip J. McParlane; Joemon M. Jose
With the rise in popularity of smart phones, taking and sharing photographs has never been more openly accessible. Further, photo sharing websites, such as Flickr, have made the distribution of photographs easy, resulting in an increase in the volume of visual content uploaded online. Due to the laborious nature of annotating images, however, a large percentage of these images are unannotated, making their organisation and retrieval difficult. Therefore, there has been a recent research focus on the automatic and semi-automatic process of annotating these images. Despite the progress made in this field, however, annotating images automatically based on their visual appearance often results in unsatisfactory suggestions, and as a result these models have not been adopted in photo sharing websites. Many methods have therefore looked to exploit new sources of evidence for annotation purposes, such as image context. In this demonstration, we instead explore the scenario of annotating images taken at large-scale events, where evidence can be extracted from a wealth of online textual resources. Specifically, we present a novel tag recommendation system for images taken at a popular music festival which allows the user to select relevant tags from related Tweets and Wikipedia content, thus reducing the workload involved in the annotation process.
international acm sigir conference on research and development in information retrieval | 2015
Martin Halvey; Philip J. McParlane; Joemon M. Jose; Keith van Rijsbergen; Stefan Rueger; R. Manmatha; Mohan S. Kankanhalli
ICMR was initially started as a workshop on challenges in image retrieval (in Newcastle in 1998) and later transformed into the Conference on Image and Video Retrieval (CIVR) series. In 2011 the CIVR and the ACM Workshop on Multimedia Information Retrieval were combined into a single conference that now forms the ICMR series. The 4th ACM International Conference on Multimedia Retrieval took place in Glasgow, Scotland, from 1–4 April 2014. This was the largest edition of ICMR to date, with approximately 170 attendees from 25 different countries. ICMR is one of the premier scientific conferences for multimedia retrieval held worldwide, with the stated mission "to illuminate the state of the art in multimedia retrieval by bringing together researchers and practitioners in the field of multimedia retrieval." According to the Chinese Computing Federation Conference Ranking (2013), ACM ICMR is the number one multimedia retrieval conference worldwide and the number four conference in the category of multimedia and graphics. Although ICMR is about multimedia retrieval, in a wider sense it is also about automated multimedia understanding. Much of the work in that area involves the analysis of media on a pixel, voxel, and wavelet level, but it also involves innovative retrieval, visualisation and interaction paradigms utilising the nature of the multimedia, be it video, images, speech, or more abstract (sensor) data. The conference aims to promote intellectual exchanges and interactions among scientists, engineers, students, and multimedia researchers in academia as well as industry through various events, including a keynote talk, oral, special and poster sessions focused on research challenges and solutions, technical and industrial demonstrations of prototypes, tutorials, and an industrial panel. In the remainder of this report we will summarise the events that took place at the 4th ACM ICMR conference.
international conference on multimedia retrieval | 2014
Philip J. McParlane; Yashar Moshfeghi; Joemon M. Jose
conference on information and knowledge management | 2014
Philip J. McParlane; Andrew James McMinn; Joemon M. Jose
international acm sigir conference on research and development in information retrieval | 2014
Philip J. McParlane; Joemon M. Jose