Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Aaron J. Quigley is active.

Publication


Featured research published by Aaron J. Quigley.


human factors in computing systems | 2006

Tabletop sharing of digital photographs for the elderly

Trent Apted; Judy Kay; Aaron J. Quigley

We have recently begun to see hardware support for the tabletop user interface, offering a number of new ways for humans to interact with computers. Tabletops offer great potential for face-to-face social interaction; advances in touch technology and computer graphics provide natural ways to directly manipulate virtual objects, which we can display on the tabletop surface. Such an interface has the potential to benefit a wide range of the population, and it is important that we design for usability and learnability with diverse groups of people. This paper describes the design of SharePic -- a multiuser, multi-touch, gestural, collaborative digital photograph sharing application for a tabletop -- and our evaluation with both young adult and elderly user groups. We describe the guidelines we have developed for the design of tabletop interfaces for a range of adult users, including elders, and the user interface we have built based on them. Novel aspects of the interface include a design strongly influenced by the metaphor of physical photographs placed on the table, with interaction techniques designed to be easy to learn and easy to remember. In our evaluation, we gave users the final task of creating a digital postcard from a collage of photographs and performed a realistic think-aloud with pairs of novice participants learning together from a tutorial script.


graph drawing | 2000

FADE: Graph Drawing, Clustering, and Visual Abstraction

Aaron J. Quigley; Peter Eades

A fast algorithm (FADE) for the 2D drawing, geometric clustering and multilevel viewing of large undirected graphs is presented. The algorithm is an extension of the Barnes-Hut hierarchical space decomposition method, which includes edges and multilevel visual abstraction. Compared to the original force-directed algorithm, the time overhead is O(e + n log n), where n and e are the numbers of nodes and edges. The improvement is possible because the decomposition tree provides a systematic way to determine the degree of closeness between nodes without explicitly calculating the distance between each pair. Different types of regular decomposition trees are introduced. The decomposition tree also represents a hierarchical clustering of the nodes, which improves in a graph-theoretic sense as the graph drawing approaches a lower energy state. Finally, the decomposition tree provides a mechanism to view the hierarchical clustering at various levels of abstraction: larger graphs can be represented more concisely, on a higher level of abstraction, with fewer graphics on screen.
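The Barnes-Hut idea that FADE builds on can be sketched in a few lines. This is an illustrative toy implementation, not the authors' code: a quadtree over node positions lets the repulsive force on a point be approximated against cell centroids rather than every individual node, which is where the n log n term in the running time comes from. All names here (`QuadTree`, `repulsion`, the `theta` opening angle) are assumptions for the sketch.

```python
import math

class QuadTree:
    """Toy quadtree for Barnes-Hut-style force approximation (illustrative only)."""

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # cell origin and side length
        self.mass = 0                            # number of points in this cell
        self.cx = self.cy = 0.0                  # centroid of contained points
        self.node = None                         # stored point while a leaf
        self.children = None

    def insert(self, px, py):
        # Update the running centroid for this cell.
        self.cx = (self.cx * self.mass + px) / (self.mass + 1)
        self.cy = (self.cy * self.mass + py) / (self.mass + 1)
        self.mass += 1
        if self.mass == 1:
            self.node = (px, py)
            return
        if self.children is None:
            h = self.size / 2
            self.children = [QuadTree(self.x + dx * h, self.y + dy * h, h)
                             for dy in (0, 1) for dx in (0, 1)]
            old, self.node = self.node, None
            self._child_for(*old).insert(*old)   # push the stored point down
        self._child_for(px, py).insert(px, py)

    def _child_for(self, px, py):
        h = self.size / 2
        return self.children[(2 if py >= self.y + h else 0) +
                             (1 if px >= self.x + h else 0)]

def repulsion(tree, px, py, theta=0.9):
    """Approximate total repulsive force on (px, py) from all points in tree."""
    if tree.mass == 0:
        return (0.0, 0.0)
    dx, dy = px - tree.cx, py - tree.cy
    d = math.hypot(dx, dy) or 1e-9
    # Leaf, or a cell far enough away: treat it as one pseudo-node at its centroid.
    if tree.children is None or tree.size / d < theta:
        f = tree.mass / (d * d)
        return (f * dx / d, f * dy / d)
    fx = fy = 0.0
    for c in tree.children:
        cfx, cfy = repulsion(c, px, py, theta)
        fx, fy = fx + cfx, fy + cfy
    return (fx, fy)
```

Each node would feel this approximate repulsion plus exact spring forces along its edges, which is what keeps the per-iteration cost at O(e + n log n) rather than O(n^2).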


ubiquitous computing | 2009

A taxonomy for and analysis of multi-person-display ecosystems

Lucia Terrenghi; Aaron J. Quigley; Alan Dix

Interactive displays are increasingly distributed across a broad spectrum of everyday environments: they have very diverse form factors and portability characteristics, support a variety of interaction techniques, and can be used by a variable number of people. The coupling of multiple displays creates an interactive “ecosystem of displays”. Such an ecosystem suits particular social contexts, which in turn generate novel settings for communication and performance, as well as challenges of ownership. This paper aims to provide a design space that can inform the designers of such ecosystems. To this end, we provide a taxonomy built on two dimensions: the size of the ecosystem and the degree of individual engagement. We identify areas where physical constraints imply certain kinds of social engagement, versus areas where further work on interaction techniques for coupling displays can open new design spaces.


ubiquitous computing | 2007

MEMENTO: a digital-physical scrapbook for memory sharing

David West; Aaron J. Quigley; Judy Kay

The act of reminiscence is an important element of many interpersonal activities, especially for elders where the therapeutic benefits are well understood. Individuals typically use various objects as memory aids in the act of recalling, sharing and reviewing their memories of life experiences. Through a preliminary user study with elders using a cultural probe, we identified that a common memory aid is a photo album or scrapbook in which items are collected and preserved. In this article, we present and discuss a novel interface to our memento system that can support the creation of scrapbooks that are both digital and physical in form. We then provide an overview of the user’s view of memento and a brief description of its multi-agent architecture. We report on a series of exploratory user studies in which we evaluate the effect and performance of memento and its suitability in supporting memory sharing and dissemination with physical–digital scrapbooks. Taking account of the current technical limitations of memento, our results show a general approval and suitability of our system as an appropriate interaction scheme for the creation of physical–digital items such as scrapbooks.


Information Visualization | 2011

Effective temporal graph layout: a comparative study of animation versus static display methods

Michael Farrugia; Aaron J. Quigley

Graph drawing algorithms have classically addressed the layout of static graphs. However, the need to draw evolving or dynamic graphs has brought into question many of the assumptions, conventions and layout methods designed to date. For example, social scientists studying evolving social networks have created a demand for visual representations of graphs changing over time. Two common approaches to represent temporal information in graphs include animation of the network and use of static snapshots of the network at different points in time. Here, we report on two experiments, one in a laboratory environment and another using an asynchronous remote web-based platform, Mechanical Turk, to compare the efficiency of animated displays versus static displays. Four tasks are studied with each visual representation, where two characterise overview level information presentation, and two characterise micro level analytical tasks. For the tasks studied in these experiments and within the limits of the experimental system, the results of this study indicate that static representations are generally more effective particularly in terms of time performance, when compared to fully animated movie representations of dynamic networks.


advanced visual interfaces | 2012

The cost of display switching: a comparison of mobile, large display and hybrid UI configurations

Umar Rashid; Miguel A. Nacenta; Aaron J. Quigley

Attaching a large external display can help a mobile device user view more content at once. This paper reports on a study investigating how different configurations of input and output across displays affect performance, subjective workload and preferences in map, text and photo search tasks. Experimental results show that a hybrid configuration, in which visual output is distributed across displays, performs worst or equal to worst in all tasks. A mobile device-controlled large display configuration performs best in the map search task and equal to best in the text and photo search tasks (tied with a mobile-only configuration). After conducting a detailed analysis of the performance differences across the UI configurations, we give recommendations for the design of distributed user interfaces.


BMC Bioinformatics | 2008

A relation based measure of semantic similarity for Gene Ontology annotations

Brendan Sheehan; Aaron J. Quigley; Benoit Gaudin; Simon Dobson

Background: Various measures of semantic similarity of terms in bio-ontologies such as the Gene Ontology (GO) have been used to compare gene products. Such measures of similarity have been used to annotate uncharacterized gene products and to group gene products into functional groups. There are various ways to measure semantic similarity: using the topological structure of the ontology, using the instances (gene products) associated with terms, or a mixture of both. We focus on an instance-level definition of semantic similarity while using the information contained in the ontology, both the graphical structure of the ontology and the semantics of relations between terms, to provide constraints on our instance-level description. Semantic similarity of terms is extended to annotations by various approaches, either through aggregation operations such as min, max and average or through an extrapolative method. These approaches introduce assumptions about how semantic similarity of terms relates to the semantic similarity of annotations that do not necessarily reflect how terms relate to each other.

Results: We exploit the semantics of relations in the GO to construct an algorithm called SSA that provides the basis of a framework that naturally extends instance-based methods of semantic similarity of terms, such as Resnik's measure, to describing annotations and not just terms. Our measure attempts to correctly interpret how terms combine via their relationships in the ontological hierarchy. SSA uses these relationships to identify the most specific common ancestors between terms. We outline the set of cases in which terms can combine and associate partial-order constraints with each case that order the specificity of terms. These cases form the basis of the SSA algorithm. The set of associated constraints also provides a set of principles that any improvement on our method should seek to satisfy.

Conclusion: We derive a measure of semantic similarity between annotations that exploits all available information without introducing assumptions about the nature of the ontology or data. We preserve the principles underlying instance-based methods of semantic similarity of terms at the annotation level. As a result, our measure better describes the information contained in annotations associated with gene products and is thus better suited to characterizing and classifying gene products through their annotations.
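Resnik's instance-based measure, the term-level starting point that SSA extends to whole annotations, can be illustrated with a toy ontology. This is a hedged sketch: the terms, the `parents` map and the `counts` table below are hypothetical illustrations, not the GO itself, and the code shows only the classical measure, not the SSA algorithm. The similarity of two terms is the information content (IC) of their most informative common ancestor.

```python
import math

def ancestors(term, parents):
    """The term itself plus every ancestor reachable via is-a edges."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(parents.get(t, ()))
    return seen

def resnik(t1, t2, parents, counts, total):
    """Resnik similarity: IC of the most informative common ancestor.

    counts[t] is the cumulative number of annotations at term t or any of
    its descendants, so IC(t) = -log(counts[t] / total); specific terms
    annotate fewer products and therefore carry more information.
    """
    common = ancestors(t1, parents) & ancestors(t2, parents)
    return max(-math.log(counts[t] / total) for t in common)
```

In a toy hierarchy where root covers terms A (with children A1, A2) and B, resnik scores A1 against A2 by the IC of A, while A1 against B falls back to the root, whose IC is zero; aggregating such term scores over two annotation sets (min, max, average) is exactly the step the article argues introduces unwarranted assumptions.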


human factors in computing systems | 2015

MultiFi: Multi Fidelity Interaction with Displays On and Around the Body

Jens Grubert; Matthias Heinisch; Aaron J. Quigley; Dieter Schmalstieg

Display devices on and around the body such as smartwatches, head-mounted displays or tablets enable users to interact on the go. However, diverging input and output fidelities of these devices can lead to interaction seams that can inhibit efficient mobile interaction, when users employ multiple devices at once. We present MultiFi, an interactive system that combines the strengths of multiple displays and overcomes the seams of mobile interaction with widgets distributed over multiple devices. A comparative user study indicates that combined head-mounted display and smartwatch interfaces can outperform interaction with single wearable devices.


visual analytics science and technology | 2008

Cell phone Mini Challenge: Node-link animation award animating multivariate dynamic social networks

Michael Farrugia; Aaron J. Quigley

This article describes the visualization tool developed for analysing a dynamic social network of phone calls for the VAST 2008 mini challenge. The tool was designed to highlight temporal changes in the network by animating different network visual representations. We also explain how animating these network representations helped to identify key events in the mini challenge problem scenario. Finally, we make some suggestions for future research and development in the area.


intelligent user interfaces | 2014

SpiderEyes: designing attention- and proximity-aware collaborative interfaces for wall-sized displays

Jakub Dostal; Uta Hinrichs; Per Ola Kristensson; Aaron J. Quigley

With the proliferation of large multi-faceted datasets, a critical question is how to design collaborative environments in which this data can be analysed in an efficient and insightful manner. Exploiting people's movements and their distance to the data display and to collaborators, proxemic interactions can potentially support such scenarios in a fluid and seamless way, supporting both tightly coupled collaboration and parallel exploration. In this paper we introduce the concept of collaborative proxemics: enabling groups of people to collaboratively use attention- and proximity-aware applications. To help designers create such applications we have developed SpiderEyes: a system and toolkit for designing attention- and proximity-aware collaborative interfaces for wall-sized displays. SpiderEyes is based on low-cost technology and allows accurate markerless attention-aware tracking of multiple people interacting in front of a display in real time. We discuss how this toolkit can be applied to design attention- and proximity-aware collaborative scenarios around large wall-sized displays, and how the information visualisation pipeline can be extended to incorporate proxemic interactions.

Collaboration


Dive into Aaron J. Quigley's collaborations.

Top Co-Authors

Paddy Nixon (University College Dublin)
Ross Shannon (University College Dublin)
Hui Shyong Yeo (University of St Andrews)
Uta Hinrichs (University of St Andrews)
Simon Dobson (University of St Andrews)
Daniel Rough (University of St Andrews)