Caroline Jay
University of Manchester
Publications
Featured research published by Caroline Jay.
ACM Transactions on Computer-Human Interaction | 2007
Caroline Jay; Mashhuda Glencross; Roger J. Hubbold
Collaborative virtual environments (CVEs) enable two or more people, separated in the real world, to share the same virtual “space.” They can be used for many purposes, from teleconferencing to training people to perform assembly tasks. Unfortunately, the effectiveness of CVEs is compromised by one major problem: the delay that exists in the networks linking users together. Whilst we have a good understanding, especially in the visual modality, of how users are affected by delayed feedback from their own actions, little research has systematically examined how users are affected by delayed feedback from other people, particularly in environments that support haptic (force) feedback. The current study addresses this issue by quantifying how increasing levels of latency affect visual and haptic feedback in a collaborative target acquisition task. Our results demonstrate that haptic feedback in particular is very sensitive to low levels of delay. Whilst latency affects visual feedback from 50 ms, it impacts haptic task performance 25 ms earlier, and causes haptic performance measures to deteriorate far more steeply than visual ones. The “impact-perceive-adapt” model of user performance, which considers the interaction between performance measures, perception of latency, and the breakdown of the perception of immediate causality, is proposed as an explanation for the observed pattern of performance.
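To make the latency manipulation concrete, the sketch below shows one standard way to inject a fixed delay into a feedback channel: a FIFO delay line sized to the update rate. This is a generic illustration, not the study's apparatus; the class name, update rate, and latency value are assumptions for the example.

```python
from collections import deque

# A FIFO delay line that releases each feedback sample a fixed number of
# update ticks after it arrives -- one generic way to inject controlled
# latency into a haptic or visual feedback channel (illustrative only).
class DelayLine:
    def __init__(self, latency_ms, update_rate_hz=1000):
        # Haptic loops commonly run at ~1 kHz, so one slot per millisecond.
        slots = max(1, int(latency_ms * update_rate_hz / 1000))
        self.buffer = deque([None] * slots, maxlen=slots)

    def step(self, sample):
        """Push the newest sample; return the one delayed by latency_ms."""
        delayed = self.buffer[0]       # oldest sample in the line
        self.buffer.append(sample)     # newest enters; oldest is dropped
        return delayed

# Example: a 25 ms delay, the increment at which haptic effects appeared.
line = DelayLine(latency_ms=25)
outputs = [line.step(t) for t in range(30)]   # outputs lag inputs by 25 ticks
```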
Universal Access in the Information Society | 2007
Caroline Jay; Robert Stevens; Mashhuda Glencross; Alan Chalmers; Cathy Yang
It is well known that many Web pages are difficult to use for visually disabled people. Without access to a rich visual display, the intended structure and organisation of the page is obscured. To fully understand what is missing from the experience of visually disabled users, it is pertinent to ask how the presentation of Web pages on a standard display makes them easier for sighted people to use. This paper reports on an exploratory eye tracking study that addresses this issue by investigating how sighted readers use the presentation of the BBC News Web page to search for a link. The standard page presentation is compared with a “text-only” version, demonstrating both qualitatively and quantitatively that the removal of the intended presentation alters “reading” behaviours. The demonstration that the presentation of information assists task completion suggests that it should be re-introduced to non-visual presentations if the Web is to become more accessible. The study also explored the extent to which algorithms that generate maps of what is perceptually salient on a page match the gaze data recorded in the eye tracking study. The correspondence between a page’s presentation, knowledge of what is visually salient, and how people use these features to complete a task might offer an opportunity to re-model a Web page to maximise access to its most important parts.
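As a concrete illustration of the saliency comparison, one simple metric is the correlation between a model's saliency map and a map built from recorded fixations. The sketch below uses hypothetical data and omits the Gaussian smoothing usually applied to fixation maps; it is not the analysis used in the paper.

```python
import numpy as np

def saliency_gaze_correlation(saliency, fixations):
    """Pearson correlation between a saliency map and a fixation map.

    saliency  -- 2D array of per-pixel salience scores
    fixations -- iterable of (row, col) fixation coordinates
    """
    fixation_map = np.zeros_like(saliency, dtype=float)
    for r, c in fixations:
        fixation_map[r, c] += 1.0
    # Flatten and correlate; a higher value means the saliency model
    # better predicts where people actually looked. (A real analysis
    # would typically blur the fixation map first.)
    return np.corrcoef(saliency.ravel(), fixation_map.ravel())[0, 1]

saliency = np.random.rand(768, 1024)                 # stand-in saliency map
fixations = [(100, 200), (105, 210), (400, 512)]     # stand-in gaze data
print(saliency_gaze_correlation(saliency, fixations))
```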
IEEE Virtual Reality Conference | 2007
Mashhuda Glencross; Caroline Jay; Jeff Feasel; Luv Kohli; Roger J. Hubbold
We present a system that enables, for the first time, effective transatlantic cooperative haptic manipulation of objects whose motion is computed using a physically-based model. We propose a technique for maintaining synchrony between simulations in a peer-to-peer system, while providing responsive direct manipulation for all users. The effectiveness of this approach is determined through extensive user trials involving concurrent haptic manipulation of a shared object. A CAD assembly task, using physically-based motion simulation and haptic feedback, was carried out between the USA and the UK with network latencies on the order of 120 ms. We compare the effects of latency on synchrony between peers over the Internet with a low latency (0.5 ms) local area network. Both quantitatively and qualitatively, when using our technique, the performance achieved over the Internet is comparable to that on a LAN. As such, this technique constitutes a significant step forward for distributed haptic collaboration.
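The paper proposes its own synchronisation technique; as general background, the sketch below illustrates the common pattern such systems build on: each peer simulates locally for responsiveness and blends its state gently toward the state received from the remote peer. The function name, state fields, and blend factor are all illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: run the physics model locally each tick, then
# nudge each state variable a fraction `alpha` toward the remote peer's
# value, so the two simulations converge without snapping.
def blend_toward_remote(local_state, remote_state, alpha=0.1):
    """Move each state variable a fraction `alpha` toward the peer's value."""
    return {k: (1 - alpha) * v + alpha * remote_state[k]
            for k, v in local_state.items()}

local = {"x": 0.52, "y": 1.10, "vx": 0.00, "vy": -0.30}
remote = {"x": 0.50, "y": 1.12, "vx": 0.02, "vy": -0.28}
local = blend_toward_remote(local, remote)   # gentle convergence each tick
```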
Presence: Teleoperators & Virtual Environments | 2003
Caroline Jay; Roger J. Hubbold
The head-mounted display (HMD) is a popular form of virtual display due to its ability to immerse users visually in virtual environments (VEs). Unfortunately, the user's virtual experience is compromised by the narrow field of view (FOV) it affords, which is less than half that of normal human vision. This paper explores a solution to some of the problems caused by the narrow FOV by amplifying the head movement made by the user when wearing an HMD, so that the view direction changes by a greater amount in the virtual world than it does in the real world. Tests conducted on the technique show a significant improvement in performance on a visual search task, and questionnaire data indicate that the altered visual parameters that the user receives may be preferable to those in the baseline condition in which amplification of movement was not implemented. The tests also show that the user cannot interact normally with the VE if corresponding body movements are not amplified to the same degree as head movements, which may limit the implementation's versatility. Although not suitable for every application, the technique shows promise, and alterations to aspects of the implementation could extend its use in the future.
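The core mapping is simple to state: the virtual view direction changes by a gain factor more than the physical head. A minimal sketch, with an assumed gain of 1.5 (the paper's actual amplification function may differ):

```python
# Amplified head rotation: the virtual yaw changes by `gain` times the
# physical head yaw, measured about a reference direction. The gain value
# and function name are illustrative, not taken from the paper.
def amplified_yaw(real_yaw_deg, reference_yaw_deg=0.0, gain=1.5):
    """Map a tracked head yaw to an amplified virtual yaw."""
    return reference_yaw_deg + gain * (real_yaw_deg - reference_yaw_deg)

# A 40 degree physical turn yields a 60 degree virtual turn at gain 1.5.
print(amplified_yaw(40.0))   # -> 60.0
```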
International Conference on Computer Graphics and Interactive Techniques | 2008
Mashhuda Glencross; Gregory J. Ward; Francho Melendez; Caroline Jay; Jun Liu; Roger J. Hubbold
Capturing detailed surface geometry currently requires specialized equipment such as laser range scanners, which despite their high accuracy, leave gaps in the surfaces that must be reconciled with photographic capture for relighting applications. Using only a standard digital camera and a single view, we present a method for recovering models of predominantly diffuse textured surfaces that can be plausibly relit and viewed from any angle under any illumination. Our multiscale shape-from-shading technique uses diffuse-lit/flash-lit image pairs to produce an albedo map and textured height field. Using two lighting conditions enables us to subtract one from the other to estimate albedo. In the absence of a flash-lit image of a surface for which we already have a similar exemplar pair, we approximate both albedo and diffuse shading images using histogram matching. Our depth estimation is based on local visibility. Unlike other depth-from-shading approaches, all operations are performed on the diffuse shading image in image space, and we impose no constant albedo restrictions. An experimental validation shows our method works for a broad range of textured surfaces, and viewers are frequently unable to identify our results as synthetic in a randomized presentation. Furthermore, in side-by-side comparisons, subjects found a rendering of our depth map equally plausible to one generated from a laser range scan. We see this method as a significant advance in acquiring surface detail for texturing using a standard digital camera, with applications in architecture, archaeological reconstruction, games and special effects.
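As a rough, heavily simplified sketch of the two-image idea: subtracting the diffuse-lit image from the flash-lit one isolates the flash's contribution, which for a predominantly diffuse surface is roughly proportional to albedo; dividing the diffuse image by that estimate leaves a shading image for the shape-from-shading stage. The normalisation below is a crude stand-in for the paper's calibration, and the function is an assumption, not the published pipeline.

```python
import numpy as np

def separate_albedo_and_shading(flash_lit, diffuse_lit, eps=1e-6):
    """Crude albedo/shading split from a diffuse-lit/flash-lit image pair."""
    # The flash-only component: flash-lit minus the ambient (diffuse) image.
    flash_only = np.clip(flash_lit - diffuse_lit, 0.0, None)
    # For a frontal flash on a diffuse surface this is ~proportional to
    # albedo; normalise it crudely to [0, 1].
    albedo = flash_only / (flash_only.max() + eps)
    # Dividing the ambient image by albedo leaves the shading signal.
    shading = diffuse_lit / (albedo + eps)
    return albedo, shading

flash = np.array([[0.9, 0.7], [0.8, 0.2]])     # toy flash-lit image
ambient = np.array([[0.4, 0.3], [0.35, 0.1]])  # toy diffuse-lit image
albedo, shading = separate_albedo_and_shading(flash, ambient)
```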
Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems | 2005
Caroline Jay; Roger J. Hubbold
To make optimal use of distributed virtual environments (DVEs), we must understand and quantify the effects of latency on user performance. The current study investigates whether delaying haptic and/or visual feedback in a reciprocal tapping task impairs performance or makes the task appear more difficult to the user. Results show haptic latency has a small effect on performance, but is considerably less disruptive than visual lag.
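Reciprocal tapping is the classic Fitts paradigm, so performance under each latency condition can be summarised with Fitts' index of difficulty and throughput. The sketch below shows that standard computation; the paper reports its own measures, and the numbers here are made up.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(D / W + 1), in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Bits per second for one tapping movement."""
    return index_of_difficulty(distance, width) / movement_time_s

# Hypothetical trial: 20 cm between targets, 2 cm wide, 0.75 s per tap.
print(throughput(distance=0.20, width=0.02, movement_time_s=0.75))
```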
Universal Access in the Information Society | 2012
Andy Brown; Caroline Jay; Alex Q. Chen; Simon Harper
World Wide Web (Web) documents, once delivered in a form that remained constant whilst viewed, are now often dynamic, with sections of a page able to change independently, either automatically or as a result of user interaction. In order to make these updates, and hence their host pages, accessible, it is necessary to detect when the update occurs and how it has changed the page, before determining how, when and what to present to the user. This can only be achieved with an understanding of both the technologies used to achieve dynamic updates and the human factors influencing how people use them. After proposing a user-centred classification of dynamic updates, this paper surveys the current state of technology from two perspectives: that of the developer, and those of visually disabled users. For the former group, the paper introduces some of the technologies that are currently available for implementing dynamic Web pages, before reporting on the results of experiments analysing current and historical Web pages to determine the extent of use of these technologies ‘in the wild’ and the trends in their uptake. The analysis shows that for the most popular 500 sites, JavaScript is used in 93%, Flash in 27% and about one-third (30%) use XMLHttpRequest, a technology used to generate dynamic updates. Uptake of XMLHttpRequest is approximately 2.3% per year across a random selection of 500 sites and is probably higher in the most popular sites. When examining dynamic updates from the perspective of visually disabled users, first an investigation is reported into which technologies (Web browsers and assistive technologies) are currently used by this group in the UK: Internet Explorer and JAWS are clear favourites. Then, the paper describes the results of an experiment, and supporting anecdotal evidence, which suggests that, at best, most users can currently reach updated content, but they must do so manually, and are rarely given any indication that an update has occurred. With technologies enabling dynamic updating of content currently deployed in about 30% of the most popular sites, and increasing annually, action is urgently required if visually disabled users are to be able to use the Web. The paper concludes by discussing some of the issues involved in making these updates accessible.
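The kind of page survey described here can be approximated by fetching a page and scanning its top-level HTML for coarse markers of each technology. The sketch below is a simplification (it ignores externally loaded scripts, which a real analysis must follow), and the marker patterns are assumptions, not the paper's method.

```python
import re
import urllib.request

# Coarse textual markers for JavaScript, Flash and XMLHttpRequest use.
MARKERS = {
    "javascript": re.compile(r"<script\b", re.I),
    "flash": re.compile(r"\.swf|shockwave-flash", re.I),
    "xmlhttprequest": re.compile(r"XMLHttpRequest", re.I),
}

def detect_technologies(url):
    """Fetch a page and report which technology markers its HTML contains."""
    html = urllib.request.urlopen(url, timeout=10).read()
    text = html.decode("utf-8", "replace")
    return {name: bool(rx.search(text)) for name, rx in MARKERS.items()}

# Example usage (any public page):
# print(detect_technologies("https://www.bbc.co.uk/news"))
```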
Tests and Proofs | 2008
Caroline Jay; Robert Stevens; Roger J. Hubbold; Mashhuda Glencross
Retrieving information presented visually is difficult for visually disabled users. Current accessibility technologies, such as screen readers, fail to convey presentational layout or structure. Information presented in graphs or images is almost impossible to convey through speech alone. In this paper, we present the results of an experimental study investigating the role of touch (haptic) and auditory cues in aiding structure recognition when visual presentation is missing. We hypothesize that by guiding users toward nodes in a graph structure using force fields, users will find it easier to recognize overall structure. Nine participants explored simple 3D structures containing nodes (spheres or cubes) laid out in various spatial configurations, and were asked to identify the nodes and draw their overall structure. Various combinations of haptic and auditory feedback were explored. Our results demonstrate that haptic cues significantly helped participants to quickly recognize nodes and structure. Surprisingly, auditory cues alone did not speed up node recognition; however, when they were combined with haptics, both node identification and structure recognition significantly improved. This result demonstrates that haptic feedback plays an important role in enabling people to recall spatial layout.
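A minimal sketch of a guiding force field of the kind described, assuming a linear spring that pulls the haptic probe toward the nearest node once the probe enters its capture radius; the radius and stiffness values are illustrative, not the study's parameters.

```python
import numpy as np

def guidance_force(probe_pos, node_positions, radius=0.05, stiffness=200.0):
    """Attractive spring force toward the nearest node within `radius` (m)."""
    nodes = np.asarray(node_positions, dtype=float)
    offsets = nodes - probe_pos                 # vectors probe -> nodes
    dists = np.linalg.norm(offsets, axis=1)
    i = int(np.argmin(dists))                   # nearest node
    if dists[i] > radius or dists[i] == 0.0:
        return np.zeros(3)                      # outside any force field
    return stiffness * offsets[i]               # linear spring pull (N)

force = guidance_force(np.array([0.01, 0.0, 0.0]),
                       [[0.0, 0.0, 0.0], [0.2, 0.1, 0.0]])
print(force)   # pulls the probe toward the node at the origin
```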
Conference on Web Accessibility | 2013
Aitor Apaolaza; Simon Harper; Caroline Jay
Laboratory studies are a well-established practice, but they have disadvantages for data collection. One is that laboratories are controlled environments that do not account for unpredictable factors in the real world. Laboratory studies are also obtrusive, and therefore potentially biased. The Human-Computer Interaction (HCI) community has acknowledged these problems and has started exploring in-situ observation techniques, which allow for larger participant pools and environments that reflect the real world. Such real-world observations are particularly important to the accessibility community, which has coined the term accessibility-in-use to distinguish real-world from laboratory studies. Real-world observations provide low-level interaction data, making a bottom-up analysis possible: behaviours emerge from the data, rather than being matched against predefined models. Some in-situ techniques employ Web logs, in which the data is too coarse to infer meaningful user interaction; in other cases, exhaustive manual modification is required to capture interaction data from a Web application. We describe a tool that is easily deployable in any Web application and captures longitudinal interaction data unobtrusively. It enables the observation of accessibility-in-use and guides the detection of emerging tasks.
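On the analysis side, a first bottom-up step with such low-level data is to segment the captured event stream into episodes separated by idle gaps, from which recurring tasks can then emerge. A minimal sketch, with assumed field names and an assumed gap threshold:

```python
from datetime import datetime, timedelta

def split_into_episodes(events, idle_gap=timedelta(minutes=5)):
    """events: dicts with a 'timestamp' (datetime), sorted by time."""
    episodes, current = [], []
    for event in events:
        if current and event["timestamp"] - current[-1]["timestamp"] > idle_gap:
            episodes.append(current)       # an idle gap closes the episode
            current = []
        current.append(event)
    if current:
        episodes.append(current)
    return episodes

events = [{"timestamp": datetime(2013, 5, 1, 9, 0), "type": "click"},
          {"timestamp": datetime(2013, 5, 1, 9, 1), "type": "scroll"},
          {"timestamp": datetime(2013, 5, 1, 9, 30), "type": "click"}]
print(len(split_into_episodes(events)))   # -> 2 episodes
```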
Human Factors in Computing Systems | 2014
Markel Vigo; Caroline Jay; Robert Stevens
Ontologies have been employed across scientific and business domains for some time, and the proliferation of linked data means the number and range of potential authors is set to increase significantly. Ontologies using the Web Ontology Language (OWL) are complex artefacts, however: the authoring process requires not only knowledge of the application domain, but also skills in programming and logics. To date, there has been no systematic attempt to understand the effectiveness of existing tools, or explore what users really require to build successful ontologies. Here we address this shortfall, presenting insights from an interview study with 15 ontology authors. We identify the problems reported by authors, and the strategies they employ to solve them. We map the data to a set of design recommendations, which describe how tools of the future can support ontology authoring. A key challenge is dealing with information overload: improving the user's ability to navigate, populate and debug large ontologies will revolutionise the engineering process, and open ontology authoring up to a new generation of users.