Mashhuda Glencross
University of Manchester
Publications
Featured research published by Mashhuda Glencross.
ACM Transactions on Computer-Human Interaction | 2007
Caroline Jay; Mashhuda Glencross; Roger J. Hubbold
Collaborative virtual environments (CVEs) enable two or more people, separated in the real world, to share the same virtual “space.” They can be used for many purposes, from teleconferencing to training people to perform assembly tasks. Unfortunately, the effectiveness of CVEs is compromised by one major problem: the delay that exists in the networks linking users together. Whilst we have a good understanding, especially in the visual modality, of how users are affected by delayed feedback from their own actions, little research has systematically examined how users are affected by delayed feedback from other people, particularly in environments that support haptic (force) feedback. The current study addresses this issue by quantifying how increasing levels of latency affect visual and haptic feedback in a collaborative target acquisition task. Our results demonstrate that haptic feedback in particular is very sensitive to low levels of delay. Whilst latency affects visual feedback from 50 ms, it impacts haptic task performance from just 25 ms, and causes haptic measures of performance to deteriorate far more steeply than visual measures. The “impact-perceive-adapt” model of user performance, which considers the interaction between performance measures, perception of latency, and the breakdown of perceived immediate causality, is proposed as an explanation for the observed pattern of performance.
IEEE Transactions on Visualization and Computer Graphics | 2006
James Marsh; Mashhuda Glencross; Steve Pettifer; Roger J. Hubbold
Network architectures for collaborative virtual reality have traditionally been dominated by client-server and peer-to-peer approaches, with peer-to-peer strategies typically favored where minimizing latency is a priority and client-server where consistency is key. With increasingly sophisticated behavior models and the demand for better support for haptics, we argue that neither approach provides sufficient support for these scenarios and that a hybrid architecture is therefore required. We discuss the relative performance of different distribution strategies in the face of real network conditions and illustrate the problems they face. Finally, we present an architecture that successfully meets many of these challenges and demonstrate its use in a distributed virtual prototyping application which supports simultaneous collaboration for assembly, maintenance, and training applications utilizing haptics.
Universal Access in the Information Society | 2007
Caroline Jay; Robert Stevens; Mashhuda Glencross; Alan Chalmers; Cathy Yang
It is well known that many Web pages are difficult to use for visually disabled people. Without access to a rich visual display, the intended structure and organisation of the page is obscured. To fully understand what is missing from the experience of visually disabled users, it is pertinent to ask how the presentation of Web pages on a standard display makes them easier for sighted people to use. This paper reports on an exploratory eye tracking study that addresses this issue by investigating how sighted readers use the presentation of the BBC News Web page to search for a link. The standard page presentation is compared with a “text-only” version, demonstrating both qualitatively and quantitatively that the removal of the intended presentation alters “reading” behaviours. The demonstration that the presentation of information assists task completion suggests that it should be re-introduced to non-visual presentations if the Web is to become more accessible. The conducted study also explored the extent to which algorithms that generate maps of what is perceptually salient on a page match the gaze data recorded in the eye tracking study. The correspondence between a page’s presentation, knowledge of what is visually salient, and how people use these features to complete a task might offer an opportunity to re-model a Web page to maximise access to its most important parts.
IEEE Virtual Reality Conference | 2007
Mashhuda Glencross; Caroline Jay; Jeff Feasel; Luv Kohli; Roger J. Hubbold
We present a system that enables, for the first time, effective transatlantic cooperative haptic manipulation of objects whose motion is computed using a physically-based model. We propose a technique for maintaining synchrony between simulations in a peer-to-peer system, while providing responsive direct manipulation for all users. The effectiveness of this approach is determined through extensive user trials involving concurrent haptic manipulation of a shared object. A CAD assembly task, using physically-based motion simulation and haptic feedback, was carried out between the USA and the UK with network latencies on the order of 120 ms. We compare the effects of latency on synchrony between peers over the Internet with a low-latency (0.5 ms) local area network. Both quantitatively and qualitatively, when using our technique, the performance achieved over the Internet is comparable to that on a LAN. As such, this technique constitutes a significant step forward for distributed haptic collaboration.
International Conference on Computer Graphics and Interactive Techniques | 2008
Mashhuda Glencross; Gregory J. Ward; Francho Melendez; Caroline Jay; Jun Liu; Roger J. Hubbold
Capturing detailed surface geometry currently requires specialized equipment such as laser range scanners, which despite their high accuracy, leave gaps in the surfaces that must be reconciled with photographic capture for relighting applications. Using only a standard digital camera and a single view, we present a method for recovering models of predominantly diffuse textured surfaces that can be plausibly relit and viewed from any angle under any illumination. Our multiscale shape-from-shading technique uses diffuse-lit/flash-lit image pairs to produce an albedo map and textured height field. Using two lighting conditions enables us to subtract one from the other to estimate albedo. In the absence of a flash-lit image of a surface for which we already have a similar exemplar pair, we approximate both albedo and diffuse shading images using histogram matching. Our depth estimation is based on local visibility. Unlike other depth-from-shading approaches, all operations are performed on the diffuse shading image in image space, and we impose no constant albedo restrictions. An experimental validation shows our method works for a broad range of textured surfaces, and viewers are frequently unable to identify our results as synthetic in a randomized presentation. Furthermore, in side-by-side comparisons, subjects found a rendering of our depth map equally plausible to one generated from a laser range scan. We see this method as a significant advance in acquiring surface detail for texturing using a standard digital camera, with applications in architecture, archaeological reconstruction, games and special effects.
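The two-condition capture lends itself to a compact illustration. The sketch below is hypothetical, not the authors' pipeline: the function name, the frontal-flash assumption, and the normalization are mine. It shows how subtracting the diffuse-lit frame from the flash-lit frame isolates a flash-only component that is roughly proportional to albedo, which then divides out of the diffuse-lit frame to leave a shading image for shape-from-shading:

```python
import numpy as np

def separate_albedo_shading(diffuse_lit, flash_lit, eps=1e-6):
    """Rough albedo/shading separation from a diffuse-lit/flash-lit pair.

    Both inputs are float arrays of the same shape with values in [0, 1].
    Assumes a frontal flash on a near-flat, largely diffuse surface, so
    the flash-only component (flash minus ambient) is dominated by albedo.
    """
    # Subtract the two lighting conditions to isolate the flash-only term.
    flash_only = np.clip(flash_lit - diffuse_lit, eps, None)
    # Normalize to get a relative albedo map (illustrative choice).
    albedo = flash_only / flash_only.max()
    # Dividing albedo out of the diffuse-lit frame leaves diffuse shading.
    shading = diffuse_lit / np.clip(albedo, eps, None)
    return albedo, shading
```

On synthetic data where the diffuse-lit image is the product of a known albedo and shading, the recovered maps match the ground truth up to a global scale, which is all a relative relighting pipeline needs.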
ACM Transactions on Applied Perception | 2008
Caroline Jay; Robert Stevens; Roger J. Hubbold; Mashhuda Glencross
Retrieving information presented visually is difficult for visually disabled users. Current accessibility technologies, such as screen readers, fail to convey presentational layout or structure. Information presented in graphs or images is almost impossible to convey through speech alone. In this paper, we present the results of an experimental study investigating the role of touch (haptic) and auditory cues in aiding structure recognition when visual presentation is missing. We hypothesize that by guiding users toward nodes in a graph structure using force fields, users will find it easier to recognize overall structure. Nine participants were asked to explore simple 3D structures containing nodes (spheres or cubes) laid out in various spatial configurations, and to identify the nodes and draw their overall structure. Various combinations of haptic and auditory feedback were explored. Our results demonstrate that haptic cues significantly helped participants to quickly recognize nodes and structure. Surprisingly, auditory cues alone did not speed up node recognition; however, when they were combined with haptics, both node identification and structure recognition improved significantly. This result demonstrates that haptic feedback plays an important role in enabling people to recall spatial layout.
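Force-field guidance of this kind is often implemented as a "gravity well" around each node. The sketch below is illustrative only: the spring model, radius, and stiffness values are my assumptions, not the parameters used in the study.

```python
import numpy as np

def node_attraction_force(probe_pos, node_pos, radius=0.05, stiffness=200.0):
    """Pull the haptic probe toward a node centre when it is nearby.

    Inside `radius` (metres) of the node, a spring force proportional to
    the offset draws the probe in; outside, no force is exerted.
    Constants are illustrative, not taken from the study.
    """
    offset = np.asarray(node_pos, dtype=float) - np.asarray(probe_pos, dtype=float)
    dist = np.linalg.norm(offset)
    if dist == 0.0 or dist > radius:
        return np.zeros(3)
    return stiffness * offset  # force (newtons) directed toward the node
```

A haptic rendering loop would sum this force over all nodes each servo tick, so the probe "snaps" onto whichever node the user drifts near, making node positions easy to find without vision.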
International Conference on Computer Graphics and Interactive Techniques | 2006
Mashhuda Glencross; Alan Chalmers; Ming C. Lin; Miguel A. Otaduy; Diego Gutierrez
The objective of this course is to provide an introduction to the issues that must be considered when building high-fidelity, engaging shared 3D virtual environments. The principles of human perception guide the development of algorithms and techniques for collaboration and for graphical, auditory, and haptic rendering. We aim to show how human perception can be exploited to achieve realism in high-fidelity environments within the constraints of finite computational resources.
In this course we address the challenges faced when building such high-fidelity, engaging shared virtual environments, especially those that facilitate collaboration and intuitive interaction. We present real applications in which such high fidelity is essential. With reference to these, we illustrate the significant need for the combination of high-fidelity graphics in real time, better modes of interaction, and appropriate collaboration strategies.
After introducing the concept of high-fidelity virtual environments and why these convey important information to the user, we cover the main issues in two parts linked by the common thread of exploiting human perception. First, we explore perceptually driven techniques that can be employed to achieve high-fidelity graphical rendering in real time, and how incorporating authentic lighting effects helps to convey a sense of realism and scale in virtual reconstructions of historical sites.
Second, we examine how intuitive interaction between participants, and with objects in the environment, also plays a key role in the overall experience. We consider how perceptual methods can be used to guide interest management and distribution choices, with an emphasis on avoiding potential pitfalls when distributing physically-based simulations. An analysis of real network conditions and their implications for distribution strategies that facilitate collaboration is presented.
Furthermore, we describe technologies necessary to provide intuitive interaction in virtual environments, paying particular attention to engaging multiple sensory modalities, primarily through physically-based sound simulation and perceptually high-fidelity haptic interaction. The combination of realism and intuitive, compelling interaction can lead to engaging virtual environments capable of supporting skills transfer, an elusive goal of many virtual environment applications.
Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems | 2005
Mashhuda Glencross; Roger J. Hubbold; Ben Lyons
In this paper we present a software approach to managing complexity for haptic rendering of large-scale geometric models, consisting of tens to hundreds of thousands of distinct geometric primitives. A secondary client-side caching mechanism, exploiting partitioning, is used to dynamically update geometry within the locality of a user. Results show that the caching mechanism performs well, and that graphical rendering performance becomes an issue before the caching mechanism fails. The performance penalty of the caching technique was found to be dependent on the type of partitioning method employed.
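A minimal sketch of such a client-side geometry cache, assuming a uniform-grid partitioning and least-recently-used eviction; the paper evaluates several partitioning methods, so the specifics here (grid cells, cell size, capacity) are illustrative assumptions rather than the authors' implementation:

```python
from collections import OrderedDict

class GeometryCache:
    """Client-side cache of spatially partitioned geometry.

    Geometry is grouped into uniform grid cells; cells near the user's
    haptic cursor are fetched on demand, and the least recently used
    cell is evicted once the capacity budget is exceeded.
    """
    def __init__(self, fetch_cell, capacity=64, cell_size=1.0):
        self.fetch_cell = fetch_cell   # callable: cell id -> geometry
        self.capacity = capacity
        self.cell_size = cell_size
        self.cells = OrderedDict()     # cell id -> geometry, in LRU order

    def cell_of(self, pos):
        # Map a 3D position to the id of its enclosing grid cell.
        return tuple(int(c // self.cell_size) for c in pos)

    def geometry_near(self, pos):
        cell = self.cell_of(pos)
        if cell in self.cells:
            self.cells.move_to_end(cell)          # mark as recently used
        else:
            self.cells[cell] = self.fetch_cell(cell)
            if len(self.cells) > self.capacity:
                self.cells.popitem(last=False)    # evict the LRU cell
        return self.cells[cell]
```

Keeping only the cells in the user's locality resident bounds the working set the haptic loop must traverse each servo tick, which is the point of the secondary cache: the 1 kHz force computation never touches the full model.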
Virtual Reality Continuum and Its Applications in Industry | 2004
James Marsh; Mashhuda Glencross; Steve Pettifer; Roger J. Hubbold; Jonathan Cook; Sylvain Daubrenet
This paper describes a computer aided design tool for mechanical engineering applications, combining component assembly simulation, the modelling of rigid and flexible bodies and haptic interaction in a multi-user distributed virtual environment. It presents the research challenges encountered, and an architecture designed to address these.
Applied Perception in Graphics and Visualization | 2009
Gregory J. Ward; Mashhuda Glencross
This paper evaluates a new method for capturing surfaces with variations in albedo, height, and local orientation using a standard digital camera with three flash units. As in other approaches, captured areas are assumed to be globally flat and largely diffuse. Fortunately, this encompasses a wide array of interesting surfaces, including most materials found in the built environment, e.g., masonry, fabrics, floor coverings, and textured paints. We present a study in which naïve subjects found surfaces captured with our method, when rendered under novel lighting and view conditions, to be statistically indistinguishable from photographs. This is a significant improvement over previous methods, to which our results are also compared.