Publication


Featured research published by David Gareth Evans.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2000

Controlling mouse pointer position using an infrared head-operated joystick

David Gareth Evans; R Drew; P Blenkhorn

This paper describes the motivation for and the design considerations of a low-cost head-operated joystick. The paper briefly summarizes the requirements of head-operated mouse pointer control for people with disabilities before discussing a set of technological approaches that can be used to satisfy these requirements. The paper focuses on the design of a head-operated joystick that uses infrared light emitting diodes (LEDs) and photodetectors to determine head position, which is subsequently converted into signals that emulate a Microsoft mouse. There are two significant findings. The first is that, while nonideal device characteristics might appear to make the joystick difficult to use, users naturally compensate for nonlinearities, in a transparent manner, because of visual feedback of mouse pointer position. The second finding, from relatively informal, independent trials, indicates that disabled users prefer a head-operated device that has the characteristics of a joystick (a relative pointing device) to those of a mouse (an absolute pointing device).
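
The contrast between joystick-style (relative) and mouse-style (absolute) head control can be made concrete with a short sketch. The following Python fragment is purely illustrative and is not the authors' implementation; the gain, dead zone and normalised deflection values are assumptions.

```python
# Hypothetical sketch (not the authors' implementation): contrasts the
# joystick-style (relative/rate) mapping described in the paper with an
# absolute, mouse-style mapping of head position to pointer position.

def relative_update(pointer, head_deflection, gain=8.0, dead_zone=0.05):
    """Joystick-style control: head deflection sets pointer *velocity*.

    head_deflection: (dx, dy) in roughly -1..1, derived from the infrared
    photodetector signals; values inside the dead zone are ignored so
    small involuntary head movements do not drift the pointer.
    """
    x, y = pointer
    dx, dy = head_deflection
    if abs(dx) > dead_zone:
        x += gain * dx
    if abs(dy) > dead_zone:
        y += gain * dy
    return (x, y)

def absolute_update(screen_size, head_position):
    """Mouse-style (absolute) control: head position maps directly to a
    screen coordinate, so holding the head still holds the pointer at the
    corresponding location."""
    w, h = screen_size
    hx, hy = head_position            # normalised 0..1
    return (hx * w, hy * h)

# With relative control, a sustained small head tilt keeps the pointer
# moving; visual feedback lets the user stop when it arrives.
p = (400.0, 300.0)
for _ in range(10):
    p = relative_update(p, (0.3, 0.0))
print(p)                                  # drifted right by 10 * 8.0 * 0.3 = 24 px
print(absolute_update((1280, 1024), (0.5, 0.5)))   # -> (640.0, 512.0)
```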


Interacting with Computers | 2006

Personalising web page presentation for older people

Sri Kurniawan; Alasdair King; David Gareth Evans; Paul Blenkhorn

This paper looks at different ways of personalising web page presentation to alleviate functional impairments in older people. The paper considers how impairments may be addressed by web design and through various personalisation instruments: accessibility features of standard browsers, proxy servers, assistive technology, application adaptors, and special-purpose browsers. A pilot study with five older web users indicated that the most favoured personalisation technique was overriding the CSS (cascading style sheet) with a readily available one using a standard browser; the least favoured was using assistive technology. In a follow-up study with 16 older web users performing goal-directed browsing tasks, overriding the CSS remained the most favoured technique, while assistive technology remained the least favoured and the slowest. Based on user comments, one take-home message for developers of web personalisation instruments is that the best instrument for older people is one that most faithfully preserves the original layout while requiring the least effort.
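
As an illustration of the most favoured instrument, the sketch below shows one way a CSS override could be applied: a user stylesheet is injected late in the page so its rules win the cascade. This is a hypothetical example, not the study's tooling; the function name and the specific style rules are assumptions.

```python
# Hypothetical illustration: the "override the CSS" instrument amounts to
# injecting a user stylesheet that takes precedence over the page's own
# styles, e.g. larger text and higher contrast for an older reader.

USER_CSS = """
body { font-size: 150% !important;
       color: #000 !important;
       background: #fff !important;
       line-height: 1.6 !important; }
a    { color: #0000c0 !important; text-decoration: underline !important; }
"""

def inject_user_css(html: str, css: str = USER_CSS) -> str:
    """Insert the user's style block just before </head> so it is the last
    stylesheet parsed; with !important it wins the cascade while the page's
    original markup and layout are left untouched."""
    style_block = f"<style>{css}</style>"
    if "</head>" in html:
        return html.replace("</head>", style_block + "</head>", 1)
    return style_block + html        # fall back: prepend if no <head> found

page = "<html><head><title>News</title></head><body><p>Hello</p></body></html>"
print(inject_user_css(page))
```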


Journal of Network and Computer Applications | 1998

Using speech and touch to enable blind people to access schematic diagrams

Paul Blenkhorn; David Gareth Evans

A novel approach for enabling blind people to interact with computer-generated graphical information is presented. The paper discusses how computer-generated, text-based information is presented to blind people and then identifies the difficulties in providing similar access to the range of graphical information presented by computer systems. A computer-based system that allows blind users to read, create and edit one type of schematic diagram, namely data flow diagrams used in software engineering, is presented, together with the mapping from the original diagram to a suitable generic, tactile diagram. Results of the evaluation of the approach are given, as are suggested adaptations of the approach that can present tabular information and time-ordered schematic diagrams to a user.
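
The speech-and-touch interaction can be pictured with a small sketch: the diagram is reduced to labelled elements at known positions on a tactile sheet, and a touch coordinate is resolved to the element whose label is then spoken. This is a minimal sketch under assumptions about the representation; the class names and layout values are hypothetical and not taken from the paper.

```python
# A minimal sketch, assuming a representation along these lines (the paper
# does not publish code): the data flow diagram is reduced to a generic
# tactile layout of labelled shapes; a touch coordinate from the tactile
# overlay is resolved to the shape under the finger, whose label is spoken.

from dataclasses import dataclass

@dataclass
class Element:
    label: str        # spoken description, e.g. "Process: validate order"
    x: float          # centre of the raised shape on the tactile sheet (mm)
    y: float
    radius: float     # touch tolerance around the shape

DIAGRAM = [
    Element("External entity: Customer", 20, 40, 8),
    Element("Process 1: Validate order", 60, 40, 8),
    Element("Data store: Orders", 100, 40, 8),
]

def element_at(touch_x: float, touch_y: float, elements=DIAGRAM):
    """Return the element whose raised shape the finger is on, or None."""
    for e in elements:
        if (touch_x - e.x) ** 2 + (touch_y - e.y) ** 2 <= e.radius ** 2:
            return e
    return None

hit = element_at(61, 42)
print(hit.label if hit else "blank area")   # -> "Process 1: Validate order"
```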


Disability and Rehabilitation: Assistive Technology | 2007

Use of assistive technology by students with dyslexia in post-secondary education

E.A. Draffan; David Gareth Evans; Paul Blenkhorn

Purpose: To identify the types and mix of technology (hardware and software) provided to post-secondary students with dyslexia under the UK's Disabled Students' Allowance (DSA), to determine the students' satisfaction with, and use of, the equipment provided, and to examine their experiences with training. Method: A telephone survey of 455 students with dyslexia who had received technology under the DSA from one equipment supplier was conducted in the period September to December 2005. The survey obtained a mixture of quantitative data (responses to binary questions and selections from a five-point rating scale) and qualitative data (participants identifying positive and negative experiences with technology). In addition, the equipment supplier's database was used to determine the technology supplied to each of the participants. Results: Technology provision varies between students. The majority of students receive a recording device, text-to-speech software and concept-mapping tools in addition to a standard computer system. Ninety percent of participants are satisfied or very satisfied with the hardware and software that they receive. A total of 48.6% of participants received training, with 86.3% of those expressing satisfaction with the training they received. Of those who were offered training but elected not to receive it, the majority did so because they felt confident about their IT skills. Conclusions: Students express satisfaction not only with the computer systems that they receive but also with the special-purpose software provided to support their studies. Significant numbers of students elect not to receive training and may, therefore, not be using their equipment to its best advantage.


International Conference on Computers Helping People with Special Needs | 2002

TeDUB: A System for Presenting and Exploring Technical Drawings for Blind People

Helen Petrie; Christoph Schlieder; Paul Blenkhorn; David Gareth Evans; Alasdair King; Anne-Marie O'Neill; George T. Ioannidis; Blaithin Gallagher; David Crombie; Rolf Mager; Maurizio Alafaci

Blind people can access and use textual information effectively in a variety of ways - through Braille, audiotape or computer-based systems. Access and use of graphic information is much more problematic, with tactile versions both time-consuming and difficult to make and textual descriptions failing to provide independent access to the material. The TeDUB Project is developing a system which will automatically generate descriptions of certain classes of graphics (electronic circuit diagrams, UML diagrams and architectural plans) and allow blind people to explore them independently. This system has great potential in work, education and leisure domains to open up independent access to graphic materials for blind people.


The New Review of Hypermedia and Multimedia | 2004

Automated interpretation and accessible presentation of technical diagrams for blind people

Mirko Horstmann; Martin Lorenz; A. Watkowski; George T. Ioannidis; Otthein Herzog; Alasdair King; David Gareth Evans; Cornelius Hagen; Christoph Schlieder; Anne-Marie Burn; Neil King; Helen Petrie; Sijo Dijkstra; David Crombie

The EU-supported TeDUB (Technical Drawings Understanding for the Blind) project is developing a software system that aims to make technical diagrams accessible to blind and visually impaired people. It consists of two separate modules: one that analyses drawings either semi-automatically or automatically, and one that presents the results of this analysis to blind people and allows them to interact with those results. The system is capable of analysing and presenting diagrams from a number of formally defined domains. A diagram enters the system in one of two forms: diagrams contained in bitmap images, which do not explicitly carry the semantic structure of their content and thus have to be interpreted by the system, and diagrams supplied in a semantically enriched format that already encodes this structure. The TeDUB system provides blind users with an interface to navigate and annotate these diagrams using a number of input and output devices. Extensive user evaluations have been carried out, and an overall positive response from the participants has shown the effectiveness of the approach.
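
The presentation module's navigate-and-annotate interaction can be sketched as a walk over a hierarchy of labelled nodes. The sketch below is an assumption-laden illustration, not the TeDUB code; the Node and Navigator classes and the example circuit fragment are invented for this purpose.

```python
# A minimal sketch of the presentation side, under assumed data structures
# rather than the TeDUB code: the analysed diagram arrives as a hierarchy
# of labelled nodes, which the user walks with keyboard commands and can
# annotate; every move produces a description for speech or Braille output.

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []
        self.annotation = ""

class Navigator:
    def __init__(self, root):
        self.path = [root]                  # stack from the root to here

    @property
    def current(self):
        return self.path[-1]

    def describe(self):
        n = self.current
        note = f" (note: {n.annotation})" if n.annotation else ""
        return f"{n.label}, {len(n.children)} children{note}"

    def enter(self, index):
        """Move into the index-th child, as a cursor-key command might."""
        self.path.append(self.current.children[index])
        return self.describe()

    def back(self):
        if len(self.path) > 1:
            self.path.pop()
        return self.describe()

    def annotate(self, text):
        self.current.annotation = text

# Example: a fragment of a circuit diagram hierarchy.
root = Node("Circuit: power supply", [
    Node("Component: bridge rectifier"),
    Node("Component: smoothing capacitor"),
])
nav = Navigator(root)
print(nav.describe())        # "Circuit: power supply, 2 children"
print(nav.enter(1))          # "Component: smoothing capacitor, 0 children"
```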


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2002

Full-screen magnification for windows using DirectX overlays

Paul Blenkhorn; David Gareth Evans; Alex Baude

This paper presents the basic features of software-based magnifiers used by some visually impaired people to read information from a computer screen. The paper briefly presents two major approaches to full-screen magnification for modern multiple-window systems (the paper focuses on Microsoft Windows). It then describes in detail the architecture and operation of a full-screen magnifier that uses Microsoft DirectX overlays. This approach leads to a robust magnifier with a low computational overhead. The magnifier has problems with video cards that use a YUV color model, but these may be addressed by RGB-to-YUV translation software, an issue that is still to be investigated. The magnifier also has problems when the generic device driver, rather than the manufacturer's device driver, is installed on the system. The paper presents two further strategies for full-screen magnification, namely using multi-monitor support and TrueType fonts for text enlargement.
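
The RGB-to-YUV translation mentioned above is a standard colour-space conversion; a sketch using the full-range BT.601 (JPEG) coefficients is shown below. Whether a given overlay surface expects exactly these coefficients is an assumption, so treat the numbers as illustrative.

```python
# Sketch of the RGB-to-YUV translation mentioned in the abstract, using
# full-range BT.601 (JPEG) coefficients; the paper does not specify which
# variant a real overlay surface would require, so the exact coefficients
# are an assumption.

def rgb_to_yuv(r: int, g: int, b: int):
    """Convert one 8-bit RGB pixel to (Y, U, V), also 8-bit."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b      # Cb, centred on 128
    v = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b      # Cr, centred on 128
    clamp = lambda c: max(0, min(255, int(round(c))))
    return clamp(y), clamp(u), clamp(v)

print(rgb_to_yuv(255, 0, 0))      # pure red -> low Y, Cr at the top of its range
print(rgb_to_yuv(255, 255, 255))  # white -> Y = 255, U and V at 128
```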


Human Factors in Computing Systems | 2003

Design and user evaluation of a joystick-operated full-screen magnifier

Sri Kurniawan; Alasdair King; David Gareth Evans; Paul Blenkhorn

The paper reports on two development cycles of a joystick-operated full-screen magnifier for visually impaired users. In the first cycle of evaluation, seven visually impaired computer users evaluated the system in comprehension-based sessions using text documents. After considering feedback from these evaluators, a second version of the system was produced and evaluated by a further six visually impaired users; this second evaluation used information-seeking tasks on web pages. In both evaluations the think-aloud protocol was used. The study makes several contributions to the field. First, it is perhaps the first published study investigating the use of a joystick as an absolute and relative pointing device to control a screen magnifier. Second, it revealed that, for most of the visually impaired users who participated, the joystick had good spatial, cognitive and ergonomic attributes, even for those who had never used a joystick before.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2006

A Screen Magnifier Using “High Level” Implementation Techniques

Paul Blenkhorn; David Gareth Evans

This paper presents the architecture of, and the techniques used to build, a screen magnifier for visually impaired people that uses the high-level features of the Microsoft Windows operating system. The magnifier uses information from the Desktop Window as its source and overlays this with a topmost, transparent, layered window that contains the magnified image. Issues concerning cursor enlargement, tooltip suppression, and focus tracking are discussed. The result is a stable magnifier that does not need to use the "dirty" low-level techniques typically used to build screen magnifiers. The only known problem is that the magnifier fails to suppress the original, unmagnified cursor in the few applications that use custom cursors.
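
The core geometry of such a magnifier, copying a region of the desktop around the tracked point into a stretched topmost window, can be sketched independently of the Win32 details. The following is a geometry-only sketch; the zoom factor, screen size and function name are assumptions, and none of the layered-window plumbing is reproduced.

```python
# A geometry-only sketch (the real magnifier uses a topmost, transparent,
# layered Windows window; that Win32 plumbing is not reproduced here):
# given the tracked point, pick the desktop rectangle that, stretched by
# the zoom factor, fills the magnifier window.

SCREEN_W, SCREEN_H = 1920, 1080

def source_rect(focus_x, focus_y, zoom=3, win_w=SCREEN_W, win_h=SCREEN_H):
    """Return (left, top, width, height) of the desktop region to magnify.

    The region is centred on the tracked cursor/focus point and clamped
    so it never reaches past the edge of the desktop.
    """
    src_w, src_h = win_w // zoom, win_h // zoom
    left = min(max(focus_x - src_w // 2, 0), SCREEN_W - src_w)
    top = min(max(focus_y - src_h // 2, 0), SCREEN_H - src_h)
    return left, top, src_w, src_h

# Following the mouse into the top-left corner: the region clamps at (0, 0)
# instead of showing space beyond the desktop.
print(source_rect(10, 10, zoom=3))       # -> (0, 0, 640, 360)
print(source_rect(960, 540, zoom=3))     # -> (640, 360, 640, 360)
```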


International Conference on Computers Helping People with Special Needs | 2002

An Approach to Producing New Languages for Talking Applications for Use by Blind People

David Gareth Evans; K. Polyzoaki; Paul Blenkhorn

This paper describes an approach that allows text-to-speech synthesisers to be produced for new languages for use with assistive applications. The approach uses a simple rule-based text-to-phoneme stage; the phonemes are then passed to an existing phoneme-to-speech system built for another language. We show that the match between the language to be synthesised and the language on which the phoneme-to-speech system is based is important for the perceived quality of the speech, but not necessarily for its understandability.
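
A toy version of a rule-based text-to-phoneme stage illustrates the idea: grapheme rules are applied longest-first, and the resulting phoneme string is handed to an existing phoneme-to-speech back end. The rules and phoneme symbols below are invented for illustration and do not reproduce the paper's rule sets.

```python
# A toy sketch of the rule-based text-to-phoneme stage, with made-up rules
# for illustration only (the paper's actual rule sets and phoneme inventory
# are not reproduced). Rules are tried longest-first so a digraph such as
# "ch" wins over its individual letters.

RULES = {            # grapheme -> phoneme symbol (illustrative only)
    "ch": "tS", "sh": "S",
    "a": "a", "e": "e", "i": "i", "o": "o", "u": "u",
    "k": "k", "l": "l", "m": "m", "n": "n", "s": "s", "t": "t",
}

def text_to_phonemes(word: str):
    """Greedy longest-match conversion of a word to a phoneme list."""
    word = word.lower()
    phonemes, i = [], 0
    lengths = sorted({len(g) for g in RULES}, reverse=True)
    while i < len(word):
        for n in lengths:
            chunk = word[i:i + n]
            if chunk in RULES:
                phonemes.append(RULES[chunk])
                i += n
                break
        else:
            i += 1            # no rule matched: skip the character silently
    return phonemes

# The resulting phoneme string would be handed to an existing
# phoneme-to-speech engine built for a (possibly different) language.
print(text_to_phonemes("machine"))   # -> ['m', 'a', 'tS', 'i', 'n', 'e']
```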

Collaboration


Dive into David Gareth Evans's collaboration.

Top Co-Authors

Paul Blenkhorn, University of Manchester
Alasdair King, University of Manchester
E.A. Draffan, University of Southampton
Sri Kurniawan, University of California
David Crombie, University of the Arts Utrecht
Abi James, University of Southampton