Publication


Featured research published by Adele F. Scott.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2011

Substituting depth for intensity and real-time phosphene rendering: Visual navigation under low vision conditions

Paulette Lieby; Nick Barnes; Chris McCarthy; Nianjun Liu; Hugh Dennett; Janine Walker; Viorica Botea; Adele F. Scott

Navigation and wayfinding, including obstacle avoidance, are difficult when visual perception is limited to low resolution, such as is currently available with a bionic eye. Depth visualisation may be a suitable alternative. Such an approach can be evaluated using simulated phosphenes with a wearable mobile virtual reality kit. In this paper, we present two novel contributions: (i) an implementation of depth visualisation; and (ii) novel methods for rapid rendering of simulated phosphenes, with an empirical comparison between them. Our new software-based method for simulated phosphene rendering shows large speed improvements, facilitating real-time display of a large number of phosphenes with size and brightness dependent on pixel intensity, and with customised output dynamic range. Further, we describe the protocol, navigation environment and system used for visual navigation experiments to evaluate the use of depth on low-resolution simulations of a bionic eye perceptual experience. Results for these experiments show that a depth-based representation is effective for navigation, and shows significant advantages over intensity-based approaches when overhanging obstacles are present. The results of the experiments were reported in [1], [2].
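As a rough illustration of this kind of software-based phosphene rendering (a sketch only, not the authors' implementation), the Python/NumPy snippet below draws each simulated phosphene as a Gaussian blob whose brightness and size follow the mean intensity of its image patch, remapped to a configurable output dynamic range; the grid size, blob profile and scaling constants are assumptions made for illustration.

```python
import numpy as np

def render_phosphenes(image, grid=(35, 30), sigma_scale=0.4, out_range=(0.0, 1.0)):
    """Render a simulated phosphene image from a grayscale frame in [0, 1].

    Each phosphene is a Gaussian blob whose brightness and size depend on the
    mean intensity of the corresponding image patch, remapped to a customised
    output dynamic range (illustrative parameters, not the paper's).
    """
    h, w = image.shape
    gy, gx = grid
    ph, pw = h / gy, w / gx                      # patch size per phosphene
    lo, hi = out_range
    yy, xx = np.mgrid[0:h, 0:w]
    out = np.zeros((h, w))
    for i in range(gy):
        for j in range(gx):
            patch = image[int(i * ph):int((i + 1) * ph), int(j * pw):int((j + 1) * pw)]
            level = lo + (hi - lo) * patch.mean()              # customised dynamic range
            cy, cx = (i + 0.5) * ph, (j + 0.5) * pw            # phosphene centre
            sigma = sigma_scale * min(ph, pw) * (0.5 + level)  # size follows brightness
            blob = level * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
            out = np.maximum(out, blob)                        # composite overlapping blobs
    return out

# Example: render a synthetic horizontal gradient as a 35 x 30 phosphene pattern.
frame = np.tile(np.linspace(0.0, 1.0, 320), (240, 1))
simulated = render_phosphenes(frame)
```

Evaluating every blob over the full frame, as above, is far too slow for real-time use; restricting each blob to a small window around its centre, or moving the loop to the GPU, is the kind of change that makes the per-frame cost tractable.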


Journal of Neural Engineering | 2015

Mobility and low contrast trip hazard avoidance using augmented depth

Chris McCarthy; Janine Walker; Paulette Lieby; Adele F. Scott; Nick Barnes

OBJECTIVE: We evaluated a novel visual representation for current and near-term prosthetic vision. Augmented depth emphasizes ground obstacles and floor-wall boundaries in a depth-based visual representation. This is achieved by artificially increasing contrast between obstacles and the ground surface via a novel ground plane extraction algorithm specifically designed to preserve low-contrast ground-surface boundaries. APPROACH: The effectiveness of augmented depth was examined in human mobility trials against standard intensity-based (Intensity), depth-based (Depth) and random (Random) visual representations. Eight participants with normal vision used simulated prosthetic vision with 20 phosphenes and eight perceivable brightness levels to traverse a course with randomly placed small and low-contrast obstacles on the ground. MAIN RESULTS: The number of collisions was significantly reduced using augmented depth compared with the intensity, depth and random representations (48%, 44% and 72% fewer collisions, respectively). SIGNIFICANCE: These results indicate that augmented depth may enable safe mobility in the presence of low-contrast obstacles with current and near-term implants. This is the first demonstration that an augmentation of the scene that ensures key objects are visible may provide better outcomes for prosthetic vision.
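The ground plane idea can be sketched very simply (this is a simplification for illustration, not the paper's ground plane extraction algorithm): fit a plane to the depth values in an image region assumed to be ground, then brighten pixels whose depth deviates from that plane so that low-contrast trip hazards stand out. The bottom-of-frame ground assumption, the threshold and the boost level below are all illustrative.

```python
import numpy as np

def augment_depth(depth, ground_rows=0.4, height_thresh=0.05, boost=0.5):
    """Return an augmented depth image in [0, 1] in which pixels rising above a
    least-squares ground plane (fitted to the lower part of the frame) are brightened."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    gmask = ys > (1.0 - ground_rows) * h                 # assume the bottom rows are mostly ground
    A = np.c_[xs[gmask], ys[gmask], np.ones(int(gmask.sum()))]
    coeffs, *_ = np.linalg.lstsq(A, depth[gmask], rcond=None)   # fit z = a*x + b*y + c
    plane = coeffs[0] * xs + coeffs[1] * ys + coeffs[2]
    residual = plane - depth                             # positive where the scene is closer than the ground
    aug = depth / (depth.max() + 1e-9)                   # base depth representation
    obstacle = residual > height_thresh
    aug[obstacle] = np.clip(aug[obstacle] + boost, 0.0, 1.0)    # artificially increase contrast
    return aug

# Example: a depth ramp (ground) with a slightly nearer box standing on it.
ys, xs = np.mgrid[0:120, 0:160]
depth = 4.0 - 2.5 * (ys / 120.0)          # ground gets closer toward the bottom of the frame
depth[60:90, 70:100] -= 0.6               # a low-contrast obstacle just above the ground surface
augmented = augment_depth(depth)
```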


Journal of Neural Engineering | 2016

Vision function testing for a suprachoroidal retinal prosthesis: effects of image filtering

Nick Barnes; Adele F. Scott; Paulette Lieby; Matthew A. Petoe; Chris McCarthy; Ashley Stacey; Lauren N. Ayton; Nicholas C. Sinclair; Mohit N. Shivdasani; Nigel H. Lovell; Hugh J. McDermott; Janine Walker

OBJECTIVE: One strategy to improve the effectiveness of prosthetic vision devices is to process incoming images so that key information can be perceived by the user. This paper presents the first comprehensive results of vision function testing for a suprachoroidal retinal prosthetic device utilizing 20 stimulating electrodes. Further, we investigate whether image filtering can improve results on a light localization task for implanted participants compared with minimal vision processing. No controlled studies with implanted participants have yet investigated whether vision processing methods that are not task-specific can lead to improved results. APPROACH: Three participants with profound vision loss from retinitis pigmentosa were implanted with a suprachoroidal retinal prosthesis. All three completed multiple trials of a light localization test, and one participant completed multiple trials of acuity tests. The visual representations used were: Lanczos2 (a high-quality Nyquist band-limited downsampling filter); minimal vision processing (MVP); wide-view regional averaging filtering (WV); scrambled; and system off. MAIN RESULTS: Using Lanczos2, all three participants successfully completed the light localization task and obtained a significantly higher percentage of correct responses than with MVP ([Formula: see text]) or with the system off ([Formula: see text]). Further, in a preliminary result using Lanczos2, one participant successfully completed grating acuity and Landolt C tasks, and showed significantly better performance ([Formula: see text]) than with WV, scrambled and system off on the grating acuity task. SIGNIFICANCE: Participants successfully completed vision tasks using a 20-electrode suprachoroidal retinal prosthesis. Vision processing with a Nyquist band-limited image filter has shown an advantage for a light localization task. This result suggests that this and more advanced, targeted vision processing schemes may become important components of retinal prostheses to enhance performance. ClinicalTrials.gov identifier: NCT01503576.
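For concreteness, a Lanczos2 resampler (a = 2 Lanczos kernel, widened by the scale factor so it stays band-limited when shrinking) can be sketched as below; this is a generic implementation of the filter type named above, not the device's vision processing code, and the 5 x 4 target grid is only a notional layout for the 20 electrodes.

```python
import numpy as np

def lanczos2_kernel(x, a=2):
    """Lanczos kernel L(x) = sinc(x) * sinc(x / a) for |x| < a, else 0."""
    x = np.asarray(x, dtype=float)
    k = np.sinc(x) * np.sinc(x / a)
    k[np.abs(x) >= a] = 0.0
    return k

def lanczos2_downsample_1d(signal, out_len):
    """Resample a 1-D signal to out_len samples with a Lanczos2 low-pass kernel."""
    n = len(signal)
    scale = n / out_len                                  # > 1 when shrinking
    out = np.empty(out_len)
    for i in range(out_len):
        c = (i + 0.5) * scale - 0.5                      # output sample centre in input coords
        taps = np.arange(int(np.floor(c - 2 * scale)), int(np.ceil(c + 2 * scale)) + 1)
        weights = lanczos2_kernel((taps - c) / scale)    # kernel widened by the scale factor
        samples = signal[np.clip(taps, 0, n - 1)]        # clamp taps at the borders
        out[i] = np.dot(weights, samples) / (weights.sum() + 1e-12)
    return out

def lanczos2_downsample_2d(image, out_shape):
    """Separable 2-D Lanczos2 downsampling: filter rows, then columns."""
    tmp = np.stack([lanczos2_downsample_1d(row, out_shape[1]) for row in image])
    return np.stack([lanczos2_downsample_1d(col, out_shape[0]) for col in tmp.T]).T

# Example: reduce a 240 x 320 frame to 20 values (a notional 5 x 4 electrode grid).
frame = np.random.rand(240, 320)
electrode_values = lanczos2_downsample_2d(frame, (5, 4))
```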


International Conference on Automation, Robotics and Applications | 2000

Cooperative multi-agent mapping and exploration in Webots®

Adele F. Scott; Changbin Yu

This paper addresses the problem of mapping and exploration of an unknown space by cooperative multi-agent systems. The exploration problem is extended to jointly covering an area n times. Agents are localised and can communicate with other agents. A probability-based mapping algorithm is developed. Based upon this, a potential-field-based exploration algorithm is proposed, with three sample charge profiles that can be used for different mission requirements. Simulation results in Webots confirm the scalability and effectiveness of these algorithms. This research justifies and prepares for a full trial in a multi-robot testbed.
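A minimal sketch of the two ingredients named above, under simplifying assumptions stated in the comments: a log-odds probability map that also counts coverage, and a potential-field step whose attractive and repulsive terms stand in for the charge profiles. The grid size, sensor model and inverse-square repulsion are illustrative choices, not the paper's parameters.

```python
import numpy as np

class ProbabilityMap:
    """Log-odds occupancy grid that also counts how often each cell has been covered."""
    def __init__(self, size=(50, 50), l_occ=0.85, l_free=-0.4):
        self.logodds = np.zeros(size)
        self.visits = np.zeros(size, dtype=int)
        self.l_occ, self.l_free = l_occ, l_free

    def update(self, cell, occupied):
        self.logodds[cell] += self.l_occ if occupied else self.l_free
        self.visits[cell] += 1

    def occupancy(self):
        return 1.0 / (1.0 + np.exp(-self.logodds))       # back to probabilities

def potential_step(pos, others, pmap, n_target=2, k_attr=1.0, k_rep=4.0):
    """One potential-field step for an agent at `pos`: attraction toward the nearest
    cell covered fewer than n_target times, inverse-square repulsion from the others."""
    pos = np.asarray(pos, dtype=float)
    under = np.argwhere(pmap.visits < n_target)
    if len(under) == 0:
        return np.zeros(2)                               # every cell covered n_target times
    goal = under[np.argmin(np.linalg.norm(under - pos, axis=1))]
    force = k_attr * (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    for other in others:
        d = pos - np.asarray(other, dtype=float)
        force += k_rep * d / (np.linalg.norm(d) ** 3 + 1e-9)
    return force / (np.linalg.norm(force) + 1e-9)        # unit step direction

# Example: two agents sharing one map; the first takes a step away from the second.
pmap = ProbabilityMap()
pmap.update((10, 12), occupied=True)
step = potential_step((5.0, 5.0), [(20.0, 20.0)], pmap)
```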


International Conference of the IEEE Engineering in Medicine and Biology Society | 2012

The role of vision processing in prosthetic vision

Nick Barnes; Xuming He; Chris McCarthy; Lachlan Horne; Junae Kim; Adele F. Scott; Paulette Lieby

Prosthetic vision provides vision that is reduced in resolution and dynamic range compared with normal human vision. This comes about both from residual damage to the visual system caused by the condition that led to vision loss, and from limitations of current technology. Even with these limitations, however, prosthetic vision may still support functional performance sufficient for tasks that are key to restoring independent living and quality of life. Here vision processing can play a key role, ensuring that information critical to the performance of key tasks is available within the capability of the available prosthetic vision. In this paper, we frame vision processing for prosthetic vision, highlight some key areas that present problems in terms of quality of life, and present examples where vision processing can help achieve better outcomes.


Vision Research | 2017

Face identity recognition in simulated prosthetic vision is poorer than previously reported and can be improved by caricaturing

Jessica Irons; Tamara Gradden; Angel Zhang; Xuming He; Nick Barnes; Adele F. Scott; Elinor McKone

The visual prosthesis (or "bionic eye") has become a reality but provides a low-resolution view of the world. Previous studies simulating prosthetic vision in normal-vision observers report good face recognition ability using tasks that allow recognition to be achieved on the basis of information that survives low resolution well, including basic category (sex, age) and extra-face information (hairstyle, glasses). Here, we test within-category individuation for face-only information (e.g., distinguishing between multiple Caucasian young men with hair covered). Under these conditions, recognition was poor (although above chance) even for a simulated 40 x 40 array with all phosphene elements assumed functional, a resolution above the upper end of current-generation prosthetic implants. This indicates that a significant challenge is to develop methods to improve face identity recognition. Inspired by "bionic ear" improvements achieved by altering signal input to match high-level perceptual (speech) requirements, we test a high-level perceptual enhancement of face images, namely face caricaturing (exaggerating identity information away from an average face). Results show caricaturing improved identity recognition in memory and/or perception (the degree to which two faces look dissimilar) down to a resolution of 32 x 32 with 30% phosphene dropout. Findings imply caricaturing may offer benefits for patients at resolutions realistic for some current-generation or in-development implants.
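The caricaturing manipulation itself is compact; as a hedged sketch, with the face representation (for example landmark coordinates or an appearance vector) and the exaggeration level treated as assumptions, it amounts to scaling a face's deviation from an average face:

```python
import numpy as np

def caricature(face_vec, average_vec, level=0.6):
    """Exaggerate identity by `level`: 0 returns the original face, 0.6 pushes every
    feature 60% further away from the average face."""
    face_vec = np.asarray(face_vec, dtype=float)
    average_vec = np.asarray(average_vec, dtype=float)
    return average_vec + (1.0 + level) * (face_vec - average_vec)

# Toy example with a 4-dimensional feature vector.
avg = np.zeros(4)
face = np.array([0.2, -0.1, 0.05, 0.3])
print(caricature(face, avg, level=0.6))   # each deviation from the average scaled by 1.6
```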


Investigative Ophthalmology & Visual Science | 2017

Determining the Contribution of Retinotopic Discrimination to Localization Performance With a Suprachoroidal Retinal Prosthesis

Matthew A. Petoe; Chris McCarthy; Mohit N. Shivdasani; Nicholas C. Sinclair; Adele F. Scott; Lauren N. Ayton; Nick Barnes; Robyn H. Guymer; Penelope J. Allen; Peter J. Blamey

Purpose: With a retinal prosthesis connected to a head-mounted camera, subjects can perform low vision tasks using a combination of electrode discrimination and head-directed localization. The objective of the present study was to investigate the contribution of retinotopic electrode discrimination (perception corresponding to the arrangement of the implanted electrodes with respect to their position beneath the retina) to visual performance for three recipients of a 24-channel suprachoroidal retinal implant. Proficiency in retinotopic discrimination may allow good performance with smaller head movements, and identification of this ability would be useful for targeted rehabilitation. Methods: Three participants with retinitis pigmentosa performed localization and grating acuity assessments using a suprachoroidal retinal prosthesis. We compared retinotopic and nonretinotopic electrode mapping and hypothesized that participants with measurable acuity in a normal retinotopic condition would be negatively impacted by the nonretinotopic condition. We also expected that participants without measurable acuity would preferentially use head movement over retinotopic information. Results: Only one participant was able to complete the grating acuity task. In the localization task, this participant exhibited significantly greater head movements and significantly lower localization scores when using the nonretinotopic electrode mapping. There was no significant difference in localization performance or head movement for the remaining two subjects when comparing retinotopic to nonretinotopic electrode mapping. Conclusions: Successful discrimination of retinotopic information is possible with a suprachoroidal retinal prosthesis. Head movement behavior during a localization task can be modified using a nonretinotopic mapping. Behavioral comparisons using retinotopic and nonretinotopic electrode mapping may be able to highlight deficiencies in retinotopic discrimination, with a view to addressing these deficiencies in a rehabilitation environment. (ClinicalTrials.gov number, NCT01603576.)
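The retinotopic versus nonretinotopic manipulation can be pictured as swapping the lookup table from image regions to electrodes, under the simplifying assumption that stimulation is driven by such a table; the identity map and the fixed random permutation below are illustrative, not the device's fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_electrodes = 24                                    # 24-channel suprachoroidal array
retinotopic_map = np.arange(n_electrodes)            # region i drives electrode i
nonretinotopic_map = rng.permutation(n_electrodes)   # same electrodes, spatially scrambled

def stimulate(region_levels, electrode_map):
    """Route per-region stimulation levels onto electrodes via the chosen mapping."""
    levels = np.zeros(n_electrodes)
    levels[electrode_map] = np.asarray(region_levels, dtype=float)
    return levels

# The same camera-derived region levels produce differently arranged percepts
# under the two mappings.
regions = np.linspace(0.0, 1.0, n_electrodes)
retinotopic_levels = stimulate(regions, retinotopic_map)
scrambled_levels = stimulate(regions, nonretinotopic_map)
```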


Australian Communications Theory Workshop | 2007

Effects of Beamforming on the Connectivity of Ad Hoc Networks

Xiangyun Zhou; Haley M. Jones; Salman Durrani; Adele F. Scott


Annual Meeting of the Association for Research in Vision and Ophthalmology (ARVO), Fort Lauderdale, Florida, United States, 6–10 May 2012 | 2012

Low contrast trip hazard avoidance with simulated prosthetic vision

Chris McCarthy; Paulette Lieby; Janine Walker; Adele F. Scott; Viorica Botea; Nick Barnes


Investigative Ophthalmology & Visual Science | 2014

Vision Processing with Lanczos2 Improves Low Vision Test Results in Implanted Visual Prosthetic Patients

Nick Barnes; Adele F. Scott; Ashley Stacey; Paulette Lieby; Chris McCarthy; Matthew A. Petoe; Lauren N. Ayton; Mohit N. Shivdasani; Nicholas C. Sinclair; Janine Walker

Collaboration


Top co-authors of Adele F. Scott.

Nick Barnes (Australian National University)
Xuming He (Australian National University)
Angel Zhang (Australian National University)