Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Chris McCarthy is active.

Publication


Featured research published by Chris McCarthy.


Computer Vision and Pattern Recognition | 2016

Local Background Enclosure for RGB-D Salient Object Detection

David Feng; Nick Barnes; Shaodi You; Chris McCarthy

Recent work in salient object detection has considered the incorporation of depth cues from RGB-D images. In most cases, depth contrast is used as the main feature. However, areas of high contrast in background regions cause false positives for such methods, as the background frequently contains regions that are highly variable in depth. Here, we propose Local Background Enclosure (LBE), a novel RGB-D saliency feature that captures the spread of angular directions that are background with respect to the candidate region and the object it is part of. We show that our feature improves over state-of-the-art RGB-D saliency approaches, as well as RGB methods, on the RGBD1000 and NJUDS2000 datasets.
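
The enclosure idea lends itself to a compact illustration: for a candidate point, count the fraction of angular directions in which the surrounding scene is deeper, and therefore background. The Python sketch below is a simplified, hypothetical rendering of that intuition (the function name, ray-sampling scheme and margin parameter are assumptions), not the authors' published algorithm.

```python
import numpy as np

def local_background_enclosure(depth, cy, cx, radius=15, n_angles=32, margin=0.05):
    """Toy sketch of the LBE intuition: the fraction of angular directions
    around (cy, cx) whose pixels lie deeper than the candidate point,
    i.e. form background enclosing the object. All names are illustrative."""
    h, w = depth.shape
    d0 = depth[cy, cx]
    enclosed = 0
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        rs = np.arange(1, radius + 1)  # sample a ray leaving the candidate
        ys = np.clip((cy + rs * np.sin(theta)).astype(int), 0, h - 1)
        xs = np.clip((cx + rs * np.cos(theta)).astype(int), 0, w - 1)
        # A direction counts as background if it is, on average, deeper.
        if depth[ys, xs].mean() > d0 + margin:
            enclosed += 1
    return enclosed / n_angles  # high values suggest salient foreground
```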


Monthly Notices of the Royal Astronomical Society | 2017

Finding strong lenses in CFHTLS using convolutional neural networks

Colin Jacobs; Karl Glazebrook; Thomas E. Collett; Anupreeta More; Chris McCarthy

We train and apply convolutional neural networks, a machine learning technique developed to learn from and classify image data, to Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) imaging for the identification of potential strong lensing systems. An ensemble of four convolutional neural networks was trained on images of simulated galaxy-galaxy lenses. The training sets consisted of a total of 62,406 simulated lenses and 64,673 non-lens negative examples generated with two different methodologies. The networks were able to learn the features of simulated lenses with an accuracy of up to 99.8% and a purity and completeness of 94-100% on a test set of 2000 simulations. An ensemble of trained networks was applied to all 171 square degrees of the CFHTLS wide field image data, identifying 18,861 candidates including 63 known and 139 other potential lens candidates. A second search of 1.4 million early-type galaxies selected from the survey catalog as potential deflectors identified 2,465 candidates, including 117 previously known lens candidates, 29 confirmed lenses/high-quality lens candidates, 266 novel probable or potential lenses and 2,097 candidates we classify as false positives. For the catalog-based search we estimate a completeness of 21-28% with respect to detectable lenses and a purity of 15%, with a false-positive rate of 1 in 671 images tested. We predict that a human astronomer reviewing candidates produced by the system would identify ~20 probable lenses and 100 possible lenses per hour in a sample selected by the robot. Convolutional neural networks are therefore a promising tool for use in the search for lenses in current and forthcoming surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope.
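
For readers unfamiliar with the technique, a binary lens/non-lens classifier of the kind described can be sketched in a few lines of PyTorch. The layer sizes, single-band input and ensemble averaging below are assumptions for illustration, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class LensCNN(nn.Module):
    """Illustrative lens/non-lens classifier; the layer sizes and
    single-band input are assumptions, not the published network."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x):
        # Output is a per-image score interpreted as P(strong lens).
        return torch.sigmoid(self.head(self.features(x)))

def ensemble_score(models, batch):
    """Average the scores of several independently trained networks."""
    return torch.stack([m(batch) for m in models]).mean(dim=0)
```

In the pipeline the abstract describes, candidates scoring highly would then be passed to a human astronomer for review; the sketch stops at the score.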


Computer Vision and Image Understanding | 2016

Semantic labeling for prosthetic vision

Lachlan Horne; Jose M. Alvarez; Chris McCarthy; Mathieu Salzmann; Nick Barnes

Current and near-term implantable prosthetic vision systems offer the potential to restore some visual function, but suffer from limited resolution and dynamic range of induced visual percepts. This can make navigating complex environments difficult for users. We introduce semantic labeling as a technique to improve navigation outcomes for prosthetic vision users. We produce a novel egocentric vision dataset to demonstrate how semantic labeling can be applied to this problem. We also improve the speed of semantic labeling with sparse computation of unary potentials, enabling its use in real-time wearable assistive devices. We use simulated prosthetic vision to demonstrate the results of our technique. Our approach allows a prosthetic vision system to selectively highlight specific classes of objects in the user’s field of view, improving the user’s situational awareness.
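
The sparse-computation idea mentioned in the abstract can be sketched as follows: evaluate the expensive per-pixel class scores only on a coarse grid, then interpolate back to full resolution. The snippet is a hypothetical illustration (the placeholder unary_fn and stride are assumptions), not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def sparse_unaries(image, unary_fn, stride=8):
    """Evaluate costly unary potentials on a coarse grid only, then
    upsample -- the sparse-computation idea in outline.
    unary_fn(pixel) -> class-score vector is a stand-in for a real
    per-pixel classifier operating on local features."""
    h, w = image.shape[:2]
    ys, xs = np.arange(0, h, stride), np.arange(0, w, stride)
    coarse = np.stack([[unary_fn(image[y, x]) for x in xs] for y in ys])
    # Upsample back to roughly (h, w, n_classes); order=1 is bilinear.
    return zoom(coarse, (h / coarse.shape[0], w / coarse.shape[1], 1), order=1)
```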


Australasian Computer-Human Interaction Conference | 2015

Robots in Rehab: Towards socially assistive robots for paediatric rehabilitation

Chris McCarthy; Jo Butchart; Michael George; Dee Kerr; Hugh Kingsley; Adam Scheinberg; Leon Sterling

There is increasing interest in the use of socially assistive robots to enhance therapeutic outcomes in paediatric health care. In this paper we report on experiential use of the humanoid NAO robot (Aldebaran Robotics) in a paediatric rehabilitation setting. This forms part of a proposed study assessing the clinical benefits of introducing the NAO as a therapeutic intervention in the rehabilitation program of children with cerebral palsy (CP). This study is in partnership with the rehabilitation clinic of Melbourne's Royal Children's Hospital. Drawing on five months of regular weekly engagement with the rehabilitation clinic, we propose roles and supporting capabilities for the NAO that aim to enhance rehabilitation outcomes for patients and support the typical workflow of therapists. We provide an overview of development work conducted to support these roles, and discuss future work and technical challenges to be addressed in preparation for the study.


Investigative Ophthalmology & Visual Science | 2017

Determining the Contribution of Retinotopic Discrimination to Localization Performance With a Suprachoroidal Retinal Prosthesis

Matthew A. Petoe; Chris McCarthy; Mohit N. Shivdasani; Nicholas C. Sinclair; Adele F. Scott; Lauren N. Ayton; Nick Barnes; Robyn H. Guymer; Penelope J. Allen; Peter J. Blamey

Purpose: With a retinal prosthesis connected to a head-mounted camera, subjects can perform low vision tasks using a combination of electrode discrimination and head-directed localization. The objective of the present study was to investigate the contribution of retinotopic electrode discrimination (perception corresponding to the arrangement of the implanted electrodes with respect to their position beneath the retina) to visual performance for three recipients of a 24-channel suprachoroidal retinal implant. Proficiency in retinotopic discrimination may allow good performance with smaller head movements, and identification of this ability would be useful for targeted rehabilitation.

Methods: Three participants with retinitis pigmentosa performed localization and grating acuity assessments using a suprachoroidal retinal prosthesis. We compared retinotopic and nonretinotopic electrode mapping and hypothesized that participants with measurable acuity in a normal retinotopic condition would be negatively impacted by the nonretinotopic condition. We also expected that participants without measurable acuity would preferentially use head movement over retinotopic information.

Results: Only one participant was able to complete the grating acuity task. In the localization task, this participant exhibited significantly greater head movements and significantly lower localization scores when using the nonretinotopic electrode mapping. There was no significant difference in localization performance or head movement for the remaining two subjects when comparing retinotopic to nonretinotopic electrode mapping.

Conclusions: Successful discrimination of retinotopic information is possible with a suprachoroidal retinal prosthesis. Head movement behavior during a localization task can be modified using a nonretinotopic mapping. Behavioral comparisons using retinotopic and nonretinotopic electrode mapping may be able to highlight deficiencies in retinotopic discrimination, with a view to address these deficiencies in a rehabilitation environment. (ClinicalTrials.gov number, NCT01603576).
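
The experimental manipulation reduces to how camera regions are routed to electrodes: retinotopic mapping preserves the spatial correspondence between image and percept, while nonretinotopic mapping scrambles it. A toy numpy sketch (the 24-channel count comes from the abstract; everything else is hypothetical):

```python
import numpy as np

N_ELECTRODES = 24  # the implant described has 24 stimulating channels
rng = np.random.default_rng(0)

# Retinotopic condition: camera region i drives electrode i.
retinotopic_map = np.arange(N_ELECTRODES)
# Nonretinotopic condition: the same regions drive shuffled electrodes,
# breaking the correspondence between image location and percept location.
nonretinotopic_map = rng.permutation(N_ELECTRODES)

def stimulate(region_activations, electrode_map):
    """Route per-region camera activations to electrode amplitudes."""
    amplitudes = np.zeros(N_ELECTRODES)
    amplitudes[electrode_map] = region_activations
    return amplitudes
```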


Australasian Computer-Human Interaction Conference | 2016

Help me help you: a human-assisted social robot in paediatric rehabilitation

F. Martí Carrillo; Joanna Butchart; Sarah Knight; Adam Scheinberg; Lisa Wise; Leon Sterling; Chris McCarthy

Socially assistive robots show great potential for boosting therapeutic outcomes in children undergoing intensive rehabilitation. However, the introduction of an additional interactive presence also imposes new demands on the therapist. In this preliminary study we explore the time costs and issues associated with the inclusion of a semi-autonomous assistive robot in paediatric rehabilitation sessions.


Human-Robot Interaction | 2017

In-Situ Design and Development of a Socially Assistive Robot for Paediatric Rehabilitation

Felip Martí Carrillo; Jo Butchart; Sarah Knight; Adam Scheinberg; Lisa Wise; Leon Sterling; Chris McCarthy

We present the in-situ design and development of a general purpose social robot (NAO) as a therapeutic aid for paediatric rehabilitation. We describe our two-phase design approach, emphasising frequent patient/parent/therapist engagement, and outline roles and requirements for our SAR prototype derived from this process. Our SAR prototype has now been deployed in the rehabilitation programs of 9 patients with cerebral palsy, across 14 sessions, where evaluation and iterative development are ongoing.


BMJ Open | 2017

Optimising technology to measure functional vision, mobility and service outcomes for people with low vision or blindness: protocol for a prospective cohort study in Australia and Malaysia

Lil Deverell; Denny Meyer; Bee Theng Lau; Abdullah Al Mahmud; Suku Sukunesan; Jahar Lal Bhowmik; Almon Chai; Chris McCarthy; Pan Zheng; Andrew Pipingas; Fakir M. Amirul Islam

Introduction: Orientation and mobility (O&M) specialists assess the functional vision and O&M skills of people with mobility problems, usually relating to low vision or blindness. There are numerous O&M assessment checklists but no measures that reduce qualitative assessment data to a single comparable score suitable for assessing any O&M client, of any age or ability, in any location. Functional measures are needed internationally to align O&M assessment practices, guide referrals, profile O&M clients, plan appropriate services and evaluate outcomes from O&M programmes (eg, long cane training), assistive technology (eg, hazard sensors) and medical interventions (eg, retinal implants). This study aims to validate two new measures of functional performance, vision-related outcomes in orientation and mobility (VROOM) and orientation and mobility outcomes (OMO), in the context of ordinary O&M assessments in Australia, with cultural comparisons in Malaysia, while also developing phone apps and online training to streamline professional assessment practices.

Methods and analysis: This multiphase observational study will employ embedded mixed methods with a qualitative/quantitative priority: co-rating functional vision and O&M during social inquiry. Australian O&M agencies (n=15) provide the sampling frame. O&M specialists will use quota sampling to generate cross-sectional assessment data (n=400) before investigating selected cohorts in outcome studies. The cultural relevance of the VROOM and OMO tools will be investigated in Malaysia, where the tools will inform the design of assistive devices and evaluate prototypes. Exploratory and confirmatory factor analysis, Rasch modelling, cluster analysis and analysis of variance will be undertaken, along with descriptive analysis of measurement data. Qualitative findings will be used to interpret VROOM and OMO scores, filter statistically significant results, warrant their generalisability and identify additional relevant constructs that could also be measured.

Ethics and dissemination: Ethical approval has been granted by the Human Research Ethics Committee at Swinburne University (SHR Project 2016/316). Dissemination of results will be via agency reports, journal articles and conference presentations.
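
Of the planned analyses, exploratory factor analysis is the easiest to illustrate. The scikit-learn snippet below runs it on randomly generated stand-in data (the item count and factor count are assumptions); it is a schematic of one step in the stated analysis plan, not the study's code.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Stand-in data: 400 assessments x 10 hypothetical VROOM/OMO item scores.
rng = np.random.default_rng(42)
scores = rng.normal(size=(400, 10))

# Exploratory factor analysis: do the items load onto a small number of
# latent functional-performance factors?
fa = FactorAnalysis(n_components=2).fit(scores)
print(fa.components_.round(2))  # per-item loadings on each latent factor
```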


Urban Forestry & Urban Greening | 2018

How eye-catching are natural features when walking through a park? Eye-tracking responses to videos of walks

Marco Amati; Ebadat Parmehr; Chris McCarthy; Jodi Sita



Collaboration


Dive into Chris McCarthy's collaboration.

Top Co-Authors

Leon Sterling, Swinburne University of Technology
Adam Scheinberg, Royal Children's Hospital
Jodi Sita, Australian Catholic University
Nick Barnes, Australian National University
Jo Butchart, Royal Children's Hospital
Lisa Wise, Swinburne University of Technology
Sarah Knight, University of Melbourne
Abdullah Al Mahmud, Swinburne University of Technology
Almon Chai, Swinburne University of Technology