Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Steven McDonagh is active.

Publication


Featured research published by Steven McDonagh.


PLOS Computational Biology | 2015

Laminar and dorsoventral molecular organization of the medial entorhinal cortex revealed by large-scale anatomical analysis of gene expression.

Helen L. Ramsden; Gülşen Sürmeli; Steven McDonagh; Matthew F. Nolan

Neural circuits in the medial entorhinal cortex (MEC) encode an animal’s position and orientation in space. Within the MEC, spatial representations, including grid and directional firing fields, have a laminar and dorsoventral organization that corresponds to a similar topography of neuronal connectivity and cellular properties. Yet, in part due to the challenges of integrating anatomical data at the resolution of cortical layers and borders, we know little about the molecular components underlying this organization. To address this, we develop a new computational pipeline for high-throughput analysis and comparison of in situ hybridization (ISH) images at laminar resolution. We apply this pipeline to ISH data for over 16,000 genes in the Allen Brain Atlas and validate our analysis with RNA sequencing of MEC tissue from adult mice. We find that differential gene expression delineates the borders of the MEC with neighboring brain structures and reveals its laminar and dorsoventral organization. We propose a new molecular basis for distinguishing the deep layers of the MEC and show that their similarity to corresponding layers of neocortex is greater than that of superficial layers. Our analysis identifies ion channel-, cell adhesion- and synapse-related genes as candidates for functional differentiation of MEC layers and for encoding of spatial information at different scales along the dorsoventral axis of the MEC. We also reveal laminar organization of genes related to disease pathology and suggest that a high metabolic demand predisposes layer II to neurodegenerative pathology. In principle, our computational pipeline can be applied to high-throughput analysis of many forms of neuroanatomical data. Our results support the hypothesis that differences in gene expression contribute to functional specialization of superficial layers of the MEC and dorsoventral organization of the scale of spatial representations.
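
A minimal sketch of the kind of laminar differential-expression comparison described above: rank genes by how strongly their measured intensities separate two layers. The gene names, intensity values and the per-gene Welch t-test are illustrative stand-ins; the actual pipeline operates on registered ISH images at laminar resolution rather than on a toy matrix.

```python
import numpy as np
from scipy import stats

# Hypothetical per-layer ISH intensity measurements for a few genes
# (rows: genes, columns: samples assigned to a given MEC layer).
genes = ["geneA", "geneB", "geneC", "geneD"]
layer2 = np.array([[5.1, 4.8, 5.3], [2.0, 2.2, 1.9], [4.4, 4.6, 4.1], [0.7, 0.9, 0.8]])
layer5 = np.array([[1.2, 1.0, 1.4], [2.1, 1.8, 2.3], [1.0, 1.2, 0.9], [3.5, 3.8, 3.6]])

# Rank genes by how well they discriminate the two layers
# (a per-gene Welch t-test as a stand-in for the full analysis).
for gene, a, b in zip(genes, layer2, layer5):
    t, p = stats.ttest_ind(a, b, equal_var=False)
    print(f"{gene}: t={t:.2f}, p={p:.3g}")
```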


British Machine Vision Conference | 2013

Point Light Source Estimation based on Scenes Recorded by a RGB-D camera.

Bastiaan Johannes Boom; Sergio Orts-Escolano; Xin X. Ning; Steven McDonagh; Peter Sandilands; Robert B. Fisher

Affiliations: Institute of Perception, Action and Behaviour, University of Edinburgh, Edinburgh, UK (B. J. Boom, X. X. Ning, S. McDonagh, P. Sandilands, R. B. Fisher); Computer Technology Department, University of Alicante, Alicante, Spain (S. Orts-Escolano).


International Conference on Computer Graphics and Interactive Techniques | 2016

Adaptive polynomial rendering

Bochang Moon; Steven McDonagh; Kenny Mitchell; Markus H. Gross

In this paper, we propose a new adaptive rendering method to improve the performance of Monte Carlo ray tracing, by reducing noise contained in rendered images while preserving high-frequency edges. Our method locally approximates an image with polynomial functions and the optimal order of each polynomial function is estimated so that our reconstruction error can be minimized. To robustly estimate the optimal order, we propose a multi-stage error estimation process that iteratively estimates our reconstruction error. In addition, we present an energy-preserving outlier removal technique to remove spike noise without causing noticeable energy loss in our reconstruction result. Also, we adaptively allocate additional ray samples to high-error regions guided by our error estimation. We demonstrate that our approach outperforms state-of-the-art methods by controlling the tradeoff between reconstruction bias and variance through locally defining our polynomial order, even without the need for filtering bandwidth optimization, the common approach of other recent methods.
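
A toy 1-D sketch of the order-selection idea: fit reconstructions of increasing polynomial order to a noisy local window and keep the order whose estimated error is lowest. The hold-out error estimate here is a simplification standing in for the paper's multi-stage estimator, and all values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 32)
signal = np.where(x < 0.5, 0.2, 0.8)            # an "edge" inside the local window
noisy = signal + rng.normal(0.0, 0.05, x.size)  # stand-in for Monte Carlo noise

# Split the window into fit/validation halves and keep the polynomial order
# with the smallest validation error (a simple proxy for the error estimator).
fit_idx, val_idx = np.arange(0, x.size, 2), np.arange(1, x.size, 2)
best_order, best_err = None, np.inf
for order in range(5):
    coeffs = np.polyfit(x[fit_idx], noisy[fit_idx], order)
    err = np.mean((np.polyval(coeffs, x[val_idx]) - noisy[val_idx]) ** 2)
    if err < best_err:
        best_order, best_err = order, err
print(f"selected order: {best_order}, validation MSE: {best_err:.4f}")
```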


arXiv: Computer Vision and Pattern Recognition | 2017

Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation

Konstantinos Kamnitsas; Wenjia Bai; Enzo Ferrante; Steven McDonagh; Matthew Sinclair; Nick Pawlowski; Martin Rajchl; Matthew C. H. Lee; Bernhard Kainz; Daniel Rueckert; Ben Glocker

Deep learning approaches such as convolutional neural nets have consistently outperformed previous methods on challenging tasks such as dense semantic segmentation. However, the various proposed networks perform differently, with behaviour largely influenced by architectural choices and training settings. This paper explores Ensembles of Multiple Models and Architectures (EMMA) for robust performance through aggregation of predictions from a wide range of methods. The approach reduces the influence of the meta-parameters of individual models and the risk of overfitting the configuration to a particular database. EMMA can be seen as an unbiased, generic deep learning model which is shown to yield excellent performance, taking first place in the BRATS 2017 competition among 50+ participating teams.
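
The aggregation at the heart of EMMA can be pictured as averaging the per-class confidence maps of several models before taking the arg-max. The array shapes and random "predictions" below are placeholders, not the actual BRATS configuration.

```python
import numpy as np

# Hypothetical class-probability maps from three segmentation models,
# each shaped (num_classes, depth, height, width).
rng = np.random.default_rng(0)
model_outputs = [
    rng.dirichlet(np.ones(4), size=(8, 16, 16)).transpose(3, 0, 1, 2)
    for _ in range(3)
]

# Ensemble step: average the class confidences across models, then take the
# arg-max over classes to obtain the final label map.
ensemble_probs = np.mean(model_outputs, axis=0)
segmentation = np.argmax(ensemble_probs, axis=0)
print(segmentation.shape)  # (8, 16, 16)
```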


Medical Image Computing and Computer-Assisted Intervention | 2017

Predicting slice-to-volume transformation in presence of arbitrary subject motion

Benjamin Hou; Amir Alansary; Steven McDonagh; Alice Davidson; Mary A. Rutherford; Joseph V. Hajnal; Daniel Rueckert; Ben Glocker; Bernhard Kainz

This paper aims to solve a fundamental problem in intensity-based 2D/3D registration, which concerns the limited capture range and need for very good initialization of state-of-the-art image registration methods. We propose a regression approach that learns to predict rotations and translations of arbitrary 2D image slices from 3D volumes, with respect to a learned canonical atlas co-ordinate system. To this end, we utilize Convolutional Neural Networks (CNNs) to learn the highly complex regression function that maps 2D image slices into their correct position and orientation in 3D space. Our approach is attractive in challenging imaging scenarios, where significant subject motion complicates reconstruction performance of 3D volumes from 2D slice data. We extensively evaluate the effectiveness of our approach quantitatively on simulated MRI brain data with extreme random motion. We further demonstrate qualitative results on fetal MRI where our method is integrated into a full reconstruction and motion compensation pipeline. With our CNN regression approach we obtain an average prediction error of 7 mm on simulated data, and convincing reconstruction quality of images of very young fetuses where previous methods fail. We further discuss applications to Computed Tomography (CT) and X-Ray projections. Our approach is a general solution to the 2D/3D initialization problem. It is computationally efficient, with prediction times per slice of a few milliseconds, making it suitable for real-time scenarios.
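
A minimal sketch of the kind of CNN regressor described, mapping a single 2-D slice to six rigid-transform parameters (three rotations, three translations). The architecture and layer sizes are illustrative only and are not the network used in the paper.

```python
import torch
import torch.nn as nn

class SliceToRigidNet(nn.Module):
    """Toy CNN regressing 6 rigid-transform parameters from one 2-D slice."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 6)  # 3 rotation + 3 translation parameters

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Example: a batch of 4 single-channel 64x64 slices.
net = SliceToRigidNet()
params = net(torch.randn(4, 1, 64, 64))
print(params.shape)  # torch.Size([4, 6])
```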


IEEE Transactions on Medical Imaging | 2017

PVR: Patch-to-Volume Reconstruction for Large Area Motion Correction of Fetal MRI

Amir Alansary; Martin Rajchl; Steven McDonagh; Maria Murgasova; Mellisa Damodaram; David F. A. Lloyd; Alice Davidson; Mary A. Rutherford; Joseph V. Hajnal; Daniel Rueckert; Bernhard Kainz

In this paper, we present a novel method for the correction of motion artifacts that are present in fetal magnetic resonance imaging (MRI) scans of the whole uterus. Contrary to current slice-to-volume registration (SVR) methods, requiring an inflexible anatomical enclosure of a single investigated organ, the proposed patch-to-volume reconstruction (PVR) approach is able to reconstruct a large field of view of non-rigidly deforming structures. It relaxes rigid motion assumptions by introducing a specific amount of redundant information that is exploited with parallelized patchwise optimization, super-resolution, and automatic outlier rejection. We further describe and provide an efficient parallel implementation of PVR allowing its execution within reasonable time on commercially available graphics processing units, enabling its use in clinical practice. We evaluate PVR’s computational overhead compared with standard methods and observe improved reconstruction accuracy in the presence of affine motion artifacts compared with conventional SVR in synthetic experiments. Furthermore, we have evaluated our method qualitatively and quantitatively on real fetal MRI data subject to maternal breathing and sudden fetal movements. We evaluate peak-signal-to-noise ratio, structural similarity index, and cross correlation with respect to the originally acquired data and provide a method for visual inspection of reconstruction uncertainty. We further evaluate the distance error for selected anatomical landmarks in the fetal head, as well as the mean and maximum displacements resulting from automatic non-rigid registration to a motion-free ground truth image. These experiments demonstrate a successful application of PVR motion compensation to the whole fetal body, uterus, and placenta.
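
The move from whole slices to overlapping patches can be sketched as a simple tiling step; the patch size and stride are arbitrary choices here, and the actual PVR method adds parallel patchwise optimization, super-resolution and outlier rejection on top.

```python
import numpy as np

def extract_patches(slice_2d, patch=32, stride=16):
    """Split a 2-D slice into overlapping square patches (toy PVR-style split)."""
    h, w = slice_2d.shape
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(((y, x), slice_2d[y:y + patch, x:x + patch]))
    return patches

slice_2d = np.random.rand(128, 128)
patches = extract_patches(slice_2d)
print(len(patches), patches[0][1].shape)  # 49 (32, 32)
```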


International Conference on 3D Vision | 2016

Synthetic Prior Design for Real-Time Face Tracking

Steven McDonagh; Martin Klaudiny; Derek Bradley; Thabo Beeler; Iain A. Matthews; Kenny Mitchell

Real-time facial performance capture has recently been gaining popularity in virtual film production, driven by advances in machine learning, which allows for fast inference of facial geometry from video streams. These learning-based approaches are significantly influenced by the quality and amount of labelled training data. Tedious construction of training sets from real imagery can be replaced by rendering a facial animation rig under on-set conditions expected at runtime. We learn a synthetic actor-specific prior by adapting a state-of-the-art facial tracking method. Synthetic training significantly reduces the capture and annotation burden and in theory allows generation of an arbitrary amount of data. However, practical realities such as training time and compute resources still limit the size of any training set. We construct better and smaller training sets by investigating which facial image appearances are crucial for tracking accuracy, covering the dimensions of expression, viewpoint and illumination. A reduction of training data by 1-2 orders of magnitude is demonstrated whilst tracking accuracy is retained for challenging on-set footage.
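
Covering the expression, viewpoint and illumination dimensions of a synthetic training set can be sketched as sampling a parameter grid before rendering. The parameter names, ranges and the naive subsampling below are hypothetical; the paper instead identifies which appearance combinations actually matter for tracking accuracy.

```python
import itertools
import numpy as np

# Hypothetical grids over the three appearance dimensions discussed above.
expressions = ["neutral", "smile", "jaw_open", "brow_raise"]
yaw_degrees = np.linspace(-30, 30, 5)         # viewpoint (head yaw)
light_azimuths = np.linspace(0, 180, 4)       # illumination direction

# Full Cartesian product versus a smaller subset to render and label.
full_grid = list(itertools.product(expressions, yaw_degrees, light_azimuths))
reduced_grid = full_grid[::8]                 # naive subsampling for illustration
print(len(full_grid), len(reduced_grid))      # 80 10
```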


Computer Graphics Forum | 2017

Real-Time Multi-View Facial Capture with Synthetic Training

Martin Klaudiny; Steven McDonagh; Derek Bradley; Thabo Beeler; Kenny Mitchell

We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method is able to achieve high-quality markerless facial performance capture in real-time from multi-view helmet camera data, employing an actor-specific regressor. The regressor training is tailored to specified actor appearance and we further condition it for the expected illumination conditions and the physical capture rig by generating the training data synthetically. In order to leverage the information present in live imagery, which is typically provided by multiple cameras, we propose a novel multi-view regression algorithm that uses multi-dimensional random ferns. We show that higher quality can be achieved by regressing on multiple video streams than previous approaches that were designed to operate on only a single view. Furthermore, we evaluate possible camera placements and propose a novel camera configuration that allows cameras to be mounted outside the actor's field of view, which is very beneficial as the cameras are then less of a distraction for the actor and allow for an unobstructed line of sight to the director and other actors. Our new real-time facial capture approach has immediate application in on-set virtual production, in particular with the ever-growing demand for motion-captured facial animation in visual effects and video games.
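
A single random fern of the kind the regressor builds on can be sketched in a few lines: a handful of binary threshold tests index a bin, and each bin stores the mean of the training targets that fell into it. The multi-dimensional, multi-view aspects of the paper and its image features are omitted; all names and sizes here are illustrative.

```python
import numpy as np

class RandomFern:
    """Toy random fern: n_tests binary tests index 2**n_tests bins of mean targets."""
    def __init__(self, n_tests=4, rng=None):
        self.rng = rng if rng is not None else np.random.default_rng(0)
        self.n_tests = n_tests

    def fit(self, X, y):
        self.dims = self.rng.integers(0, X.shape[1], self.n_tests)
        self.thresholds = self.rng.uniform(X.min(), X.max(), self.n_tests)
        bins = self._bin_index(X)
        self.bin_means = np.zeros((2 ** self.n_tests, y.shape[1]))
        for b in range(2 ** self.n_tests):
            mask = bins == b
            if mask.any():
                self.bin_means[b] = y[mask].mean(axis=0)
        return self

    def _bin_index(self, X):
        bits = (X[:, self.dims] > self.thresholds).astype(int)
        return bits @ (2 ** np.arange(self.n_tests))

    def predict(self, X):
        return self.bin_means[self._bin_index(X)]

# Usage: regress 2-D targets from 10-D feature vectors.
rng = np.random.default_rng(1)
X, y = rng.random((200, 10)), rng.random((200, 2))
fern = RandomFern().fit(X, y)
print(fern.predict(X[:3]).shape)  # (3, 2)
```

In practice many ferns are trained and their outputs averaged; a single fern is shown only to keep the sketch short.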


IEEE Transactions on Medical Imaging | 2018

3-D Reconstruction in Canonical Co-Ordinate Space From Arbitrarily Oriented 2-D Images

Benjamin Hou; Bishesh Khanal; Amir Alansary; Steven McDonagh; Alice Davidson; Mary A. Rutherford; Joseph V. Hajnal; Daniel Rueckert; Ben Glocker; Bernhard Kainz

Limited capture range, and the requirement to provide high-quality initialization for optimization-based 2-D/3-D image registration methods, can significantly degrade the performance of 3-D image reconstruction and motion compensation pipelines. Challenging clinical imaging scenarios, which contain significant subject motion, such as fetal in-utero imaging, complicate the 3-D image and volume reconstruction process. In this paper, we present a learning-based image registration method capable of predicting 3-D rigid transformations of arbitrarily oriented 2-D image slices, with respect to a learned canonical atlas co-ordinate system. Only image slice intensity information is used to perform registration and canonical alignment; no spatial transform initialization is required. To find image transformations, we utilize a convolutional neural network architecture to learn the regression function capable of mapping 2-D image slices to a 3-D canonical atlas space. We extensively evaluate the effectiveness of our approach quantitatively on simulated magnetic resonance imaging (MRI) fetal brain imagery with synthetic motion and further demonstrate qualitative results on real fetal MRI data where our method is integrated into a full reconstruction and motion compensation pipeline. Our learning-based registration achieves an average spatial prediction error of 7 mm on simulated data and produces qualitatively improved reconstructions for heavily moving fetuses with gestational ages of approximately 20 weeks. Our model provides a general and computationally efficient solution to the 2-D/3-D registration initialization problem and is suitable for real-time scenarios.
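
The six predicted parameters can be assembled into a homogeneous rigid transform that places slice co-ordinates into the canonical atlas space. The Euler-angle convention below is one common choice and is not necessarily the parameterisation used in the paper.

```python
import numpy as np

def rigid_transform(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 rigid transform from Euler angles (radians) and a translation."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

# Map a point on the slice plane (homogeneous co-ordinates) into atlas space.
T = rigid_transform(0.1, -0.05, 0.2, 4.0, -1.5, 7.0)
print(T @ np.array([10.0, 20.0, 0.0, 1.0]))
```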


Computer Graphics Forum | 2017

Noise Reduction on G-Buffers for Monte Carlo Filtering

Bochang Moon; Jose A. Iglesias-Guitian; Steven McDonagh; Kenny Mitchell

We propose a novel pre-filtering method that reduces the noise introduced by depth-of-field and motion blur effects in geometric buffers (G-buffers) such as texture, normal and depth images. Our pre-filtering uses world positions and their variances to effectively remove high-frequency noise while carefully preserving high-frequency edges in the G-buffers. We design a new anisotropic filter based on a per-pixel covariance matrix of world position samples. A general error estimator, Stein's unbiased risk estimator, is then applied to estimate the optimal trade-off between the bias and variance of pre-filtered results. We have demonstrated that our pre-filtering improves the results of existing filtering methods numerically and visually for challenging scenes where depth-of-field and motion blurring introduce a significant amount of noise in the G-buffers.
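
The per-pixel anisotropic weighting idea can be sketched as a Gaussian in world space whose shape comes from the covariance of a pixel's world-position samples. The sample values, neighbourhood handling and regularisation constant below are placeholders, and the SURE-based bias/variance trade-off described above is omitted.

```python
import numpy as np

def anisotropic_weight(p_center, p_neighbor, cov, eps=1e-4):
    """Gaussian weight in world space shaped by a per-pixel covariance matrix."""
    d = p_neighbor - p_center
    inv_cov = np.linalg.inv(cov + eps * np.eye(3))  # regularise near-singular covariances
    return np.exp(-0.5 * d @ inv_cov @ d)

# Hypothetical world-position samples landing in one pixel (e.g. spread by
# depth-of-field), their covariance, and a neighbouring pixel's position.
samples = np.array([[0.00, 0.00, 5.0], [0.10, 0.00, 5.2], [0.05, 0.02, 4.9]])
center = samples.mean(axis=0)
cov = np.cov(samples, rowvar=False)
print(anisotropic_weight(center, np.array([0.3, 0.1, 5.1]), cov))
```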

Collaboration


Dive into Steven McDonagh's collaborations.

Top Co-Authors

Ben Glocker

Imperial College London

Benjamin Hou

Imperial College London
