Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Oisin Mac Aodha is active.

Publication


Featured research published by Oisin Mac Aodha.


Computer Vision and Pattern Recognition | 2017

Unsupervised Monocular Depth Estimation with Left-Right Consistency

Clément Godard; Oisin Mac Aodha; Gabriel J. Brostow

Learning-based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and, as a result, require vast quantities of corresponding ground truth depth data for training. Simply recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Exploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state-of-the-art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.
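For illustration, a minimal NumPy sketch of how an image reconstruction loss can be paired with a left-right disparity consistency term. The function names, nearest-neighbour warping, and loss weights are assumptions made for readability, not the paper's implementation, and the sign of the disparity shift depends on the stereo rig's convention.

```python
import numpy as np

def warp_horizontal(img, disparity):
    """Sample `img` at x - d(x) for each pixel (nearest-neighbour for brevity);
    the sign of the shift depends on the stereo rig's convention."""
    h, w = disparity.shape
    xs = np.clip(np.round(np.arange(w)[None, :] - disparity).astype(int), 0, w - 1)
    return img[np.arange(h)[:, None], xs]

def photometric_loss(left, right, disp_left):
    """L1 error between the left image and the right image warped into the left view."""
    return np.mean(np.abs(left - warp_horizontal(right, disp_left)))

def lr_consistency_loss(disp_left, disp_right):
    """Penalise disagreement between the left disparity map and the right disparity
    map warped into the left view."""
    return np.mean(np.abs(disp_left - warp_horizontal(disp_right, disp_left)))

def total_loss(left, right, disp_left, disp_right, w_photo=1.0, w_lr=1.0):
    return (w_photo * photometric_loss(left, right, disp_left)
            + w_lr * lr_consistency_loss(disp_left, disp_right))
```

Solving for image reconstruction alone corresponds to using only the first term; the consistency term is what couples the two disparity maps.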


European Conference on Computer Vision | 2012

Patch based synthesis for single depth image super-resolution

Oisin Mac Aodha; Neill D. F. Campbell; Arun Nair; Gabriel J. Brostow

We present an algorithm to synthetically increase the resolution of a solitary depth image using only a generic database of local patches. Modern range sensors measure depths with non-Gaussian noise and at lower starting resolutions than typical visible-light cameras. While patch based approaches for upsampling intensity images continue to improve, this is the first exploration of patching for depth images. We match against the height field of each low resolution input depth patch, and search our database for a list of appropriate high resolution candidate patches. Selecting the right candidate at each location in the depth image is then posed as a Markov random field labeling problem. Our experiments also show how important further depth-specific processing, such as noise removal and correct patch normalization, dramatically improves our results. Perhaps surprisingly, even better results are achieved on a variety of real test scenes by providing our algorithm with only synthetic training depth data.
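A toy sketch of the candidate-retrieval step, assuming flattened patch arrays and brute-force L2 matching; the array layouts and function name are assumptions for the example.

```python
import numpy as np

def top_k_candidates(lowres_patches, db_low, db_high, k=5):
    """For each flattened low-res input patch, return the k best-matching high-res
    candidates from a generic patch database, plus their matching costs.

    lowres_patches: (n, d_low), db_low: (m, d_low), db_high: (m, d_high)."""
    d2 = ((lowres_patches[:, None, :] - db_low[None, :, :]) ** 2).sum(axis=-1)  # (n, m)
    idx = np.argsort(d2, axis=1)[:, :k]
    return db_high[idx], np.take_along_axis(d2, idx, axis=1)  # candidates, unary costs
```

Selecting one candidate per location is then posed as a Markov random field labelling problem whose unary terms are these matching costs and whose pairwise terms encourage overlapping candidates to agree; that optimisation is omitted from the sketch.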


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Learning a Confidence Measure for Optical Flow

Oisin Mac Aodha; Ahmad Humayun; Marc Pollefeys; Gabriel J. Brostow

We present a supervised learning-based method to estimate a per-pixel confidence for optical flow vectors. Regions of low texture and pixels close to occlusion boundaries are known to be difficult for optical flow algorithms. Using a spatiotemporal feature vector, we estimate if a flow algorithm is likely to fail in a given region. Our method is not restricted to any specific class of flow algorithm and does not make any scene specific assumptions. By automatically learning this confidence, we can combine the output of several computed flow fields from different algorithms to select the best performing algorithm per pixel. Our optical flow confidence measure allows one to achieve better overall results by discarding the most troublesome pixels. We illustrate the effectiveness of our method on four different optical flow algorithms over a variety of real and synthetic sequences. For algorithm selection, we achieve the top overall results on a large test set, and at times even surpass the results of the best algorithm among the candidates.
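A hedged scikit-learn sketch of the general idea: train a classifier on per-pixel spatiotemporal features to predict whether a flow algorithm is unreliable at a pixel, then read a confidence off the predicted failure probability. The feature layout, error threshold, and choice of random forest here are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_flow_confidence(features, flow_epe, epe_threshold=1.0):
    """Learn to predict whether a flow algorithm is unreliable at a pixel.

    features : (n_pixels, n_features) spatiotemporal descriptors (layout assumed)
    flow_epe : (n_pixels,) endpoint error of the flow against ground truth"""
    failed = flow_epe > epe_threshold  # binary "likely to fail" label
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features, failed)
    return clf

# At test time, confidence can be read off as 1 - P(failure):
# confidence = 1.0 - clf.predict_proba(test_features)[:, 1]
```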


Computer Vision and Pattern Recognition | 2010

Segmenting video into classes of algorithm-suitability

Oisin Mac Aodha; Gabriel J. Brostow; Marc Pollefeys

Given a set of algorithms, which one(s) should you apply to i) compute optical flow, or ii) perform feature matching? Would looking at the sequence in question help you decide? It is unclear if even a person with intimate knowledge of all the different algorithms and access to the sequence itself could predict which one to apply. Our hypothesis is that the most suitable algorithm can be chosen for each video automatically, through supervised training of a classifier. The classifier treats the different algorithms as black-box alternative “classes,” and predicts when each is best based on its performance on training examples where ground truth flow was available. Our experiments show that a simple Random Forest classifier is predictive of algorithm-suitability. The automatic feature selection makes use of both our spatial and temporal video features. We find that algorithm-suitability can be determined per-pixel, capitalizing on the heterogeneity of appearance and motion within a video. We demonstrate our learned region segmentation approach quantitatively using four available flow algorithms, on both known and novel image sequences with ground truth flow. We achieve performance that often even surpasses that of the single best algorithm at our disposal.
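In the same scikit-learn spirit, a sketch of per-pixel algorithm selection: label each training pixel with the index of its best-performing algorithm, train a classifier, and composite a flow field from the per-pixel winners. The array shapes and the random-forest choice are assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_selector(features, per_algorithm_epe):
    """per_algorithm_epe: (n_pixels, n_algorithms) ground-truth endpoint errors;
    the training label for each pixel is the index of its best-performing algorithm."""
    best = np.argmin(per_algorithm_epe, axis=1)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features, best)
    return clf

def composite_flow(clf, features, flows):
    """flows: (n_algorithms, n_pixels, 2). Assemble a single flow field by taking
    each pixel's flow from the algorithm the classifier predicts as best."""
    choice = clf.predict(features)  # (n_pixels,)
    return flows[choice, np.arange(len(choice))]  # (n_pixels, 2)
```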


Computer Vision and Pattern Recognition | 2014

Hierarchical Subquery Evaluation for Active Learning on a Graph

Oisin Mac Aodha; Neill D. F. Campbell; Jan Kautz; Gabriel J. Brostow

To train good supervised and semi-supervised object classifiers, it is critical that we not waste the time of the human experts who are providing the training labels. Existing active learning strategies can have uneven performance, being efficient on some datasets but wasteful on others, or inconsistent just between runs on the same dataset. We propose perplexity based graph construction and a new hierarchical subquery evaluation algorithm to combat this variability, and to release the potential of Expected Error Reduction. Under some specific circumstances, Expected Error Reduction has been one of the strongest-performing informativeness criteria for active learning. Until now, it has also been prohibitively costly to compute for sizeable datasets. We demonstrate our highly practical algorithm, comparing it to other active learning measures on classification datasets that vary in sparsity, dimensionality, and size. Our algorithm is consistent over multiple runs and achieves high accuracy, while querying the human expert for labels at a frequency that matches their desired time budget.
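A naive sketch of the Expected Error Reduction criterion itself, with a simple kNN posterior standing in for the paper's graph-based model; the helper names and label encoding are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def posterior(X_lab, y_lab, X_unlab, n_classes, k=5):
    """Class posteriors for unlabelled points under a simple kNN model
    (a stand-in for a graph-based classifier); labels are ints in [0, n_classes)."""
    clf = KNeighborsClassifier(n_neighbors=min(k, len(y_lab)))
    clf.fit(X_lab, y_lab)
    p = np.zeros((len(X_unlab), n_classes))
    p[:, clf.classes_] = clf.predict_proba(X_unlab)
    return p

def expected_error_reduction(X_lab, y_lab, X_unlab, n_classes):
    """Score each unlabelled point by the expected drop in risk if its label were queried."""
    p_now = posterior(X_lab, y_lab, X_unlab, n_classes)
    risk_now = (1.0 - p_now.max(axis=1)).sum()
    scores = np.zeros(len(X_unlab))
    for i, p_i in enumerate(p_now):
        for y in range(n_classes):  # marginalise over the possible answers
            X_new = np.vstack([X_lab, X_unlab[i:i + 1]])
            y_new = np.append(y_lab, y)
            p_after = posterior(X_new, y_new, X_unlab, n_classes)
            scores[i] += p_i[y] * (risk_now - (1.0 - p_after.max(axis=1)).sum())
    return scores  # query the point with the largest score next
```

Retraining once per candidate and per class is what makes direct EER prohibitively costly on sizeable datasets; the paper's hierarchical subquery evaluation is aimed at making this criterion practical.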


Computer Vision and Pattern Recognition | 2016

Structured Prediction of Unobserved Voxels from a Single Depth Image

Michael Firman; Oisin Mac Aodha; Simon J. Julier; Gabriel J. Brostow

Building a complete 3D model of a scene, given only a single depth image, is underconstrained. To gain a full volumetric model, one needs either multiple views, or a single view together with a library of unambiguous 3D models that will fit the shape of each individual object in the scene. We hypothesize that objects of dissimilar semantic classes often share similar 3D shape components, enabling a limited dataset to model the shape of a wide range of objects, and hence estimate their hidden geometry. Exploring this hypothesis, we propose an algorithm that can complete the unobserved geometry of tabletop-sized objects, based on a supervised model trained on already available volumetric elements. Our model maps from a local observation in a single depth image to an estimate of the surface shape in the surrounding neighborhood. We validate our approach both qualitatively and quantitatively on a range of indoor object collections and challenging real scenes.
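As a rough illustration of the local-to-local mapping, a sketch that regresses a flattened voxel-occupancy neighbourhood from a flattened depth observation; the feature and target encodings are hypothetical, and the paper's structured model is more involved than a plain regressor.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_local_completion(depth_features, voxel_targets):
    """depth_features : (n, d) flattened local observations from single depth images
    voxel_targets    : (n, v) occupancy of the voxel neighbourhood around each observation
    Learns a mapping from what is observed to an estimate of the nearby unobserved geometry."""
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(depth_features, voxel_targets)
    return model

# At test time, predictions from overlapping neighbourhoods can be averaged into a single
# occupancy volume and thresholded (e.g. at 0.5) to extract the completed surface.
```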


International Conference on Computer Vision | 2013

Revisiting Example Dependent Cost-Sensitive Learning with Decision Trees

Oisin Mac Aodha; Gabriel J. Brostow

Typical approaches to classification treat class labels as disjoint. For each training example, it is assumed that there is only one class label that correctly describes it, and that all other labels are equally bad. We know, however, that this notion of good and bad labels is too simplistic in many scenarios, hurting accuracy. In the realm of example-dependent cost-sensitive learning, each label is instead a vector representing a data point's affinity for each of the classes. At test time, our goal is not to minimize the misclassification rate, but to maximize that affinity. We propose a novel example-dependent cost-sensitive impurity measure for decision trees. Our experiments show that this new impurity measure improves test performance while still retaining the fast test times of standard classification trees. We compare our approach to classification trees and other cost-sensitive methods on three computer vision problems: tracking, descriptor matching, and optical flow, and show improvements in all three domains.
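A simplified sketch of what an example-dependent cost-sensitive impurity could look like, where each training example carries an affinity vector rather than a single hard label; this is an illustrative stand-in, not the paper's exact measure.

```python
import numpy as np

def cost_sensitive_impurity(affinities):
    """affinities : (n_examples, n_classes) per-example class affinities in [0, 1].
    Assign the single class that maximises the node's mean affinity and measure the
    affinity that this assignment leaves unrealised."""
    best_class = np.argmax(affinities.mean(axis=0))
    return 1.0 - affinities[:, best_class].mean()

def split_gain(parent, left, right):
    """Weighted impurity decrease used to rank candidate splits, as in standard trees."""
    n = len(parent)
    return cost_sensitive_impurity(parent) - (
        len(left) / n * cost_sensitive_impurity(left)
        + len(right) / n * cost_sensitive_impurity(right))
```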


ACM Transactions on Graphics | 2016

My Text in Your Handwriting

Tom Fincham Haines; Oisin Mac Aodha; Gabriel J. Brostow

There are many scenarios where we wish to imitate a specific author’s pen-on-paper handwriting style. Rendering new text in someone’s handwriting is difficult because natural handwriting is highly variable, yet follows both intentional and involuntary structure that makes a person’s style self-consistent. The variability means that naive example-based texture synthesis can be conspicuously repetitive. We propose an algorithm that renders a desired input string in an author’s handwriting. An annotated sample of the author’s handwriting is required; the system is flexible enough that historical documents can usually be used with only a little extra effort. Experiments show that our glyph-centric approach, with learned parameters for spacing, line thickness, and pressure, produces novel images of handwriting that look hand-made to casual observers, even when printed on paper.
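A toy sketch of the glyph-centric idea only: compose glyph bitmaps with inter-character spacing drawn from learned per-pair statistics. The data structures are hypothetical, and the real system additionally models line thickness, pressure, and texture.

```python
import numpy as np

def render_line(text, glyphs, spacing_stats, rng=None):
    """Compose glyph bitmaps for `text` left to right. `glyphs` maps characters to
    same-height greyscale arrays; `spacing_stats` maps character pairs to a
    (mean, std) gap in pixels, mimicking the natural variability of spacing."""
    rng = rng if rng is not None else np.random.default_rng(0)
    pieces = []
    for prev, ch in zip(" " + text, text):
        mu, sigma = spacing_stats.get((prev, ch), (2.0, 0.5))
        gap = max(0, int(round(rng.normal(mu, sigma))))
        pieces.append(np.zeros((glyphs[ch].shape[0], gap)))
        pieces.append(glyphs[ch])
    return np.concatenate(pieces, axis=1)
```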


International Conference on Pattern Recognition | 2014

Putting the Scientist in the Loop -- Accelerating Scientific Progress with Interactive Machine Learning

Oisin Mac Aodha; Vassilios Stathopoulos; Gabriel J. Brostow; Michael Terry; Mark A. Girolami; Kate E. Jones

Technology drives advances in science. Giving scientists access to more powerful tools for collecting and understanding data enables them to both ask and answer new kinds of questions that were previously beyond their reach. Of the new tools at their disposal, machine learning offers the opportunity to understand and analyze data at unprecedented scales and levels of detail. The standard machine learning pipeline consists of data labeling, feature extraction, training, and evaluation. However, without expert machine learning knowledge, it is difficult for scientists to optimally construct this pipeline to fully leverage machine learning in their work. Using ecology as a motivating example, we analyze a typical scientist's data collection and processing workflow and highlight many problems facing practitioners when attempting to capitalize on advances in machine learning and pattern recognition. Understanding these shortcomings allows us to outline several novel and underexplored research directions. We end with recommendations to motivate progress in future cross-disciplinary work.
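A minimal scikit-learn sketch of the training and evaluation stages of the pipeline described above, using hypothetical field-recording features and expert species labels; the labeling and feature-extraction stages remain upstream, domain-specific steps.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

def run_pipeline(X, y):
    """X: features already extracted from field recordings (hypothetical);
    y: expert-provided species labels."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))  # evaluation stage
    return model
```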


bioRxiv | 2018

VideoTagger: User-Friendly Software for Annotating Video Experiments of Any Duration

Peter Rennert; Oisin Mac Aodha; Matthew D.W. Piper; Gabriel J. Brostow

Background: Scientific insight is often sought by recording and analyzing large quantities of video. While easy access to cameras has increased the quantity of collected videos, the rate at which they can be analyzed remains a major limitation. Often, bench scientists struggle with the most basic problem that there is currently no user-friendly, flexible, and open source software tool with which to watch and annotate these videos.

Results: We have created the VideoTagger tool to address these and many of the other associated challenges of video analysis. VideoTagger allows non-programming users to efficiently explore, annotate, and visualize large quantities of video data, within their existing experimental protocols. Further, it is built to accept programmed plugins written in Python, to enable seamless integration with other sophisticated computer-aided analyses. We tested VideoTagger ourselves, and have a growing base of users in other scientific disciplines. Capitalising on the unique features of VideoTagger to play back videos of any duration at various speeds, we annotated 39 h of a Drosophila melanogaster lifespan video, at approximately 10-15x faster than real-time. We then used these labels to train a machine-learning plugin, which we used to annotate an additional 538 h of footage automatically. In this way, we found that flies fall over spontaneously with increasing frequency as they age, and also spend longer durations struggling to right themselves. Ageing in flies is typically defined by length of life. We propose that this new mobility measure of ageing could help the discovery of mechanisms in biogerontology, refining our definition of what healthy ageing means in this extremely small, but widely used, invertebrate.

Conclusions: We show how VideoTagger is sufficiently flexible for studying lengthy and/or numerous video experiments, thus directly improving scientists’ productivity across varied domains.
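A small sketch of how a fall-frequency measure could be computed from exported annotations; the record layout here is hypothetical and not VideoTagger's actual export format.

```python
from collections import defaultdict

def falls_per_hour(events, hours_observed_per_age):
    """events: iterable of (fly_age_days, label) annotation records, where label is
    e.g. "fall"; hours_observed_per_age: dict mapping age to hours of footage.
    Returns the spontaneous-fall rate per hour of footage at each age."""
    counts = defaultdict(int)
    for age, label in events:
        if label == "fall":
            counts[age] += 1
    return {age: counts[age] / hours
            for age, hours in hours_observed_per_age.items() if hours > 0}
```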

Collaboration


Dive into Oisin Mac Aodha's collaboration.

Top Co-Authors

Pietro Perona (California Institute of Technology)
Yisong Yue (California Institute of Technology)
Clément Godard (University College London)
Shihan Su (California Institute of Technology)
Kate E. Jones (University College London)