Publication


Featured research published by Osian Haines.


British Machine Vision Conference | 2012

Detecting planes and estimating their orientation from a single image

Osian Haines; Andrew D Calway

We propose an algorithm to detect planes in a single image of an outdoor urban scene, capable of identifying multiple distinct planes, and estimating their orientation. Using machine learning techniques, we learn the relationship between appearance and structure from a large set of labelled examples. Plane detection is achieved by classifying multiple overlapping image regions, in order to obtain an initial estimate of planarity for a set of points, which are segmented into planar and non-planar regions using a sequence of Markov random fields. This differs from previous methods in that it does not rely on line detection, and is able to predict an actual orientation for planes. We show that the method is able to reliably extract planes in a variety of scenes, and compares favourably with existing methods.
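To make the pipeline concrete, here is a minimal Python sketch (not the authors' code) of the two stages the abstract describes: overlapping regions are scored by a classifier, and the resulting per-pixel estimates are then smoothed into planar and non-planar segments. The planarity_score function is a hypothetical stand-in for the learned classifier, and a simple iterated-conditional-modes pass substitutes for the paper's sequence of Markov random fields:

```python
import numpy as np

def planarity_score(patch):
    # Hypothetical stand-in for the learned region classifier: returns a
    # planarity estimate in [0, 1]. Here, a dummy brightness heuristic.
    return float(patch.mean() > 0.5)

def detect_planar_pixels(image, win=64, stride=32, iters=5, smooth=0.5):
    h, w = image.shape
    votes = np.zeros((h, w))
    counts = np.zeros((h, w))
    # Classify multiple overlapping regions and accumulate per-pixel votes.
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            s = planarity_score(image[y:y + win, x:x + win])
            votes[y:y + win, x:x + win] += s
            counts[y:y + win, x:x + win] += 1
    unary = votes / np.maximum(counts, 1)   # initial planarity estimate per pixel
    labels = unary > 0.5
    # Iterated conditional modes: a crude surrogate for the paper's MRFs,
    # encouraging neighbouring pixels to share a label.
    for _ in range(iters):
        neigh = sum(np.roll(labels.astype(float), s, axis=a)
                    for a in (0, 1) for s in (-1, 1)) / 4.0
        labels = (unary + smooth * neigh) > (0.5 + smooth / 2)
    return labels

mask = detect_planar_pixels(np.random.rand(240, 320))
```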


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

Recognising Planes in a Single Image

Osian Haines; Andrew D Calway

We present a novel method to recognise planar structures in a single image and estimate their 3D orientation. This is done by exploiting the relationship between image appearance and 3D structure, using machine learning methods with supervised training data. As such, the method does not require specific features or use geometric cues, such as vanishing points. We employ general feature representations based on spatiograms of gradients and colour, coupled with relevance vector machines for classification and regression. We first show that using hand-labelled training data, we are able to classify pre-segmented regions as being planar or not, and estimate their 3D orientation. We then incorporate the method into a segmentation algorithm to detect multiple planar structures from a previously unseen image.
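The spatiogram features mentioned above extend an ordinary histogram: each bin additionally records the spatial mean and covariance of the pixels that contributed to it, so the descriptor captures rough layout as well as frequency. A minimal numpy sketch, with the bin count and value range as illustrative assumptions:

```python
import numpy as np

def spatiogram(values, n_bins=8, vrange=(0.0, 1.0)):
    # values: 2-D array (e.g. gradient magnitude or one colour channel).
    # Returns per-bin (count, mean pixel position, 2x2 position covariance).
    h, w = values.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    bins = np.clip(((values.ravel() - vrange[0]) /
                    (vrange[1] - vrange[0]) * n_bins).astype(int), 0, n_bins - 1)
    counts = np.zeros(n_bins)
    means = np.zeros((n_bins, 2))
    covs = np.zeros((n_bins, 2, 2))
    for b in range(n_bins):
        p = pos[bins == b]
        counts[b] = len(p)
        if len(p):
            means[b] = p.mean(axis=0)
            covs[b] = np.cov(p.T) if len(p) > 1 else np.eye(2)
    return counts, means, covs

c, m, S = spatiogram(np.random.rand(64, 64))
```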


European Conference on Computer Vision | 2014

Multi-User Egocentric Online System for Unsupervised Assistance on Object Usage

Dima Damen; Osian Haines; Teesid Leelasawassuk; Andrew D Calway; Walterio W. Mayol-Cuevas

We present an online, fully unsupervised approach for automatically extracting video guides of how objects are used from wearable gaze trackers worn by multiple users. Given egocentric video and eye gaze from multiple users performing tasks, the system discovers task-relevant objects and automatically extracts guidance videos on how these objects have been used. For the assistive mode, we propose a method for selecting a suitable video guide to display to a novice user, indicating how to use an object, triggered purely by the user's gaze. The approach is tested on a variety of daily tasks ranging from opening a door, to preparing coffee and operating a gym machine.
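As a toy illustration of the assistive mode only, the sketch below picks a stored guide clip for whichever object the user's gaze lands on. The guide database, clip paths and quality measure are all hypothetical; the paper's actual recognition and selection machinery is far richer:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Guide:
    object_name: str   # discovered task-relevant object
    clip_path: str     # automatically extracted guidance clip
    quality: float     # e.g. how well the clip covers the interaction

guides = [
    Guide("coffee_machine", "clips/coffee_01.mp4", 0.9),
    Guide("coffee_machine", "clips/coffee_02.mp4", 0.7),
    Guide("gym_machine", "clips/gym_01.mp4", 0.8),
]

def guide_for_gaze(gazed_object: str) -> Optional[str]:
    # Pick the best stored guide for whatever object the gaze is on.
    candidates = [g for g in guides if g.object_name == gazed_object]
    return max(candidates, key=lambda g: g.quality).clip_path if candidates else None

print(guide_for_gaze("coffee_machine"))   # -> clips/coffee_01.mp4
```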


International Conference on Robotics and Automation | 2013

Visual mapping using learned structural priors

Osian Haines; Jose Martinez-Carranza; Andrew D Calway

We investigate a new approach to vision-based mapping, in which single-image structure recognition is used to derive strong priors for initialisation of higher-level primitives in the map. This can reduce state size and speed up the building of more meaningful maps. We focus on plane mapping and use a recognition algorithm to detect and estimate the 3D orientation of planar structures in key frames, which are then used as priors for initialising planes in the map. The recognition algorithm learns the relationship between such structure and appearance from training examples offline. We demonstrate the approach in the context of an EKF-based visual odometry system. Preliminary results of experiments in urban environments show that the system is able to build large maps with significant planar structure at average frame rates of around 60 fps whilst maintaining good trajectory estimation. The results suggest that the approach has considerable potential.
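The core idea, recognised planes entering the map as strong priors rather than being estimated from scratch, can be sketched as an EKF state augmentation. The (theta, phi, d) parameterisation and the noise values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def add_plane_to_state(x, P, plane_params, prior_sigma=np.radians(10)):
    # Augment EKF state x (n,) and covariance P (n, n) with a plane
    # parameterised as (theta, phi, d): two orientation angles from the
    # single-image recogniser plus a depth estimate. prior_sigma encodes
    # how much the learned orientation prior is trusted.
    x_new = np.concatenate([x, plane_params])
    P_new = np.zeros((len(x_new), len(x_new)))
    P_new[:len(x), :len(x)] = P
    # Strong prior on orientation (from recognition), weaker on depth.
    P_new[len(x) + 0, len(x) + 0] = prior_sigma ** 2
    P_new[len(x) + 1, len(x) + 1] = prior_sigma ** 2
    P_new[len(x) + 2, len(x) + 2] = 1.0 ** 2
    return x_new, P_new

x, P = np.zeros(6), np.eye(6) * 0.01           # toy camera state
x, P = add_plane_to_state(x, P, np.array([0.1, -1.5, 2.0]))
```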


International Conference on Computer Vision Theory and Applications | 2015

Using Inertial Data to Enhance Image Segmentation - Knowing Camera Orientation Can Improve Segmentation of Outdoor Scenes

Osian Haines; David R. Bull; Jeremy F. Burn

In the context of semantic image segmentation, we show that knowledge of world-centric camera orientation (from an inertial sensor) can be used to improve classification accuracy. This works because certain structural classes (such as the ground) tend to appear in certain positions relative to the viewer. We show that orientation information is useful in conjunction with typical image-based features, and that fusing the two results in substantially better classification accuracy than either alone: we observed an increase from 61% to 71% classification accuracy, over the six classes in our test set, when orientation information was added. The method is applied to segmentation using both points and lines, and we also show that combining points with lines further improves accuracy. This work is directed towards our goal of visually guided locomotion for either an autonomous robot or a human.
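The fusion step can be illustrated by simply concatenating an appearance descriptor with the camera's pitch and roll before classification. The sketch below uses synthetic data, and the random forest is our stand-in choice of classifier, not necessarily the one used in the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, d = 500, 32
image_feats = rng.normal(size=(n, d))               # stand-in appearance descriptors
orientation = rng.uniform(-0.5, 0.5, size=(n, 2))   # pitch, roll (radians)
# Synthetic labels correlated with pitch, mimicking "ground appears below".
labels = (orientation[:, 0] + 0.1 * image_feats[:, 0] > 0).astype(int)

fused = np.hstack([image_feats, orientation])       # early fusion by concatenation
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(fused, labels)
print("train accuracy:", clf.score(fused, labels))
```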


International Joint Conference on Computer Vision, Imaging and Computer Graphics | 2015

Fusing Inertial Data with Vision for Enhanced Image Understanding

Osian Haines; David R. Bull; Jeremy F. Burn

In this paper we show that combining knowledge of the orientation of a camera with visual information can improve the performance of semantic image segmentation. This is based on the assumption that the direction in which a camera is facing acts as a prior on the content of the images it creates. We gathered egocentric video with a camera attached to a head-mounted display, and recorded its orientation using an inertial sensor. By combining orientation information with typical image descriptors, we show that segmentation of individual images improves in accuracy compared with vision alone, from 61% to 71% over six classes. We also show that this method can be applied to both point- and line-based features from the image, and that these can be combined for further benefit. Our resulting system would have applications in autonomous robot locomotion and in guiding visually impaired humans.


British Machine Vision Conference | 2014

You-Do, I-Learn: Discovering Task Relevant Objects and their Modes of Interaction from Multi-User Egocentric Video

Dima Damen; Teesid Leelasawassuk; Osian Haines; Andrew D Calway; Walterio W. Mayol-Cuevas

We present a fully unsupervised approach for the discovery of i) task relevant objects and ii) how these objects have been used. A Task Relevant Object (TRO) is an object, or part of an object, with which a person interacts during task performance. Given egocentric video from multiple operators, the approach can discover objects with which the users interact, both static objects such as a coffee machine as well as movable ones such as a cup. Importantly, we also introduce the term Mode of Interaction (MOI) to refer to the different ways in which TROs are used. For example, a cup can be lifted, washed, or poured into. When harvesting interactions with the same object from multiple operators, common MOIs can be found.

Setup and dataset: using a wearable camera and gaze tracker (Mobile Eye-XG from ASL), egocentric video is collected of users performing tasks, along with their gaze in pixel coordinates. Six locations were chosen: kitchen, workspace, laser printer, corridor with a locked door, cardiac gym and weight-lifting machine. The Bristol Egocentric Object Interactions Dataset is publicly available.
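The discovery intuition, that task-relevant objects attract gaze, can be caricatured by clustering fixations pooled across users. DBSCAN below is our stand-in grouping step on synthetic fixation data; the actual method also exploits appearance and position estimation, which this sketch omits:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Gaze fixations (x, y) from several users: two "objects" plus stray glances.
fixations = np.vstack([
    rng.normal([120, 80], 5, size=(60, 2)),    # e.g. coffee machine region
    rng.normal([400, 300], 5, size=(40, 2)),   # e.g. cup region
    rng.uniform(0, 640, size=(20, 2)),         # background noise
])
clusters = DBSCAN(eps=15, min_samples=10).fit_predict(fixations)
for c in set(clusters) - {-1}:                 # -1 is DBSCAN's noise label
    centre = fixations[clusters == c].mean(axis=0)
    print(f"candidate TRO {c} near pixel {centre.round(1)}")
```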




International Conference on Pattern Recognition Applications and Methods | 2012

Estimating Planar Structure in Single Images by Learning from Examples

Osian Haines; Andrew D Calway


