Bojian Liang
University of York
Publication
Featured research published by Bojian Liang.
Intelligent Robots and Systems | 2001
Nick Pears; Bojian Liang
We describe a method of mobile robot monocular visual navigation, which uses multiple visual cues to detect and segment the ground plane in the robot's field of view. Corner points are tracked through an image sequence and grouped into coplanar regions using a method which we call an H-based tracker. The H-based tracker employs planar homographies and is initialised by 5-point planar projective invariants. This allows us to detect ground plane patches, and the colour within such patches is subsequently modelled. These patches are grown by colour classification to give a ground plane segmentation, which is then used as an input to a new variant of the artificial potential field algorithm.
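The artificial potential field step mentioned at the end of the abstract can be sketched in its textbook form. This is a generic formulation, not the paper's specific variant; the gains `k_att`, `k_rep` and the influence radius `d0` are illustrative assumptions:

```python
import numpy as np

def potential_field_step(robot, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """One step of a basic artificial potential field planner.

    robot, goal: (x, y) positions; obstacles: list of (x, y) points.
    Returns the resultant force vector the robot should steer along.
    """
    robot = np.asarray(robot, float)
    # Attractive force pulls the robot linearly toward the goal.
    f = k_att * (np.asarray(goal, float) - robot)
    for obs in obstacles:
        d = robot - np.asarray(obs, float)
        dist = np.linalg.norm(d)
        # Repulsive force acts only within the influence radius d0.
        if 0 < dist < d0:
            f += k_rep * (1.0 / dist - 1.0 / d0) * d / dist**3
    return f
```

In the paper the repulsive sources would be the non-ground-plane regions produced by the colour-based segmentation.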
International Conference on Robotics and Automation | 2002
Bojian Liang; Nick Pears
We introduce three new results, which allow homographies of the ground plane to support visual navigation functions for mobile robots using uncalibrated cameras. Firstly, we illustrate how, for pure translation, a homography can be computed from just two pairs of corresponding corner features. Secondly, we show how, for pure translation, we can determine the height of corner features above the ground plane using the recovered homography and a construct based on the cross ratio. This allows us to detect points which can be driven over, as their height is measured to be close to zero, and points which are sufficiently high to drive under. Finally, we show how, in the case of general planar motion, homographies can be used to determine the rotation of the camera and robot.
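The second result rests on the cross ratio, the basic projective invariant of four collinear points. The sketch below computes that invariant directly; it illustrates the underlying property, not the paper's full height-measurement construction:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (AC * BD) / (BC * AD) of four collinear 2D points.
    Invariant under any projective transformation of the line."""
    pts = [np.asarray(p, float) for p in (a, b, c, d)]
    dist = lambda p, q: float(np.linalg.norm(p - q))
    num = dist(pts[0], pts[2]) * dist(pts[1], pts[3])
    den = dist(pts[1], pts[2]) * dist(pts[0], pts[3])
    return num / den
```

For example, four equally spaced collinear points have cross ratio 4/3, and that value survives any perspective mapping of the line, which is what allows heights to be read off from uncalibrated images.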
Proceedings of the IEEE | 2005
Jim Austin; Robert I. Davis; Martyn Fletcher; Thomas W. Jackson; Mark Jessop; Bojian Liang; Andy Pasley
The use of search engines within the Internet is now ubiquitous. This work examines how Grid technology may affect the implementation of search engines by focusing on the Signal Data Explorer application developed within the Distributed Aircraft Maintenance Environment (DAME) project. This application utilizes advanced neural-network-based methods (Advanced Uncertain Reasoning Architecture (AURA) technology) to search for matching patterns in time-series vibration data originating from Rolls-Royce aeroengines (jet engines). The large volume of data associated with the problem required the development of a distributed search engine, where data is held at a number of geographically disparate locations. This work gives a brief overview of the DAME project, the pattern matching problem, and the architecture. It also describes the Signal Data Explorer application and provides an overview of the underlying search engine technology and its use in the aeroengine health-monitoring domain.
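The core task, finding where a query pattern matches within a long vibration time series, can be illustrated with a plain normalized cross-correlation search. This is a generic stand-in for the purpose of illustration only; the AURA technology named above uses a quite different (binary neural network) mechanism to achieve it at scale:

```python
import numpy as np

def ncc_search(signal, template):
    """Slide a template over a 1-D signal and return the offset with the
    highest normalized cross-correlation score (score in [-1, 1])."""
    signal = np.asarray(signal, float)
    t = np.asarray(template, float)
    t = (t - t.mean()) / (t.std() + 1e-12)  # standardize the template once
    n = len(t)
    best, best_score = -1, -np.inf
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)  # standardize each window
        score = float(np.dot(w, t)) / n
        if score > best_score:
            best, best_score = i, score
    return best, best_score
```

The distributed version of this problem is what motivates the architecture described above: each site searches its local data and only match locations and scores cross the network.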
Neural Networks | 2008
Martyn Fletcher; Bojian Liang; Leslie S. Smith; Alastair Knowles; Thomas W. Jackson; Mark Jessop; Jim Austin
In the study of information flow in the nervous system, component processes can be investigated using a range of electrophysiological and imaging techniques. Although data is difficult and expensive to produce, it is rarely shared and collaboratively exploited. The Code Analysis, Repository and Modelling for e-Neuroscience (CARMEN) project addresses this challenge through the provision of a virtual neuroscience laboratory: an infrastructure for sharing data, tools and services. Central to the CARMEN concept are federated CARMEN nodes, which provide data and metadata storage; new, third-party and legacy services; and tools. In this paper, we describe the CARMEN project as well as the node infrastructure and an associated thick client tool for pattern visualisation and searching, the Signal Data Explorer (SDE). We also discuss new spike detection methods, which are central to the services provided by CARMEN. The SDE is a client application which can be used to explore data in the CARMEN repository, providing data visualization, signal processing and a pattern matching capability. It performs extremely fast pattern matching and can be used to search for complex conditions composed of many different patterns across the large datasets that are typical in neuroinformatics. Searches can also be constrained by specifying text-based metadata filters. Spike detection services which use wavelet and morphology techniques are discussed, and have been shown to outperform traditional thresholding and template-based systems. A number of different spike detection and sorting techniques will be deployed as services within the CARMEN infrastructure, to allow users to benchmark their performance against a wide range of reference datasets.
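For context, a minimal amplitude-threshold spike detector of the kind the wavelet and morphology services are reported to outperform might look as follows. The median-based noise estimate (median/0.6745) is a common convention in the spike-sorting literature; the threshold multiplier and refractory period here are illustrative assumptions:

```python
import numpy as np

def detect_spikes(trace, thresh_sd=4.0, refractory=30):
    """Baseline amplitude-threshold spike detection.

    Returns the sample indices where |trace| first crosses the threshold,
    suppressing re-detections within a refractory window.
    """
    x = np.asarray(trace, float)
    # Robust estimate of the noise standard deviation from the median.
    thresh = thresh_sd * np.median(np.abs(x)) / 0.6745
    spikes, last = [], -refractory
    for i, v in enumerate(np.abs(x)):
        if v > thresh and i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes
```

Its weakness, and the motivation for the wavelet/morphology services, is that a fixed amplitude threshold misses low-amplitude spikes and fires on large noise transients.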
International Conference on Conceptual Structures | 2011
Jim Austin; Thomas W. Jackson; Martyn Fletcher; Mark Jessop; Bojian Liang; Mike Weeks; Leslie S. Smith; Colin Ingram; Paul Watson
Abstract The CARMEN (Code, Analysis, Repository and Modelling for e-Neuroscience) system [1] provides a web-based portal platform through which users can share and collaboratively exploit data, analysis code and expertise in neuroscience. The system has been developed in the UK and currently supports 200 neuroscientists working in a Virtual Environment with an initial focus on electrophysiology data. The proposal here is that the CARMEN system provides an excellent base from which to develop an ‘executable paper’ system. CARMEN has been built by York and Newcastle Universities and is based on over 10 years' experience in the construction of eScience-based distributed technology. CARMEN started four years ago involving 20 scientific investigators (neuroscientists and computer scientists) at 11 UK universities (www.CARMEN.org.uk). The project is supported for another four years at York and Newcastle, along with a sister project to take the underlying technology and pilot it as a UK platform for supporting the sharing of research outputs in a generic way. An entirely natural extension to the CARMEN system would be its alignment with a publications repository. The CARMEN system is operational at https://portal.CARMEN.org.uk, where it is possible to request a login to try out the system.
Pattern Recognition Letters | 2006
Zezhi Chen; Nick Pears; Bojian Liang
A method of measuring the height of any feature above a reference plane from a pair of uncalibrated images, separated by a (near) pure translation, is presented. The output of the algorithm is a feature height, expressed as a fraction of the height of the camera above the reference plane. There are three contributions. Firstly, a robust method of computing the dual epipole or focus of expansion (FOE) under pure translation is presented. Secondly, a novel reciprocal-polar (RP) image rectification scheme is presented, which allows planar image motion, expressed as a planar homography, to be accurately detected and recovered by 1D correlation. The technique can work even when there are no corner features on the reference plane and even over large image distortions caused by large camera motion, which would cause correlation techniques in the original image space to fail. Thirdly, we present a projective construct to enable measurement of the relative (or affine) feature height. Results show that our algorithm performs very well against outliers and noise. The mean absolute error is 1.8 mm, and the mean relative error is only 0.13%, with two outliers removed.
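The geometric idea behind reciprocal-polar rectification can be sketched as a coordinate map about the FOE: under pure translation a point and its correspondence lie on the same ray through the FOE, so the plane-induced motion reduces to a shift in the reciprocal radius 1/r along each scan line. A minimal sketch of the mapping (the coordinate transform only, not the full image resampler):

```python
import numpy as np

def to_reciprocal_polar(points, foe):
    """Map image points to reciprocal-polar (RP) coordinates about the
    focus of expansion: (theta, 1/r) per point.

    points: (N, 2) array of pixel coordinates; foe: (x, y) of the FOE.
    """
    p = np.asarray(points, float) - np.asarray(foe, float)
    r = np.hypot(p[:, 0], p[:, 1])          # radial distance from the FOE
    theta = np.arctan2(p[:, 1], p[:, 0])    # ray orientation
    return np.column_stack([theta, 1.0 / r])
```

Because corresponding points share a ray, their RP coordinates differ only in the 1/r component, which is why a 1D correlation per scan line suffices to recover the homography.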
EURASIP Journal on Advances in Signal Processing | 2005
Nick Pears; Bojian Liang; Zezhi Chen
We propose a method to segment the ground plane from a mobile robot's visual field of view and then measure the height of non-ground-plane features above the mobile robot's ground plane. Thus a mobile robot can determine what it can drive over, what it can drive under, and what it needs to manoeuvre around. In addition to obstacle avoidance, this data could also be used for localisation and map building. All of this is possible from an uncalibrated camera (raw pixel coordinates only), but is restricted to (near) pure translation motion of the camera. The main contributions are (i) a novel reciprocal-polar (RP) image rectification, (ii) ground plane segmentation by sinusoidal model fitting in RP-space, (iii) a novel projective construction for measuring affine height, and (iv) an algorithm that can make use of a variety of visual features and therefore operate in a wide variety of visual environments.
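The drive-over / drive-under / manoeuvre-around decision reduces to thresholding the recovered height relative to the camera height. A hypothetical sketch of that final classification step, with entirely illustrative threshold values:

```python
def classify_feature(h_rel, drivable=0.05, clearance=1.0):
    """Classify a feature by its height relative to the camera height
    above the ground plane (thresholds are illustrative assumptions).

    h_rel: feature height as a fraction of camera height.
    """
    if abs(h_rel) < drivable:
        return "drive over"    # effectively part of the ground plane
    if h_rel > clearance:
        return "drive under"   # overhang higher than the robot
    return "avoid"             # genuine obstacle at body height
```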
Cluster Computing and the Grid | 2006
Martyn Fletcher; Thomas W. Jackson; Mark Jessop; Bojian Liang; Jim Austin
We describe a high performance grid based signal search tool for distributed diagnostic applications developed in conjunction with Rolls-Royce plc for civil aero engine condition monitoring applications. With the introduction of advanced monitoring technology into engineering systems, healthcare, etc., the associated diagnostic processes are increasingly required to handle and consider vast amounts of data. An exemplar of such a diagnosis process was developed during the DAME project, which built a proof of concept demonstrator to assist in the enhanced diagnosis and prognosis of aero-engine conditions. In particular it has shown the utility of an interactive viewing and high performance distributed search tool (the signal data explorer) in the aeroengine diagnostic process. The viewing and search techniques are equally applicable to other domains. The signal data explorer and search services have been demonstrated on the Worldwide Universities Network to search distributed databases of electrocardiograph data.
International Conference on Pattern Recognition | 2004
Bojian Liang; Zezhi Chen; Nick Pears
A method of visual metrology from uncalibrated cameras is proposed in this paper, whereby a camera, which captures two images separated by a (near) pure translation, becomes a height measurement device. A novel projective construction allows accurate affine height measurements to be made relative to a reference plane, given that the reference plane planar homography between the two views can be accurately recovered. To this end a planar homography estimation method is presented, which is highly accurate and robust and based on a novel reciprocal-polar (RP) image rectification. The absolute height of any pixel or feature above the reference plane can be obtained from this affine height once the camera's distance to the reference plane, or the height of a second measurement in the image, is specified. Results from our data show a mean absolute error of 6.9 mm and, with two outliers removed, this falls to 1.5 mm.
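The final scaling step described above, from affine height to absolute height, is a single multiplication once one piece of metric information is supplied. A minimal sketch, assuming the affine height is expressed as a fraction of the camera height (the function name and signature are illustrative):

```python
def absolute_height(h_affine, camera_height=None, ref_affine=None, ref_metric=None):
    """Scale an affine height to an absolute one.

    Supply either camera_height (the camera's distance to the reference
    plane, in metres) or one reference measurement: a feature with known
    affine height ref_affine and known metric height ref_metric.
    """
    if camera_height is not None:
        return h_affine * camera_height
    # Otherwise scale by a second measurement of known metric height.
    return h_affine * (ref_metric / ref_affine)
```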
Image and Vision Computing | 2006
Zezhi Chen; Nick Pears; Bojian Liang
Our obstacle detection method is applicable to deliberative translation motion of a mobile robot and, in such motion, the epipole of each image of an image pair is coincident and termed the focus of expansion (FOE). We present an accurate method for computing the FOE and then we use this to apply a novel rectification to each image, called a reciprocal-polar (RP) rectification. When robot translation is parallel to the ground, as with a mobile robot, ground plane image motion in RP-space is a pure shift along an RP image scan line and hence can be recovered by a process of 1D correlation, even over large image displacements and without the need for corner matches. Furthermore, we show that the magnitude of these shifts follows a sinusoidal form along the second (orientation) dimension of the RP image. This gives the main result that ground plane motion over RP image space forms a 3D sinusoidal manifold. Simultaneous ground plane pixel grouping and recovery of the ground plane motion thus amounts to finding the FOE and then robustly fitting a 3D sinusoid to shifts of maximum correlation in RP space. The phase of the recovered sinusoid corresponds to the orientation of the vanishing line of the ground plane and the amplitude is related to the magnitude of the robot/camera translation. Recovered FOE, vanishing line and sinusoid amplitude fully define the ground plane motion (homography) across a pair of images and thus obstacles and ground plane can be segmented without any explicit knowledge of either camera parameters or camera motion.
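The sinusoidal model over the orientation dimension can be fitted by ordinary least squares once A·sin(θ+φ)+c is rewritten in its linear form a·sinθ + b·cosθ + c. The paper describes a robust fit over correlation shifts; this unweighted sketch shows only the parameterisation:

```python
import numpy as np

def fit_sinusoid(theta, shifts):
    """Fit d(theta) = A*sin(theta + phi) + c by linear least squares,
    using A*sin(theta + phi) = a*sin(theta) + b*cos(theta) with
    a = A*cos(phi), b = A*sin(phi). Returns (A, phi, c)."""
    theta = np.asarray(theta, float)
    M = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
    (a, b, c), *_ = np.linalg.lstsq(M, np.asarray(shifts, float), rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), c
```

In the method described above, the recovered phase gives the orientation of the ground plane's vanishing line and the amplitude reflects the magnitude of the camera translation.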