Publications


Featured research published by Christine M. Onyango.


Journal of the Optical Society of America A: Optics, Image Science, and Vision | 2000

Shadow-invariant classification for scenes illuminated by daylight.

John A. Marchant; Christine M. Onyango

A physics-based method for shadow compensation in scenes illuminated by daylight is proposed. If the daylight is represented by a simplified form of the blackbody law and the camera filters are of infinitely narrow bandwidth, the relationship between the red/blue (rm) and green/blue (gm) ratios as the blackbody's temperature changes is a simple power law whose exponent is independent of the surface reflectivity. When the CIE daylight model is used instead of the blackbody and finite bandwidths for the camera are assumed, it is shown that the power law still holds with a slight change to the exponent. This means that images can be transformed into a map of rm/gm^A and then thresholded to yield a shadow-independent classification. The exponent A can be precalculated from the CIE daylight model and the camera filter characteristics. Results are shown for four outdoor images that contain sunny and shadowed parts with vegetation and background. It is shown that the gray-level distributions of the pixels in the transformed images are quite similar for a given component whether or not it is in shadow. The transformation leads to bimodal histograms from which thresholds can easily be selected to give good classifications.
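
As a rough illustration, here is a minimal NumPy sketch of the transformation described above, assuming the exponent A has already been precalculated from the CIE daylight model and the camera filter characteristics (the value used in the comment below is a placeholder, not one from the paper):

```python
import numpy as np

def shadow_invariant_map(rgb, A):
    """Transform an RGB image into the shadow-invariant feature rm / gm**A.

    rgb : float array of shape (H, W, 3) with channels (R, G, B)
    A   : exponent precalculated from the daylight model and camera filters
    """
    eps = 1e-6                      # avoid division by zero in dark pixels
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rm = r / (b + eps)              # red/blue ratio
    gm = g / (b + eps)              # green/blue ratio
    return rm / np.power(gm + eps, A)

# Example: threshold the (ideally bimodal) invariant map to separate
# vegetation from background independently of shadow.
# A = 1.3 is a placeholder value for illustration only.
# mask = shadow_invariant_map(img, A=1.3) > threshold
```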


Computers and Electronics in Agriculture | 2003

Comparison of a Bayesian classifier with a multilayer feed-forward neural network using the example of plant/weed/soil discrimination

John A. Marchant; Christine M. Onyango

The feed-forward neural network has become popular as a classification method in agricultural engineering as well as in other applications. This is despite the fact that statistically based alternatives have been in existence for a considerable time. This paper compares a Bayesian classifier with a multilayer feed-forward neural network in a task from the area of discriminating plants, weeds, and soil in colour images. The principles behind, and the practical implementation of, Bayesian classifiers and neural networks are discussed, as are the advantages and problems of each. Experimental tests are conducted using the same set of training and test data for each classifier. Because the Bayesian classifier is optimal in the sense of minimising the total misclassification error, it should outperform the neural network, and it is shown that this is generally the case. There are significant similarities in the performance of the two classifiers, and understanding why this should be so gives insight into the operation of each, so the paper explores this aspect. In this work, the Bayesian classifier is implemented as a look-up table. Thus any probability function can be represented and the decision surfaces can be of any shape, i.e. the classifier is not restricted to a linear form. On the other hand, it does require a relatively large amount of memory. However, memory requirement is no longer such a major issue in modern computing. Thus, it is concluded that if the number of features is small enough to require a feasible amount of storage, a Bayesian classifier is preferred over a feed-forward neural network.
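
A minimal sketch of a look-up-table Bayesian classifier in the spirit described above, assuming colour features quantised onto a coarse grid; the table resolution, smoothing, and feature scaling here are illustrative choices, not the paper's:

```python
import numpy as np

def build_bayes_lut(features, labels, n_classes, bins=32):
    """Estimate P(class | colour) on a quantised RGB grid.

    features : (N, 3) array of colour values scaled to [0, 1)
    labels   : (N,) integer class labels (e.g. plant / weed / soil)
    """
    idx = np.clip((features * bins).astype(int), 0, bins - 1)
    counts = np.zeros((n_classes, bins, bins, bins))
    for c in range(n_classes):
        sel = idx[labels == c]
        np.add.at(counts[c], (sel[:, 0], sel[:, 1], sel[:, 2]), 1)
    counts += 1e-9                       # avoid divide-by-zero in empty cells
    return counts / counts.sum(axis=0)   # posterior per colour cell

def classify(lut, features, bins=32):
    """Pick the class with the highest posterior for each colour."""
    idx = np.clip((features * bins).astype(int), 0, bins - 1)
    return lut[:, idx[:, 0], idx[:, 1], idx[:, 2]].argmax(axis=0)
```

Because the table stores an arbitrary posterior per cell, the decision surfaces can take any shape; the cost is memory (here 3 x 32^3 cells), which is exactly the trade-off the paper discusses.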


Image and Vision Computing | 2001

Physics-based colour image segmentation for scenes containing vegetation and soil

Christine M. Onyango; John A. Marchant

Colour segmentation of images containing vegetation and soil is the theme of this work. Physics-based reflection models are used to develop an algorithm for separating object pixel clusters in the three-dimensional red, green and blue colour space. The dichromatic reflection model, which is used as the basis for this algorithm, defines a plane in which the pixels from an object of a given colour will lie. The illuminant colour and the intrinsic body colour of the object determine the parameters of the dichromatic plane. Scenes containing objects that differ in colour form multiple dichromatic planes in RGB space, but the illuminant vector is common to all planes. The algorithm therefore counts the number of image pixels that intersect with a plane formed by the illuminant vector and its normal as the plane is rotated around the illuminant vector. In images comprising two objects that differ in colour, vegetation and soil for example, the method produces a bimodal histogram where the two modes correspond to clusters of pixels from the two objects. Data on plant reflectance spectra can be used to identify which of the clusters is vegetation. The performance of the method is assessed using receiver operating characteristic curves, and the probability of misclassification is measured.
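
The rotating-plane count can be implemented as a histogram of pixel angles about the illuminant axis. A minimal sketch under that reading, assuming the illuminant vector is already known (e.g. measured from a calibration surface):

```python
import numpy as np

def dichromatic_angle_histogram(pixels, illuminant, bins=180):
    """Count pixels swept by a half-plane rotating about the illuminant axis.

    pixels     : (N, 3) RGB values
    illuminant : (3,) illuminant colour vector, common to all planes
    """
    L = illuminant / np.linalg.norm(illuminant)
    u = np.cross(L, [0.0, 0.0, 1.0])       # basis vector perpendicular to L
    if np.linalg.norm(u) < 1e-6:           # degenerate: L along the z-axis
        u = np.cross(L, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(L, u)                     # second perpendicular basis vector
    angles = np.arctan2(pixels @ v, pixels @ u)
    return np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
```

For a two-component scene the histogram should show two modes, one per dichromatic plane, and a threshold between them segments vegetation from soil.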


Computers and Electronics in Agriculture | 2001

Evaluation of an imaging sensor for detecting vegetation using different waveband combinations

John A. Marchant; Hans Jørgen Andersen; Christine M. Onyango

This paper uses data collected from an earlier reported imaging sensor to investigate the classification of vegetation from background. The sensor uses three wavebands: red, green, and near infra-red (NIR). A classification method (the alpha method) is introduced which is based on a model of the light source and the reflecting surface. The alpha method is compared with two ratio methods of classification (red/NIR and red/green) and two single-waveband methods of classification (NIR and green intensity). The receiver operating characteristic (ROC) curve is used to evaluate the classifications on realistic test images. ROCs compare the ‘true positive ratio’ with the ‘false positive ratio’ as the classification parameter varies. The area under the ROC gives a measure of how well an algorithm performs. Measurements on the ROC show that the alpha and ratio methods all perform reasonably well, with the red/green ratio giving slightly poorer performance than the alpha method and the red/NIR ratio. The single-waveband methods perform significantly less well, with green intensity easily the worst. The alpha and ratio methods have ‘best’ thresholds that correspond with detectable histogram features when there is a significant amount of vegetation in the image. The physical basis for the alpha method means that there is a detectable mode in the histogram that corresponds with the ‘best’ threshold even when there is only a small amount of vegetation. The single-waveband methods do not produce histograms that can easily be analysed, so their use should be confined to simple images.
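
The ROC evaluation itself is straightforward to reproduce. A self-contained sketch, assuming per-pixel scores oriented so that larger values mean “more vegetation-like” and boolean ground truth:

```python
import numpy as np

def roc_curve(scores, truth):
    """True and false positive ratios as the decision threshold varies.

    scores : (N,) per-pixel classifier outputs, larger = more vegetation-like
    truth  : (N,) boolean ground truth, True = vegetation
    """
    order = np.argsort(-scores)        # sweep the threshold from high to low
    truth = truth[order]
    tpr = np.cumsum(truth) / truth.sum()
    fpr = np.cumsum(~truth) / (~truth).sum()
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the ROC: the paper's summary measure of performance."""
    return np.trapz(tpr, fpr)
```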


Cybernetics and Systems | 2004

Dealing with color changes caused by natural illumination in outdoor machine vision

John A. Marchant; Christine M. Onyango

This paper addresses the problem of color changes caused by variation in natural illumination, both in intensity and spectral content. Dealing with intensity changes is relatively easy using color ratios, whereas changes in spectral content present more of a problem. Our previous work has shown that natural daylight can be represented in a form whereby it is possible to derive a monochrome image from a 3-band color image that is invariant to illumination spectral changes. Here we illustrate the method with an application from one of our important problem domains, precision treatment of agricultural crops using cybernetic machinery. We collect images of two components, vegetation and soil, and derive statistical models of the pixels that make up the images for three illumination conditions. We show that the models for the same component are quite different when using color ratios but much more similar when using our invariant transformation as the illumination changes. We conclude that cybernetic applications of outdoor machine vision must be able to deal with illumination spectral changes and that our invariant transformation is a suitable method to do this.
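
One simple way to quantify “much more similar”, not necessarily the statistical models used in the paper, is to fit a Gaussian to the pixel features of each component under each illumination and compare the fits with a Bhattacharyya distance:

```python
import numpy as np

def gaussian_model(features):
    """Mean and covariance of (N, d) per-pixel feature vectors, d >= 2."""
    return features.mean(axis=0), np.cov(features, rowvar=False)

def bhattacharyya(m1, S1, m2, S2):
    """Distance between two Gaussian models; smaller = more similar."""
    S = 0.5 * (S1 + S2)
    d = m1 - m2
    t1 = 0.125 * d @ np.linalg.solve(S, d)
    t2 = 0.5 * np.log(np.linalg.det(S) /
                      np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return t1 + t2

# Fit the vegetation model under two illuminations, once for colour-ratio
# features and once for the invariant transform; the paper's finding is that
# the distance stays far smaller for the invariant features.
```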


Precision Agriculture | 2005

Image Processing Performance Assessment Using Crop Weed Competition Models

Christine M. Onyango; John A. Marchant; Andrea Grundy; Kath Phelps; Richard Reader

Precision treatment of both crops and weeds requires the accurate identification of both types of plant. However, both identification and treatment methods are subject to error, and it is important to understand how misclassification errors affect crop yield. This paper describes the use of a conductance growth model to quantify the effect of misclassification errors caused by an image analysis system. Colour, morphology and knowledge about planting patterns have been combined, in an image analysis algorithm, to distinguish crop plants from weeds. As the crop growth stage advances, the algorithm is forced to trade improved crop recognition for reduced weed classification. Depending on the chosen method of weed removal, misclassification may result in inadvertent damage to the crop or even complete removal of crop plants and subsequent loss of yield. However, incomplete removal of weeds might result in competition and subsequent yield reduction. The plant competition model allows prediction of final crop yield after weed or crop removal. The competition model also allows the investigation of the impact on yield of misclassification in the presence of both aggressive and benign weed types. The competition model and the image analysis algorithm have been linked successfully to investigate a range of misclassification scenarios in scenes containing cabbage plants.
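
The coupling of classifier error rates to yield can be illustrated with a deliberately simple stand-in for the conductance competition model; every parameter below is invented for illustration and is not from the paper:

```python
def expected_yield(weed_tpr, crop_fpr, n_crop, n_weed,
                   yield_per_plant=1.0, weed_penalty=0.15):
    """Toy link from classifier error rates to final crop yield.

    weed_tpr     : fraction of weeds correctly identified and removed
    crop_fpr     : fraction of crop plants wrongly removed as weeds
    weed_penalty : fractional yield lost per surviving weed per crop plant
                   (a crude stand-in for the conductance competition model)
    """
    surviving_crop = n_crop * (1.0 - crop_fpr)
    surviving_weeds = n_weed * (1.0 - weed_tpr)
    competition = max(0.0, 1.0 - weed_penalty * surviving_weeds / n_crop)
    return surviving_crop * yield_per_plant * competition

# The trade-off described in the text: raising weed_tpr usually raises
# crop_fpr too, so yield is maximised at an intermediate operating point.
```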


British Machine Vision Conference | 2000

Colour invariance at a pixel

Graham D. Finlayson; Steven D. Hordley; John A. Marchant; Christine M. Onyango

This paper addresses the question of what can be said about the colours in images that is independent of illumination. We make two main assumptions: first, that the illumination can be characterised as Planckian (a realistic assumption for most real scenes); second, that the camera behaves as if it were equipped with narrow-band sensors (true for a large number of cameras). The resulting physics-based method yields a transformation of the original colour image into a grey-scale one that does not vary with illumination. We give results showing invariance under a range of illumination conditions.
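
A common formulation of this kind of invariant, offered here as a sketch rather than the paper's exact derivation, projects log-chromaticity coordinates perpendicular to the direction along which Planckian illumination changes move them; the angle `theta` is camera-specific and found by calibration:

```python
import numpy as np

def illumination_invariant(rgb, theta):
    """Grey-scale image invariant to Planckian illumination changes.

    rgb   : (H, W, 3) image with channels (R, G, B)
    theta : calibrated angle of the invariant direction for this camera
    """
    eps = 1e-6                                   # guard against log(0)
    log_rg = np.log(rgb[..., 0] + eps) - np.log(rgb[..., 1] + eps)
    log_bg = np.log(rgb[..., 2] + eps) - np.log(rgb[..., 1] + eps)
    # Project perpendicular to the illumination direction in log space.
    return log_rg * np.cos(theta) + log_bg * np.sin(theta)
```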


International Journal of Imaging Systems and Technology | 2000

Design and operation of an imaging sensor for detecting vegetation

Hans Jørgen Andersen; Christine M. Onyango; John A. Marchant

There is a need to sense vegetation from ground-based vehicles so that plants can be treated in a selective way, thus saving on crop treatment measures. This paper introduces a sensor for detecting vegetation under natural illumination that uses three filters, red, green, and near infra-red (NIR), with a monochrome charge-coupled device (CCD) camera. The sensor design and the data handling are based on the physics of illumination, reflection from the vegetation, transmission through the filters, and interception at the CCD. In order to model the spectral characteristics of the daylight in the NIR, we extend an existing standard using a blackbody model. We derive suitable filters, develop a methodology for balancing the sensitivity of each channel, and collect image data for a range of illumination conditions and two crop types. We present results showing that the sensor behaves as we predict. We also show that clusters form in a measurement space consisting of the red and NIR chromaticities in accordance with their expected position and shape. Presentation in this space gives a good separation of the vegetation and non-vegetation clusters, which will be suitable for physically based classification methods to be developed in future work.
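
The measurement space is simply the normalised channel responses. A minimal sketch, with the channel names assumed from the text:

```python
import numpy as np

def red_nir_chromaticities(red, green, nir):
    """Red and NIR chromaticities forming the sensor's measurement space."""
    eps = 1e-6
    total = red + green + nir + eps
    return red / total, nir / total

# Vegetation reflects strongly in the NIR and weakly in the red, so its
# pixels should cluster at high NIR / low red chromaticity, apart from soil.
```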


Image and Vision Computing | 2003

Model-based control of image acquisition

John A. Marchant; Christine M. Onyango

We propose two methods for controlling the acquisition of images using a camera/digitiser combination, both of which seek to make good use of the dynamic range of the digitiser. The system controls are the black and white reference levels of the digitiser and the exposure time of the CCD sensor. We use the grey-level histogram to characterise the level of control. Both methods use models of the camera/digitiser and of the grey-level distribution in the scene. These allow control values that will achieve a given result to be predicted from the current grab and used on the next one. Thus the methods use feed-forward control, taking advantage of the models to achieve a fast response. The first method, the pragmatic method, attempts to adjust the controls to achieve target values of histogram position and scale. The second method, the information-theoretic method, seeks to maximise the information content of the histogram as measured by its entropy. An advantage of the information-theoretic method is that it produces a single measure of performance, which we use in a strategy for including the exposure variable in the control system. Having a single measure avoids the difficult problem of choosing rather arbitrary weighting factors for the position and scale errors in the pragmatic method. We test both methods using stored images, simulating various grab conditions. Both methods perform well, producing effective control values from simulated grabs containing significant saturation. We test the second method on-line using real grabs and show fast and accurate recovery from disturbances of illumination and scene content.
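
The entropy measure is easy to state concretely. A short sketch, with the feed-forward control step left as a comment because it depends on the camera/digitiser model:

```python
import numpy as np

def histogram_entropy(gray, levels=256):
    """Entropy of the grey-level histogram, the single performance measure
    the information-theoretic method seeks to maximise."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                    # 0 * log 0 is taken as 0
    return -np.sum(p * np.log2(p))

# Feed-forward step (model-dependent, so only sketched): from the current
# grab, the camera/digitiser model predicts the black/white reference and
# exposure settings that maximise this entropy on the next grab, rather
# than searching for them iteratively.
```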


Intelligent Robots and Systems | 1990

Image-guided robotics for the automation of micropropagation

Robin D. Tillett; F. R. Brown; Nigel J. B. McFarlane; Christine M. Onyango; P. F. Davis; John A. Marchant

Describes a robot workstation used as a test bed for evaluating tools and techniques for automating micropropagation. A number of novel plant manipulation tools have been developed. These tools are mounted on a Cartesian-geometry robot driven by stepper motors to perform the required harvesting, cutting and planting movements. Overall robot direction is provided by a program which is updated according to the shape, position and orientation of a particular microplant. Information from a video camera can either be displayed on a monitor and assessed by an operator, or can be analysed by a vision processing system. This paper describes the robot workstation and discusses the various vision analysis tasks necessary for automatic robot guidance.
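
The paper does not detail its vision algorithms, but image moments are one standard way to recover the shape, position and orientation cues mentioned above from a segmented microplant; a sketch under that assumption:

```python
import numpy as np

def microplant_pose(mask):
    """Centroid and principal-axis orientation of a binary plant mask."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                    # position (centroid)
    mu20 = ((xs - cx) ** 2).mean()                   # second-order central
    mu02 = ((ys - cy) ** 2).mean()                   #   moments
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # orientation angle
    return (cx, cy), theta
```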

Collaboration


Dive into Christine M. Onyango's collaboration.

Top Co-Authors

John A. Marchant

University of Bedfordshire

Robin D. Tillett

University of Bedfordshire
