Katia Lebart
Heriot-Watt University
Publications
Featured research published by Katia Lebart.
international conference on pattern recognition | 2004
F. Imbault; Katia Lebart
Support vector machines (SVMs) are both mathematically well-founded and efficient in a large number of real-world applications. However, the classification results depend strongly on the parameters of the model: the scale of the kernel and the regularization parameter. Estimating these parameters is referred to as tuning. Tuning requires estimating the generalization error and finding its minimum over the parameter space. Classical methods use a local minimization approach. After empirically showing that the tuning of parameters presents local minima, we investigate in this paper the use of global minimization techniques, namely genetic algorithms and simulated annealing. The latter approach is compared to the standard tuning frameworks and provides a more reliable tuning method.
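A minimal sketch of the general idea, not the paper's method: tuning an RBF-SVM's regularization parameter C and kernel scale gamma with a simple simulated-annealing search over a cross-validated error estimate, as a global alternative to local or grid tuning. The data set, cooling schedule, and step sizes below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def cv_error(log_C, log_gamma):
    """Estimate the generalization error by 5-fold cross-validation."""
    clf = SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma, kernel="rbf")
    return 1.0 - cross_val_score(clf, X, y, cv=5).mean()

# Simulated annealing in log10 parameter space.
state = np.array([0.0, -1.0])                      # initial (log C, log gamma)
err = cv_error(*state)
best_state, best_err = state, err
T = 1.0
for step in range(200):
    cand = state + rng.normal(scale=0.5, size=2)   # random neighbour
    cand_err = cv_error(*cand)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if cand_err < err or rng.random() < np.exp((err - cand_err) / T):
        state, err = cand, cand_err
    if err < best_err:
        best_state, best_err = state, err
    T *= 0.98                                      # cooling schedule

print("best (C, gamma):", 10.0 ** best_state, "CV error:", best_err)
```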
IEEE Journal of Oceanic Engineering | 2003
Katia Lebart; Chris Smith; Emanuele Trucco; David M. Lane
It is often the case that only a few sparse sequences of long videos from scientific underwater surveys actually contain important information for the expert. Locating such sequences is time-consuming and tedious. A system that automatically detects those critical parts, online or during post-mission tape analysis, would alleviate the expert's workload and improve data exploitation. In this paper, a methodology for evaluating the performance of such a system on real data is presented. Interesting sequences typically begin with changes of visual context. An algorithm to detect significant context changes in benthic videos in real time was presented by Lebart et al. in 2000. It is used as an illustration of this methodology: its performance is studied and benchmarked on real underwater data, ground-truthed by an expert biologist. Various issues relating to the complexity of the problem of automatically analyzing underwater video are also discussed.
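As a rough illustration of this kind of benchmarking (not the paper's protocol): the frame indices at which a context-change detector fires can be matched against expert ground-truth annotations, counting a detection as correct if it falls within a tolerance window. The frame numbers and tolerance below are illustrative assumptions.

```python
import numpy as np

def match_events(detected, ground_truth, tol_frames=25):
    """Greedy one-to-one matching of detected change points to annotations."""
    gt = list(ground_truth)
    tp = 0
    for d in sorted(detected):
        hits = [g for g in gt if abs(g - d) <= tol_frames]
        if hits:
            gt.remove(min(hits, key=lambda g: abs(g - d)))
            tp += 1
    fp = len(detected) - tp
    fn = len(ground_truth) - tp
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    return precision, recall

detected = [120, 480, 910, 1500]          # frames where the detector fired
ground_truth = [130, 900, 1495, 2100]     # frames annotated by the expert
print(match_events(detected, ground_truth))
```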
international conference on advances in pattern recognition | 2005
D. B. Redpath; Katia Lebart
It is possible to reduce the error rate of a single classifier using a classifier ensemble. However, any gain in performance is undermined by the increased computation of performing classification several times. Here the AdaboostFS algorithm is proposed, which builds on two popular areas of ensemble research: Adaboost and Ensemble Feature Selection (EFS). The aim of AdaboostFS is to reduce the number of features used by each base classifier and hence the overall computation required by the ensemble. To do this, the algorithm combines a regularised version of Boosting, AdaboostReg [1], with a floating feature search for each base classifier. AdaboostFS is compared on four benchmark data sets to AdaboostAll, which uses all features, and to AdaboostRSM, which uses a random selection of features. Performance is assessed based on error rate, ensemble error and diversity, and the total number of features used for classification. Results show that AdaboostFS achieves a lower error rate and higher diversity than AdaboostAll, and achieves a lower error rate and comparable diversity to AdaboostRSM. However, compared with the other methods, AdaboostFS significantly reduces the number of features required for classification, both in each base classifier and in the ensemble as a whole.
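A simplified sketch of boosting with per-classifier feature selection, in the spirit of the approach described above: each round fits its base learner on a small feature subset chosen by a greedy forward search under the current boosting weights. The actual algorithm uses AdaboostReg and a floating search; this sketch uses plain AdaBoost weight updates, forward selection only, and small decision trees as illustrative base learners.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=1)
y = 2 * y - 1                                     # labels in {-1, +1}
n, d = X.shape
w = np.ones(n) / n                                # boosting sample weights
ensemble = []                                     # (alpha, features, classifier)

def weighted_error(clf, feats):
    pred = clf.predict(X[:, feats])
    return np.sum(w[pred != y])

for t in range(10):                               # 10 boosting rounds
    feats = []
    # Greedy forward search for up to 3 features per base classifier.
    for _ in range(3):
        best_f, best_err, best_clf = None, np.inf, None
        for f in range(d):
            if f in feats:
                continue
            cand = feats + [f]
            clf = DecisionTreeClassifier(max_depth=2, random_state=0)
            clf.fit(X[:, cand], y, sample_weight=w)
            err = weighted_error(clf, cand)
            if err < best_err:
                best_f, best_err, best_clf = f, err, clf
        feats.append(best_f)
    err = max(best_err, 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)         # standard AdaBoost weight
    pred = best_clf.predict(X[:, feats])
    w *= np.exp(-alpha * y * pred)                # re-weight the samples
    w /= w.sum()
    ensemble.append((alpha, feats, best_clf))

# Ensemble prediction: sign of the weighted vote over reduced-feature learners.
votes = sum(a * clf.predict(X[:, f]) for a, f, clf in ensemble)
print("training error:", np.mean(np.sign(votes) != y))
```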
Pattern Recognition Letters | 2004
Miguel Arredondo; Katia Lebart; David M. Lane
Motion estimation is a key problem in the analysis of image sequences. From a sequence of images we can only estimate an approximation of the image motion field, called optical flow. We propose to improve optical flow estimation by including information from images of textural features. We compute the optical flow from the intensity and textural images using first-order derivatives, then combine the estimates using the spatial gradient as a confidence measure. Experimental results on images for which the ground-truth optical flow is known show clearly that the estimate improves when estimates from textural images are included. Experiments with several underwater images also show a qualitative improvement.
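A rough sketch of the combination idea, not the paper's estimator: compute one flow field from the intensity images and one from a texture image (here a simple local-variance texture measure), and blend them per pixel using the spatial gradient magnitude as a confidence weight. Farneback flow stands in for the first-order derivative method, and the frames are synthetic.

```python
import cv2
import numpy as np

def texture_image(img, ksize=7):
    """Local standard deviation as a crude texture feature."""
    f = img.astype(np.float32)
    mean = cv2.blur(f, (ksize, ksize))
    mean_sq = cv2.blur(f * f, (ksize, ksize))
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0)).astype(np.uint8)

def flow(a, b):
    return cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 3, 15, 3, 5, 1.2, 0)

def confidence(img):
    """Spatial gradient magnitude, used to weight the flow estimates."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    return np.sqrt(gx * gx + gy * gy) + 1e-6

# Two synthetic frames: a noise pattern shifted by a few pixels.
rng = np.random.default_rng(0)
frame0 = (rng.random((128, 128)) * 255).astype(np.uint8)
frame1 = np.roll(frame0, shift=(2, 3), axis=(0, 1))

tex0, tex1 = texture_image(frame0), texture_image(frame1)
flow_int, flow_tex = flow(frame0, frame1), flow(tex0, tex1)

# Per-pixel weighted combination of the intensity and texture flow estimates.
c_int = confidence(frame0)[..., None]
c_tex = confidence(tex0)[..., None]
combined = (c_int * flow_int + c_tex * flow_tex) / (c_int + c_tex)
print("mean combined flow (dx, dy):", combined.reshape(-1, 2).mean(axis=0))
```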
oceans conference | 2000
Katia Lebart; Emanuele Trucco; D.M. Lane
It is often the case that only sparse sequences of videos from scientific underwater surveys actually contain important information for the expert. A system that automatically detects those critical parts, particularly during post-mission tape analysis, would alleviate the expert's workload and improve data exploitation. The authors present a novel set of algorithms to detect significant context changes in benthic videos in real time. The detectors presented rely on an unsupervised image classification scheme: the changes over time in the image contents are analysed in the feature space. The algorithms are explained in detail, and experimental results with real underwater images are reported. Various issues related to the complexity of the problem of automatically analysing underwater videos are also discussed.
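A minimal sketch of the general idea, not the paper's detectors: describe each frame by a feature vector (here a grey-level histogram), track a running reference in feature space, and flag a context change when the current frame's distance to that reference exceeds a threshold. The feature, distance, and threshold are illustrative stand-ins for the unsupervised classification scheme used in the paper.

```python
import numpy as np

def frame_features(frame, bins=32):
    """Normalised grey-level histogram as a simple per-frame feature vector."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
    return hist / hist.sum()

def detect_changes(frames, alpha=0.05, threshold=0.25):
    """Yield frame indices where the feature distance to the running
    reference exceeds the threshold; the reference then resets."""
    reference = frame_features(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        f = frame_features(frame)
        dist = 0.5 * np.abs(f - reference).sum()   # L1 histogram distance
        if dist > threshold:
            yield i
            reference = f                          # reset to the new context
        else:
            reference = (1 - alpha) * reference + alpha * f

# Synthetic video: dark frames, then a brighter "context" from frame 50.
rng = np.random.default_rng(0)
frames = [rng.normal(60, 10, (64, 64)) for _ in range(50)]
frames += [rng.normal(160, 10, (64, 64)) for _ in range(50)]
print(list(detect_changes(frames)))
```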
Passive millimetre-wave and terahertz imaging and technology | 2004
Christopher D. Haworth; Beatriz Grafulla Gonzalez; Mathilde Tomsin; Roger Appleby; Peter R. Coward; Andrew R. Harvey; Katia Lebart; Yvan Petillot; Emanuele Trucco
Video-frame-rate millimetre-wave imaging has recently been demonstrated with a quality similar to that of a low-quality uncooled thermal imager. In this paper we discuss initial investigations into the transfer of image processing algorithms from more mature imaging modalities to millimetre-wave imagery. The current aim is to develop body segmentation algorithms for use in object detection and analysis. However, this requires a variety of image processing algorithms from different domains, including image de-noising, segmentation and motion tracking. This paper focuses on results from the segmentation of a body from the millimetre-wave images, and a qualitative comparison of different approaches is presented. Their performance is analysed and any characteristics which enhance or limit their application are discussed. While it is possible to apply image processing algorithms developed for the visible band directly to millimetre-wave images, the physics of the image formation process is very different. This paper discusses the potential for exploiting an understanding of the physics of image formation in the image segmentation process to enhance classification of scene components and, thereby, improve segmentation performance. This paper presents some results from a millimetre-wave image formation simulator, including synthetic images with multiple objects in the scene.
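To illustrate what transferring a simple visible-band segmentation approach looks like in practice (not one of the paper's evaluated methods): global Otsu thresholding followed by morphological cleanup and selection of the largest connected component as the body region. The input below is a synthetic noisy image, not real millimetre-wave data.

```python
import cv2
import numpy as np

def segment_body(img):
    blurred = cv2.GaussianBlur(img, (9, 9), 0)              # suppress noise
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return mask
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])    # skip background
    return (labels == largest).astype(np.uint8) * 255

# Synthetic scene: a bright elliptical "body" on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(50, 15, (240, 160)).clip(0, 255).astype(np.uint8)
cv2.ellipse(img, (80, 120), (40, 100), 0, 0, 360, 180, -1)
print("body pixels:", int((segment_body(img) > 0).sum()))
```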
Pattern Recognition Letters | 2006
Beatriz Grafulla-González; Katia Lebart; Andrew R. Harvey
We describe the physical-optics modelling of a millimetre-wave imaging system intended to enable automated detection of threats hidden under clothes. This paper outlines the theoretical basis of the formation of millimetre-wave images and describes the model of the simulated imaging system. Results of simulated images are presented and validated against real images. Finally, we present a brief study of the materials that could potentially be classified by this system.
international conference on multiple classifier systems | 2005
D. B. Redpath; Katia Lebart
This paper presents a study of the Boosting Feature Selection (BFS) algorithm [1], a method which incorporates feature selection into Adaboost. Such an algorithm is interesting as it combines the methods studied by Boosting and ensemble feature selection researchers. Observations are made on generalisation, weighted error and error diversity to compare the algorithm's performance to Adaboost while using a nearest-mean base learner. Ensemble feature prominence is proposed as a stop criterion for ensemble construction, and its quality is assessed using the former performance measures. BFS is found to compete with Adaboost in terms of performance, despite the reduced feature description for each base classifier. This is explained using weighted error and error diversity. Results show the proposed stop criterion to be useful for trading off ensemble performance and complexity.
international conference on pattern recognition | 2005
Beatriz Grafulla-González; Christopher D. Haworth; Andrew R. Harvey; Katia Lebart; Yvan Petillot; Yves de Saint-Pern; Mathilde Tomsin; Emanuele Trucco
The ATRIUM project aims at the automatic detection of threats hidden under clothes using millimetre-wave imaging. We describe a simulator of realistic millimetre-wave images and a system for detecting metallic weapons automatically. The latter employs two stages, detection and tracking. We present a detector for metallic objects based on mixture models, and a target tracker based on particle filtering. We show convincing simulated millimetre-wave images of the human body with and without hidden threats, including a comparison with real images, and very good detection and tracking performance on eight real sequences. (International Workshop on Pattern Recognition for Crime Prevention, Security and Surveillance)
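A rough sketch of a mixture-model pixel detector in the spirit described above, not the ATRIUM design: fit a two-component Gaussian mixture to intensities inside the body region and flag pixels assigned to the minority component as candidate threats. The synthetic intensities and the choice of minority component are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic body pixels: mostly "skin" intensities plus a small metallic patch.
body = rng.normal(120, 8, size=5000)
metal = rng.normal(40, 5, size=200)            # metallic objects imaged darker here
pixels = np.concatenate([body, metal]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
labels = gmm.predict(pixels)

# Treat the minority component as the candidate-threat class.
threat_component = np.argmin(np.bincount(labels))
candidates = np.flatnonzero(labels == threat_component)
print("candidate threat pixels:", candidates.size,
      "component means:", gmm.means_.ravel())
```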
europe oceans | 2005
David B. Redpath; Katia Lebart; Chris Smith
This paper presents an experimental protocol developed for the design, performance estimation and comparison of underwater video classifier systems. Such systems have to be designed using application data that is small, sparse and extremely variable. The proposed protocol uses outlier rejection, data pairing, Bootstrap performance estimation and hypothesis testing to achieve a robust performance estimate and comparison between classifier designs. The protocol is demonstrated and assessed on an application experiment. The application involves the design of a classification system for the automated detection of trawling marks from mission video. Two systems are proposed using selective and geometric feature types and an ensemble classifier. The protocol robustly identifies differences between the two proposed system designs using error and discrimination rates. Overall the geometric feature system is chosen as the final system. The protocol was also compared with other performance estimates and found to have the closest match to actual test data performance.
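A simplified sketch of the paired-bootstrap comparison idea, not the full protocol: estimate each classifier's error on the same bootstrap resamples (using the out-of-bag samples as the test set) and apply a paired hypothesis test to the two error sequences. The data, the two classifiers, and the specific test are illustrative assumptions.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=15, random_state=2)
rng = np.random.default_rng(2)
clf_a, clf_b = KNeighborsClassifier(5), LogisticRegression(max_iter=1000)

errors_a, errors_b = [], []
for _ in range(100):                               # 100 bootstrap replicates
    idx = rng.integers(0, len(y), len(y))          # resample with replacement
    oob = np.setdiff1d(np.arange(len(y)), idx)     # out-of-bag test samples
    for clf, errs in ((clf_a, errors_a), (clf_b, errors_b)):
        clf.fit(X[idx], y[idx])
        errs.append(np.mean(clf.predict(X[oob]) != y[oob]))

# Paired test on the two error sequences obtained from the same resamples.
stat, p = wilcoxon(errors_a, errors_b)
print("mean errors:", np.mean(errors_a), np.mean(errors_b), "p-value:", p)
```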