O. de Vel
James Cook University
Publication
Featured research published by O. de Vel.
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1997
Y. Mallet; Danny Coomans; J. Kautsky; O. de Vel
A major concern arising from the classification of spectral data is that the number of variables, or dimensionality, often exceeds the number of available spectra. This leads to a substantial deterioration in the performance of traditionally favoured classifiers. It becomes necessary to reduce the number of variables to a manageable size while, at the same time, retaining as much discriminatory information as possible. We present a new technique based on adaptive wavelets which aims to reduce the dimensionality and optimize the discriminatory information. The discrete wavelet transform is used to produce wavelet coefficients, which are then used for classification. Rather than using one of the standard wavelet bases, we generate the wavelet which optimizes specified discriminant criteria.
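The coefficient-extraction step described above can be sketched in a few lines. This is only an illustrative assumption: it uses the fixed Haar basis, whereas the paper's contribution is precisely to replace such a standard basis with an adaptively optimized one.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficients. The paper's adaptive
    method would replace these fixed Haar filters with filters optimised
    for a discriminant criterion; Haar is used here only for illustration.
    """
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass (scaling) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass (wavelet) coefficients
    return s, d

def wavelet_features(spectrum, levels=3):
    """Stack detail coefficients from several levels as a feature vector
    suitable for feeding to a classifier."""
    feats = []
    s = np.asarray(spectrum, dtype=float)
    for _ in range(levels):
        s, d = haar_dwt(s)
        feats.append(d)
    feats.append(s)               # keep the coarsest approximation too
    return np.concatenate(feats)

spectrum = np.sin(np.linspace(0, 4 * np.pi, 64))
f = wavelet_features(spectrum)
print(f.shape)  # (64,)
```

Because the transform is orthogonal, the 64 input values map to 64 coefficients with total energy preserved, so no information is lost before the classifier sees the features.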
Chemometrics and Intelligent Laboratory Systems | 1996
Y. Mallet; Danny Coomans; O. de Vel
There are basically two strategies which can be used to discriminate high-dimensional spectral data. It is common practice to first reduce the dimensionality by some feature-extraction preprocessing method and then use an appropriate (low-dimensional) classifier. An alternative procedure is to use a (high-dimensional) classifier capable of handling a large number of variables. We introduce some novel dimension-reducing techniques as well as low- and high-dimensional classifiers which have evolved only recently. The discrete wavelet transform is introduced as a method for extracting features. The Fourier transform, principal component analysis, stepwise strategies, and other variable selection methods for reducing the dimensionality are also discussed. The low-dimensional classifier flexible discriminant analysis is a new method which combines nonparametric regression with Fisher's linear discriminant analysis to achieve nonlinear decision boundaries. We also discuss some of the time-honoured techniques, such as Fisher's linear discriminant analysis and the Bayesian linear and quadratic methods. The modern high-dimensional classifiers we report on are penalized discriminant analysis and regularized discriminant analysis. Each of the classifiers, and a selection of dimensionality-reducing techniques, are applied to the discrimination of seagrass spectral data. Results indicate a promising future for wavelets in discriminant analysis and for the recently introduced flexible and penalized discriminant analysis. Regularized discriminant analysis also performs well.
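Fisher's linear discriminant, the baseline method several of the classifiers above build on, reduces to one linear solve for two classes. A minimal sketch, with synthetic data standing in for the seagrass spectra:

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher's linear discriminant direction for two classes:
    w is proportional to Sw^{-1} (m1 - m0), maximising between-class
    relative to within-class scatter. Projecting samples onto w gives
    the one-dimensional score used for classification."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))  # pooled within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(50, 4))   # class 0
X1 = rng.normal(1.0, 1.0, size=(50, 4))   # class 1, shifted mean
w = fisher_direction(X0, X1)
# Since Sw^{-1} is positive definite, class 1's mean projection always
# exceeds class 0's along w, so a threshold on X @ w separates the classes.
```

Flexible discriminant analysis replaces the linear projection here with a nonparametric regression fit, which is what yields the nonlinear decision boundaries mentioned above.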
international conference on algorithms and architectures for parallel processing | 1995
Ling Shi; O. de Vel; Jiannong Cao; M. Cosnard
Monitoring program execution in a distributed system can generate large quantities of data, and the collection and processing of the monitoring data is one of the primary factors contributing to the complexity of distributed monitoring. In order to reduce this complexity, a hierarchical distributed performance monitoring system has been developed. In this paper we describe an optimization method to improve the efficiency of the monitoring system. By considering the topology used by the application program and the distribution of monitoring records, an optimized grouping can be determined to obtain improved performance for the monitoring system. The experiments presented in this paper demonstrate such an improvement in performance.
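One simple way to picture the grouping problem above is balancing monitoring-record load across intermediate monitors. This greedy sketch is an assumption for illustration only; the paper's method also weighs the application's communication topology, which is omitted here.

```python
import heapq

def group_processes(record_counts, n_groups):
    """Greedily assign each monitored process to the group with the
    smallest current record load, so that no intermediate monitor in the
    hierarchy becomes a bottleneck. Processes are placed largest-first,
    a standard heuristic for balanced partitioning."""
    heap = [(0, g, []) for g in range(n_groups)]   # (load, group id, members)
    heapq.heapify(heap)
    for proc, count in sorted(record_counts.items(), key=lambda kv: -kv[1]):
        load, g, members = heapq.heappop(heap)
        members.append(proc)
        heapq.heappush(heap, (load + count, g, members))
    return [members for _, _, members in sorted(heap, key=lambda t: t[1])]

# hypothetical per-process monitoring-record counts
counts = {"p0": 90, "p1": 10, "p2": 50, "p3": 40, "p4": 60}
groups = group_processes(counts, 2)
print(groups)  # two groups with loads 130 and 120
```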
international conference on pattern recognition | 1998
O. de Vel; S. Aeberhard
Data Handling in Science and Technology | 2000
Y. Mallet; O. de Vel; Danny Coomans
SIAM Journal on Scientific Computing | 2000
S. Aeberhard; O. de Vel; Danny Coomans
Variable selection is an important technique for reducing the dimensionality in multivariate predictive discriminant analysis and classification. In the past, direct evaluation of the subsets by means of a classifier has been computationally too expensive, rendering necessary the use of heuristic measures of class separation, such as Wilks' Λ or the Mahalanobis distance between class means. We present new fast algorithms for stepwise variable selection based on quadratic and linear classifiers, with time complexities which, to within a constant, are the same as those of methods applying measures of class separation. Comparing the new algorithms to previous implementations of classifier-based variable selection, we show that dramatic speed-ups are achieved.
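The classifier-based stepwise (forward) selection evaluated in the SIAM paper above can be sketched as follows. The nearest-class-mean classifier and the toy data are stand-ins for illustration, not the quadratic and linear classifiers the paper actually accelerates:

```python
import numpy as np

def nearest_mean_acc(X, y, cols):
    """Resubstitution accuracy of a nearest-class-mean rule on a column subset."""
    Xs = X[:, cols]
    means = {c: Xs[y == c].mean(axis=0) for c in np.unique(y)}
    preds = [min(means, key=lambda c: np.linalg.norm(row - means[c])) for row in Xs]
    return np.mean(np.array(preds) == y)

def forward_select(X, y, k):
    """Greedy forward selection: at each step, add the variable that most
    improves classifier accuracy. Direct evaluation like this was
    historically too costly; the paper's contribution is fast updating
    that brings the cost down to that of heuristic separation measures."""
    selected = []
    for _ in range(k):
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        best = max(remaining, key=lambda j: nearest_mean_acc(X, y, selected + [j]))
        selected.append(best)
    return selected

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 30)
X = rng.normal(size=(60, 6))
X[:, 2] += 3 * y            # only column 2 carries class information
sel = forward_select(X, y, 2)
print(sel)                  # column 2 is chosen first
```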
Data Handling in Science and Technology | 2000
Y. Mallet; Danny Coomans; O. de Vel
View-based recognition is a simple, relatively robust method for object recognition. Current techniques use small, simplistic object databases and, in many cases, require long training and/or recognition times. We propose an extension to the view-based object recognition paradigm using lines of 2D image views together with a k-NN classifier that achieves high generalisation recognition rates with reduced computational times compared with other, more elaborate recognition algorithms.
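The view-based k-NN idea can be sketched in a few lines. Note the assumption: raw flattened views are matched here to keep the sketch self-contained, whereas the work above matches line segments extracted from the 2D views.

```python
import numpy as np

def knn_classify(views, labels, query, k=3):
    """k-NN over stored 2-D views flattened to vectors: the query is
    assigned the majority label among its k nearest stored views."""
    dists = np.linalg.norm(views - query, axis=1)
    vals, counts = np.unique(labels[np.argsort(dists)[:k]], return_counts=True)
    return vals[np.argmax(counts)]

rng = np.random.default_rng(2)
# two synthetic "objects", each stored as several noisy 8x8 views
templates = {"cube": rng.normal(size=64), "cone": rng.normal(size=64)}
views, labels = [], []
for name, t in templates.items():
    for _ in range(5):
        views.append(t + 0.1 * rng.normal(size=64))
        labels.append(name)
views, labels = np.array(views), np.array(labels)

query = templates["cube"] + 0.1 * rng.normal(size=64)
print(knn_classify(views, labels, query))  # cube
```

Because recognition is a nearest-neighbour lookup over stored views, there is no separate training phase, which is the source of the reduced computational times claimed above.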
international conference on pattern recognition | 1998
S. Aeberhard; O. de Vel
This chapter discusses various aspects of the wavelet transform when applied to continuous functions or signals. Wavelets form a set of basis functions which linearly combine to represent a function f(t) from the space of square-integrable functions L²(ℝ). Functions in this space have finite energy. Because wavelet basis functions linearly combine to represent functions from L²(ℝ), they are from this space as well. The reason for choosing this space largely relates to the properties of the L² norm. The wavelet basis functions are derived by translating and dilating one basic wavelet, called a “mother wavelet.” The dilated and translated wavelet basis functions are called “children wavelets.” The wavelet coefficients are the coefficients in the expansion of the wavelet basis functions. The wavelet transform is the procedure for computing the wavelet coefficients. The wavelet coefficients convey information about the weight that a wavelet basis function contributes to the function. The chapter introduces the continuous wavelet transform and discusses the conditions required for invertibility and the inverse or reconstruction formula.
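The mother-wavelet/children-wavelet construction described above can be sketched directly: dilate by a, translate by b, and correlate with the signal. The Mexican-hat wavelet and the brute-force double loop are illustrative choices, not the chapter's prescription.

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat (Ricker) mother wavelet, a common choice in L²(ℝ)."""
    return (1 - t ** 2) * np.exp(-t ** 2 / 2)

def cwt(signal, scales, dt=1.0):
    """Continuous wavelet transform sampled on a grid: correlate the
    signal with dilated, translated children wavelets psi((t - b)/a)/sqrt(a).
    Each coefficient measures how strongly that child contributes."""
    t = np.arange(len(signal)) * dt
    out = np.empty((len(scales), len(signal)))
    for i, a in enumerate(scales):          # dilation
        for j, b in enumerate(t):           # translation
            psi = mexican_hat((t - b) / a) / np.sqrt(a)
            out[i, j] = np.sum(signal * psi) * dt
    return out

t = np.arange(64.0)
signal = mexican_hat((t - 32) / 4)          # a bump centred at t = 32
coeffs = cwt(signal, scales=[2.0, 4.0, 8.0])
print(np.argmax(np.abs(coeffs[1])))         # 32: strongest response at the bump
```

The coefficient magnitudes peak where the child wavelet's scale and position match a feature in the signal, which is exactly the "weight" interpretation given above.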
international symposium on neural networks | 1995
O. de Vel; S. Wangsuya; Danny Coomans
This chapter provides an overview of wavelet packet transforms and best basis algorithms. The wavelet packet transform (WPT) is an extension of the discrete wavelet transform (DWT). The basic difference between the wavelet packet transform and the wavelet transform relates to which coefficients are passed through the low-pass and high-pass filters. With the wavelet transform, only the scaling coefficients are filtered through each of these filters. With the WPT, not only do the scaling coefficients pass through the low-pass and high-pass filters, but so do the wavelet coefficients. Because both the scaling and wavelet coefficients are filtered, there is a surplus of information stored in the WPT, which has a binary tree structure. An advantage of this redundant information is that it provides greater freedom in choosing an orthogonal basis. The best basis algorithm seeks a basis in the WPT which optimizes some criterion function. Thus, the best basis algorithm is task-specific in that the particular basis depends on the role for which it is used.
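The binary-tree search described above fits in a short recursion. The Haar filters and the entropy-style criterion are illustrative assumptions; the chapter's framework admits any filter pair and any additive cost function.

```python
import numpy as np

def haar_split(x):
    """One low-pass / high-pass Haar filtering step."""
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def cost(x):
    """Additive Coifman-Wickerhauser entropy-style cost; lower = sparser."""
    p = x[np.abs(x) > 1e-12] ** 2
    return float(-np.sum(p * np.log(p))) if p.size else 0.0

def best_basis(x, depth):
    """Wavelet packet best basis: unlike the DWT, BOTH the scaling and the
    wavelet coefficients are refiltered, giving a binary tree of candidate
    bases; a node keeps its children only if their combined cost beats its own."""
    if depth == 0 or len(x) < 2:
        return [x], cost(x)
    s, d = haar_split(x)
    bs, cs = best_basis(s, depth - 1)   # refilter scaling coefficients
    bd, cd = best_basis(d, depth - 1)   # refilter wavelet coefficients too
    if cs + cd < cost(x):
        return bs + bd, cs + cd
    return [x], cost(x)

x = np.sin(np.arange(16) * 0.5)
leaves, c = best_basis(x, depth=3)
print(sum(len(leaf) for leaf in leaves))  # 16
```

Whatever subtree the search keeps, the selected leaves always form an orthogonal basis covering all 16 coefficients, and the chosen basis depends on the cost function, which is the task-specificity noted above.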
Data Handling in Science and Technology | 2000
O. de Vel; Danny Coomans; Y. Mallett
This work describes an image-based face recognition technique using line segments of 2D image views that achieves high generalisation recognition rates for rotations both in and out of the plane, is invariant to scaling, and has reduced execution times. Results show that the algorithm is superior to benchmark algorithms and is able to recognise test views in quasi real-time.