Tim Wilkin
Deakin University
Publications
Featured research published by Tim Wilkin.
Journal of Intelligent Systems | 2015
Tim Wilkin; Gleb Beliakov
Monotonicity with respect to all arguments is fundamental to the definition of aggregation functions. It is also a limiting property that results in many important nonmonotonic averaging functions being excluded from the theoretical framework. This work proposes a definition for weakly monotonic averaging functions, studies some properties of this class of functions, and proves that several families of important nonmonotonic means are actually weakly monotonic averaging functions. Specifically, we provide sufficient conditions for weak monotonicity of the Lehmer mean and generalized mixture operators. We establish weak monotonicity of several robust estimators of location and conditions for weak monotonicity of a large class of penalty-based aggregation functions. These results permit a proof of the weak monotonicity of the class of spatial-tonal filters that includes important members such as the bilateral filter and anisotropic diffusion. Our concept of weak monotonicity provides a sound theoretical and practical basis by which (monotonic) aggregation functions and nonmonotonic averaging functions can be related within the same framework, allowing us to bridge the gap between these previously disparate areas of research.
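For readers unfamiliar with the terminology, the notions named in this abstract can be stated in standard form as follows (the symbols are illustrative and not necessarily the paper's exact notation): weak monotonicity requires only that a uniform shift of all inputs never decreases the output, and the Lehmer mean is one of the nonmonotonic means covered by the results above.

```latex
% Weak monotonicity: a uniform shift of every argument never
% decreases the value of F.
\[
  F(x_1 + a, \dots, x_n + a) \;\ge\; F(x_1, \dots, x_n)
  \qquad \text{for all } a > 0 .
\]
% The Lehmer mean of order p, which is not monotone in general:
\[
  L_p(x_1, \dots, x_n) \;=\;
  \frac{\sum_{i=1}^{n} x_i^{\,p}}{\sum_{i=1}^{n} x_i^{\,p-1}} .
\]
```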
Information Sciences | 2014
Gleb Beliakov; Miguel Pagola; Tim Wilkin
We present a new approach for defining similarity measures for Atanassov's intuitionistic fuzzy sets (AIFS), in which a similarity measure has two components indicating the similarity and hesitancy aspects. We argue that there are at least two facets of uncertainty of an AIFS, one of which is related to fuzziness while the other is related to lack of knowledge or non-specificity. We propose a set of axioms and build families of similarity measures that avoid the counterintuitive examples often used to justify one similarity measure over another. We also investigate a relation to entropies of AIFS, and outline possible applications of our method in decision making and image segmentation.
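As a rough illustration of the two-component idea, the sketch below compares two AIFS element-wise and returns a fuzziness-related score and a hesitancy-related score. This is a generic construction for exposition only, under assumed normalisations; it is not the specific family of measures proposed in the paper.

```python
import numpy as np

def aifs_similarity(mu_a, nu_a, mu_b, nu_b):
    """Illustrative two-component similarity for Atanassov intuitionistic
    fuzzy sets given element-wise membership (mu) and non-membership (nu).
    Generic sketch, not the paper's measures."""
    mu_a, nu_a, mu_b, nu_b = map(np.asarray, (mu_a, nu_a, mu_b, nu_b))
    pi_a = 1.0 - mu_a - nu_a          # hesitancy of A
    pi_b = 1.0 - mu_b - nu_b          # hesitancy of B
    # Component 1: similarity of the fuzzy (membership/non-membership) part.
    s_fuzzy = 1.0 - 0.5 * np.mean(np.abs(mu_a - mu_b) + np.abs(nu_a - nu_b))
    # Component 2: agreement in hesitancy (lack of knowledge).
    s_hes = 1.0 - np.mean(np.abs(pi_a - pi_b))
    return float(s_fuzzy), float(s_hes)
```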
IEEE International Conference on Fuzzy Systems | 2013
Tim Wilkin
Image reduction is a crucial task in image processing, underpinning many practical applications. This work proposes novel image reduction operators based on non-monotonic averaging aggregation functions. The technique of penalty function minimisation is used to derive a novel mode-like estimator capable of identifying the most appropriate pixel value for representing a subset of the original image. Performance of this aggregation function and several traditional robust estimators of location are objectively assessed by applying image reduction within a facial recognition task. The FERET evaluation protocol is applied to confirm that these non-monotonic functions are able to sustain task performance compared to recognition using non-reduced images, as well as significantly improve performance on query images corrupted by noise. These results extend the state of the art in image reduction based on aggregation functions and provide a basis for efficiency and accuracy improvements in practical computer vision applications.
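A minimal sketch of a penalty-based, mode-like estimator of the kind described above is given below, assuming a bounded Welsch-type penalty and a search restricted to the observed pixel values; the paper's exact operator is not reproduced here.

```python
import numpy as np

def mode_like_estimate(block, sigma=10.0):
    """Penalty-based 'mode-like' representative value for a block of pixels.
    Minimises a bounded (Welsch-type) penalty over candidate outputs taken
    from the pixel values themselves, so isolated outliers contribute little.
    Illustrative sketch; kernel choice and bandwidth are assumptions."""
    x = np.asarray(block, dtype=float).ravel()
    candidates = x                                   # search over observed values
    penalties = 1.0 - np.exp(-((x[None, :] - candidates[:, None]) ** 2)
                             / (2.0 * sigma ** 2))
    total = penalties.sum(axis=1)                    # total penalty per candidate
    return candidates[np.argmin(total)]

# Example: a 3x3 block with one noise-corrupted pixel.
print(mode_like_estimate([[12, 13, 11], [12, 250, 13], [11, 12, 14]]))
```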
Information Sciences | 2014
Gleb Beliakov; Tim Wilkin
Density-based means have recently been proposed as a method for dealing with outliers in the stream processing of data. Derived from a weighted arithmetic mean with variable weights that depend on the location of all data samples, these functions are not monotonic and hence cannot be classified as aggregation functions. In this article we establish the weak monotonicity of this class of averaging functions and use this to develop robust generalisations of these means. Specifically, we find that, as proposed, the density-based means are only robust to isolated outliers. However, by using penalty-based formalisms of averaging functions and applying more sophisticated and robust density estimators, we are able to define a broader family of density-based means that are more effective at filtering both isolated and clustered outliers.
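The basic density-based mean can be sketched as follows, assuming a Gaussian kernel density estimate with bandwidth h (the kernel and bandwidth are assumptions; the robust generalisations developed in the paper are not shown).

```python
import numpy as np

def density_based_mean(x, h=1.0):
    """Weighted arithmetic mean whose weights are kernel density estimates
    at each sample's own location, so isolated outliers receive small
    weights.  Illustrative sketch under an assumed Gaussian kernel."""
    x = np.asarray(x, dtype=float)
    # Density estimate at each point, using all points.
    k = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * h ** 2))
    density = k.mean(axis=1)
    weights = density / density.sum()
    return float(np.sum(weights * x))

print(density_based_mean([4.9, 5.1, 5.0, 5.2, 40.0]))   # outlier is down-weighted
```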
International Conference on Information Processing | 2014
Tim Wilkin; Gleb Beliakov; Tomasa Calvo
Averaging behaviour of aggregation functions depends on the fundamental property of monotonicity with respect to all arguments. Unfortunately, this is a limiting property that excludes many important averaging functions from the theoretical framework. We propose a definition for weakly monotone averaging functions that encompasses the averaging aggregation functions in a framework with many commonly used non-monotonic means. Weakly monotonic averages are robust to outliers and noise, making them extremely important in practical applications. We show that several robust estimators of location are actually weakly monotone and we provide sufficient conditions for weak monotonicity of the Lehmer and Gini means and some mixture functions. In particular, we show that mixture functions with Gaussian kernels, which arise frequently in image and signal processing applications, are actually weakly monotonic averages. Our concept of weak monotonicity provides a sound theoretical and practical basis for understanding both monotone and non-monotone averaging functions within the same framework. This allows us to effectively relate these previously disparate areas of research and gain a deeper understanding of averaging aggregation methods.
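The mixture functions referred to above have the general form below; the Gaussian weighting function and its parameters a and sigma are shown for illustration, and the parameter ranges for which weak monotonicity holds are given in the paper itself.

```latex
% A mixture function: a weighted average whose weights depend on the
% inputs themselves (hence it is not monotone in general).
\[
  M_w(x_1, \dots, x_n) \;=\;
  \frac{\sum_{i=1}^{n} w(x_i)\, x_i}{\sum_{i=1}^{n} w(x_i)},
  \qquad w(t) > 0 .
\]
% Gaussian weighting (illustrative shift and width parameters a, sigma):
\[
  w(t) \;=\; \exp\!\left( -\frac{(t - a)^2}{2\sigma^2} \right).
\]
```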
IEEE Transactions on Fuzzy Systems | 2015
Gleb Beliakov; Gang Li; Huy Quan Vu; Tim Wilkin
Certain tasks in image processing require the preservation of fine image details while applying a broad operation to the image, such as image reduction, filtering, or smoothing. In such cases, the objects of interest are typically represented by small, spatially cohesive clusters of pixels which are to be preserved or removed, depending on the requirements. When images are corrupted by noise or contain intensity variations generated by imaging sensors, identification of these clusters within the intensity space is problematic as they are corrupted by outliers. This paper presents a novel approach to accounting for the spatial organization of the pixels and to measuring the compactness of pixel clusters, based on the construction of fuzzy measures with specific properties: monotonicity with respect to cluster size; invariance with respect to translation, reflection, and rotation; and discrimination between pixel sets of fixed cardinality with different spatial arrangements. We present construction methods based on Sugeno-type fuzzy measures, minimum spanning trees, and fuzzy measure decomposition. We demonstrate their application to generating fuzzy measures on real and artificial images.
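One way to picture the minimum-spanning-tree idea is the sketch below, which scores a pixel set by the total length of a Euclidean MST over its coordinates. The normalisation used here is an assumption made purely for illustration; it is not the paper's fuzzy measure and is not claimed to satisfy all of the properties listed above.

```python
import numpy as np

def mst_length(coords):
    """Total edge length of a Euclidean minimum spanning tree over pixel
    coordinates (simple Prim's algorithm; fine for small pixel sets)."""
    pts = np.asarray(coords, dtype=float)
    n = len(pts)
    if n < 2:
        return 0.0
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = dist[0].copy()          # cheapest connection of each node to the tree
    total = 0.0
    for _ in range(n - 1):
        j = np.argmin(np.where(in_tree, np.inf, best))
        total += best[j]
        in_tree[j] = True
        best = np.minimum(best, dist[j])
    return total

def compactness_score(coords, scale=1.0):
    """Illustrative compactness score: spatially cohesive pixel sets have
    short spanning trees and score close to 1.  Hypothetical normalisation,
    not the fuzzy measure constructed in the paper."""
    n = len(coords)
    if n <= 1:
        return float(n > 0)
    return n / (n + scale * mst_length(coords))

print(compactness_score([(0, 0), (0, 1), (1, 0), (1, 1)]))   # tight cluster
print(compactness_score([(0, 0), (9, 1), (3, 8), (7, 7)]))   # scattered pixels
```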
Knowledge Based Systems | 2014
Gleb Beliakov; Tomasa Calvo; Tim Wilkin
Monotonicity with respect to all arguments is fundamental to the definition of aggregation functions, which are one of the basic tools in knowledge-based systems. The functions known as means (or averages) are idempotent and typically monotone; however, there are many important classes of means that are non-monotone. Weak monotonicity was recently proposed as a relaxation of the monotonicity condition for averaging functions. In this paper we discuss the concepts of directional and cone monotonicity, and monotonicity with respect to a majority of inputs and coalitions of inputs. We establish the relations between the various kinds of monotonicity and illustrate them with examples. We also provide a construction method for cone monotone functions.
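For reference, the two central notions compare as follows (standard formulations; domain restrictions on the shifted point are kept implicit).

```latex
% Directional monotonicity: f is increasing in the direction of a
% fixed vector u (u \ne 0) if moving the input point along u never
% decreases the value.
\[
  f(\mathbf{x} + c\,\mathbf{u}) \;\ge\; f(\mathbf{x})
  \qquad \text{for all } c > 0
  \text{ with } \mathbf{x} + c\,\mathbf{u} \text{ in the domain}.
\]
% Cone monotonicity asks for this in every direction u belonging to a
% fixed cone C; weak monotonicity corresponds to the single direction
% u = (1, 1, \dots, 1).
```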
Information Sciences | 2015
Gleb Beliakov; Tomasa Calvo; Tim Wilkin
Weak monotonicity was recently proposed as a relaxation of the monotonicity condition for averaging aggregation, and weakly monotone functions were shown to have desirable properties when averaging data corrupted with outliers or noise. We extend the study of weakly monotone averages by analyzing their φ-transforms, and we establish weak monotonicity of several classes of averaging functions, in particular Gini means and mixture operators. Mixture operators with Gaussian weighting functions are shown to be weakly monotone for a broad range of their parameters. This study assists in identifying averaging functions suitable for data analysis and image processing tasks in the presence of outliers.
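The two objects named in this abstract are, in standard notation (assuming φ is a continuous, strictly monotone bijection):

```latex
% phi-transform of an averaging function F:
\[
  F_\varphi(x_1, \dots, x_n) \;=\;
  \varphi^{-1}\!\bigl( F(\varphi(x_1), \dots, \varphi(x_n)) \bigr).
\]
% Gini mean with parameters p \ne q (the Lehmer mean is the case q = p - 1):
\[
  G_{p,q}(x_1, \dots, x_n) \;=\;
  \left( \frac{\sum_{i=1}^{n} x_i^{\,p}}{\sum_{i=1}^{n} x_i^{\,q}} \right)^{\!1/(p-q)} .
\]
```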
IEEE International Conference on Fuzzy Systems | 2014
Gleb Beliakov; Gang Li; Huy Quan Vu; Tim Wilkin
Pixel-scale fine details are often lost during image processing tasks such as image reduction and filtering. Block- or region-based algorithms typically rely on averaging functions to implement the required operation, and traditional function choices struggle to preserve small, spatially cohesive clusters of pixels which may be corrupted by noise. This article proposes the construction of fuzzy measures of cluster compactness to account for the spatial organisation of pixels. We present two construction methods (minimum spanning trees and fuzzy measure decomposition) to generate measures with specific properties: monotonicity with respect to cluster size; invariance with respect to translation, reflection and rotation; and discrimination between pixel sets of fixed cardinality with different spatial arrangements. We apply these measures within a non-monotonic, mode-like averaging function used for image reduction and show that this new function preserves pixel-scale structures better than existing monotone averages.
Pacific Rim International Conference on Artificial Intelligence | 2000
Tim Wilkin; Ann E. Nicholson
Dynamic Belief Networks (DBNs) have been used for the monitoring and control of stochastic dynamical processes where it is crucial to provide a response in real time. DBN transition functions are typically specified as conditional probability distributions over a constant time interval. When these functions are used to model dynamic systems with observations that occur at irregular intervals, both exact and approximate DBN inference algorithms are inefficient. This is because the computation of the posterior distribution at an arbitrary time in the future involves repeated application of the fixed-time transition model. We draw on research from mathematics and theoretical physics showing that the dynamics inherent to a Markov model can be described as a diffusion process. These systems can be modelled using the Fokker-Planck equation, the solutions of which are the transition functions of the system for time intervals of arbitrary length. We show that using these transition functions in a DBN inference algorithm gives significant computational savings compared to the traditional constant time-step model.
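A toy illustration of the computational point, not the paper's inference algorithm: for a pure diffusion process the Fokker-Planck equation yields a Gaussian transition kernel whose variance grows linearly with the elapsed interval, so belief can be propagated over an arbitrary interval in a single step rather than by repeatedly applying a fixed-step transition model. The grid, diffusion coefficient, and step sizes below are assumptions chosen for the example.

```python
import numpy as np

grid = np.linspace(-10.0, 10.0, 201)       # discretised one-dimensional state space
D, dt, steps = 0.5, 0.1, 50                # diffusion coefficient, step size, step count

def gaussian_transition(var):
    """Row-stochastic transition matrix for a Gaussian kernel of variance var."""
    k = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / (2.0 * var))
    return k / k.sum(axis=1, keepdims=True)

# Initial belief concentrated at x = 0 (the middle grid point).
p0 = np.zeros_like(grid)
p0[100] = 1.0

# Constant-interval model applied 'steps' times, as a fixed time-step DBN would.
T_dt = gaussian_transition(2.0 * D * dt)
p_repeated = p0.copy()
for _ in range(steps):
    p_repeated = p_repeated @ T_dt

# Direct transition over the whole interval steps*dt, in one application.
p_direct = p0 @ gaussian_transition(2.0 * D * dt * steps)

print(np.abs(p_repeated - p_direct).max())  # small discretisation error only
```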