Publication


Featured research published by Michele Donini.


Neurocomputing | 2015

EasyMKL: A scalable multiple kernel learning algorithm

Fabio Aiolli; Michele Donini

The goal of Multiple Kernel Learning (MKL) is to combine kernels derived from multiple sources in a data-driven way, with the aim of enhancing the accuracy of a target kernel machine. State-of-the-art MKL methods have the drawback that the time required to solve the associated optimization problem grows (typically more than linearly) with the number of kernels to combine. Moreover, it has been empirically observed that even sophisticated methods often do not significantly outperform a simple average of the kernels. In this paper, we propose a time- and space-efficient MKL algorithm that can easily cope with hundreds of thousands of kernels and more. The proposed method is compared with simple baselines (random, average, etc.) and three state-of-the-art MKL methods, showing that our approach is often superior. We show empirically that the advantage of the proposed method becomes even clearer when noisy features are added. Finally, we analyze how the algorithm's performance changes with the number of examples in the training set and the number of kernels combined.
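
As a concrete illustration of the combination being learned, here is a minimal Python sketch: a convex combination of base Gram matrices with data-driven weights. The alignment-based weighting is a stand-in heuristic chosen for brevity, not the actual EasyMKL optimization (which maximizes a KOMD-style margin), and the toy data and names are invented for the example.

```python
# A minimal sketch of the core MKL idea: combine base kernels into a
# single kernel via a data-driven convex combination. The alignment-based
# weighting below is an illustrative heuristic, not EasyMKL's optimizer.
import numpy as np

def combine_kernels(kernels, y):
    """kernels: list of (n, n) Gram matrices; y: labels in {-1, +1}."""
    yyT = np.outer(y, y)                     # ideal target kernel
    weights = np.array([np.sum(K * yyT) / np.linalg.norm(K)  # alignment
                        for K in kernels])
    weights = np.clip(weights, 0.0, None)
    weights /= weights.sum()                 # convex combination
    return sum(w * K for w, K in zip(weights, kernels)), weights

# Usage: three toy RBF kernels at different widths on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
y = np.sign(X[:, 0])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
base = [np.exp(-g * sq) for g in (0.1, 1.0, 10.0)]
K, w = combine_kernels(base, y)
print("learned weights:", np.round(w, 3))
```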


IEEE Symposium Series on Computational Intelligence | 2015

Multiple Graph-Kernel Learning

Fabio Aiolli; Michele Donini; Nicolò Navarin; Alessandro Sperduti

Kernels for structures, including graphs, generally suffer from the diagonally dominant Gram matrix issue: the number of sub-structures, or features, that two distinct instances share is very small compared with the number an instance shares with itself. A parametric rule is typically used to reduce the weights of the largest (most complex) sub-structures. The particular rule adopted is in fact a strong external bias that may heavily affect the resulting predictive performance; in principle, it should therefore be validated together with the other hyper-parameters of the kernel. Nevertheless, for the majority of graph kernels proposed in the literature, the parameters of the weighting rule are fixed a priori. The contribution of this paper is two-fold. First, we propose a Multiple Kernel Learning (MKL) approach to learn different weights for different groups of features, where the groups are formed by complexity. Second, we define a notion of kernel complexity, namely Kernel Spectral Complexity, and show how this complexity relates to the well-known Empirical Rademacher Complexity for a natural class of functions that includes SVMs. The proposed approach is applied to a recently defined graph kernel and evaluated on several real-world datasets. The results show that our approach outperforms the original kernel on all the considered tasks.
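
To make the first contribution concrete, the following hedged sketch splits an explicit structural feature map into complexity-grouped "bunches" and builds one Gram matrix per bunch; an MKL solver (e.g., EasyMKL above) would then learn one weight per bunch instead of fixing a weighting rule a priori. The depth assignment and toy counts below are assumptions for illustration.

```python
# Sketch: group substructure features by complexity level and build one
# linear Gram matrix per group; the per-group weights would be learned
# by an MKL method rather than fixed by a parametric rule.
import numpy as np

def per_complexity_kernels(Phi, depths):
    """Phi: (n, d) explicit feature map; depths[j]: complexity of feature j."""
    kernels = []
    for level in sorted(set(depths)):
        cols = [j for j, dep in enumerate(depths) if dep == level]
        B = Phi[:, cols]
        kernels.append(B @ B.T)              # kernel restricted to this bunch
    return kernels

# Usage: 30 toy "graphs", 12 substructure counts, complexity levels 1-3.
rng = np.random.default_rng(1)
Phi = rng.poisson(1.0, size=(30, 12)).astype(float)
depths = [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
for level, K in zip((1, 2, 3), per_complexity_kernels(Phi, depths)):
    print(f"depth {level}: Gram {K.shape}, trace {K.trace():.1f}")
```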


International Symposium on Neural Networks | 2016

Distributed Variance Regularized Multitask Learning

Michele Donini; David Martínez-Rego; Martin Goodson; John Shawe-Taylor; Massimiliano Pontil

Past research on Multitask Learning (MTL) has focused mainly on devising adequate regularizers and less on their scalability. In this paper, we present a method to scale up MTL methods that penalize the variance of the task weight vectors. The method builds upon the alternating direction method of multipliers (ADMM) to decouple the variance regularizer. It can be efficiently implemented by a distributed algorithm in which the tasks are first solved independently and subsequently corrected to pool information from the other tasks. We show that the method works well in practice and converges in a few distributed iterations. Furthermore, we empirically observe that the number of iterations is nearly independent of the number of tasks, yielding a computational gain of O(T) over standard solvers. We also present experiments on a large URL classification dataset, which is challenging both in the number of data points and in dimensionality. Our results confirm that MTL can obtain superior performance over either learning a common model or independent task learning.
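
A simplified sketch of the decoupling idea follows: each iteration solves every task independently with a ridge pull toward the current mean weight vector, then pools information by recomputing the mean. This alternating scheme mirrors the ADMM splitting only in spirit; the paper's exact updates differ, and the data here are synthetic.

```python
# Sketch of variance-regularized MTL via alternating decoupled updates:
# per-task solves (parallelizable) followed by a pooling/correction step.
import numpy as np

def variance_regularized_mtl(tasks, lam=1.0, iters=20):
    """tasks: list of (X_t, y_t) pairs; returns a (T, d) weight matrix."""
    d = tasks[0][0].shape[1]
    W = np.zeros((len(tasks), d))
    w_bar = np.zeros(d)
    for _ in range(iters):
        for t, (X, y) in enumerate(tasks):   # independent task solves
            A = X.T @ X + lam * np.eye(d)
            W[t] = np.linalg.solve(A, X.T @ y + lam * w_bar)
        w_bar = W.mean(axis=0)               # pool information across tasks
    return W

# Usage: three related linear tasks sharing a common underlying vector.
rng = np.random.default_rng(2)
w_true = rng.normal(size=5)
tasks = []
for _ in range(3):
    X = rng.normal(size=(50, 5))
    tasks.append((X, X @ (w_true + 0.1 * rng.normal(size=5))))
W = variance_regularized_mtl(tasks)
print("distance of each task's weights to the shared vector:",
      np.round(np.linalg.norm(W - w_true, axis=1), 3))
```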


International Conference on Artificial Neural Networks | 2014

Learning Anisotropic RBF Kernels

Fabio Aiolli; Michele Donini

We present an approach for learning an anisotropic RBF kernel in a game-theoretic setting, where the value of the game is the degree of separation between positive and negative training examples. The method extends a previously proposed method (KOMD) to perform feature re-weighting and distance metric learning in a kernel-based classification setting. Experiments on several benchmark datasets demonstrate that our method generally outperforms state-of-the-art distance metric learning methods, including the Large Margin Nearest Neighbor Classification family of methods.
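
For concreteness, the object being learned is the kernel k(x, x') = exp(-sum_j beta_j (x_j - x'_j)^2), with one nonnegative bandwidth per feature. The sketch below only evaluates the kernel for a given beta and shows why re-weighting matters; the game-theoretic learning of beta is omitted, and the toy weights are assumptions.

```python
# Evaluate an anisotropic RBF kernel with per-feature weights beta.
import numpy as np

def anisotropic_rbf(X, Z, beta):
    """X: (n, d), Z: (m, d), beta: (d,) nonnegative per-feature weights."""
    diff = X[:, None, :] - Z[None, :, :]              # (n, m, d) differences
    return np.exp(-(beta * diff ** 2).sum(axis=-1))   # weighted sq. distance

# Usage: up-weighting the one informative feature sharpens separation.
rng = np.random.default_rng(3)
X = rng.normal(size=(20, 4))
y = np.sign(X[:, 0])                                  # only feature 0 matters
same = np.equal.outer(y, y)
for beta in (np.ones(4), np.array([10.0, 0.1, 0.1, 0.1])):
    K = anisotropic_rbf(X, X, beta)
    print(f"beta={beta}: within-class {K[same].mean():.3f}, "
          f"between-class {K[~same].mean():.3f}")
```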


Archive | 2014

Stacked Models for Efficient Annotation of Brain Tissues in MR Volumes

Fabio Aiolli; Michele Donini; Enea Poletti; Enrico Grisan

Magnetic resonance imaging (MRI) allows the acquisition of high-resolution images of the brain. The diagnosis of various brain illnesses is supported by the separate analysis of the different kinds of brain tissue, which implies their segmentation and classification. Brain MRI is organized in volumes composed of millions of voxels (at least 65,536 per slice, for at least 50 slices); hence the problem of labeling brain tissue classes when composing atlases and ground-truth references, which are needed for training and validating the machine-learning methods employed for brain segmentation. We propose a stacking classification scheme that does not require any additional anatomical information to identify the three classes: gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). We employ two different MR sequences: fluid-attenuated inversion recovery (FLAIR) and double inversion recovery (DIR). The former highlights both GM and WM, while the latter highlights GM alone. Features are extracted by a local multi-scale texture analysis computed for each pixel of the DIR and FLAIR sequences. The nine textures considered are average, standard deviation, kurtosis, entropy, contrast, correlation, energy, homogeneity, and skewness, evaluated on neighborhoods of 3x3, 5x5, and 7x7 pixels. A stacked classifier is proposed that exploits the a priori knowledge about the DIR and FLAIR features. Results show a significant improvement in classification performance with respect to using all the features in a single state-of-the-art classifier.
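
A hedged sketch of a stacking scheme in this spirit: one base classifier per MR sequence (DIR features, FLAIR features), then a meta-classifier trained on their out-of-fold class probabilities. The texture features are replaced with random placeholders and logistic regression stands in for the paper's base learners, so this shows the architecture, not the published pipeline.

```python
# Two per-sequence base models -> out-of-fold probabilities -> meta model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
n = 300
y = rng.integers(0, 3, size=n)                  # 0=GM, 1=WM, 2=CSF
X_dir = rng.normal(size=(n, 27)) + y[:, None]   # placeholder: 9 textures x 3 scales
X_flair = rng.normal(size=(n, 27)) + y[:, None]

# Out-of-fold class probabilities from each per-sequence base model.
meta_features = np.hstack([
    cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                      cv=5, method="predict_proba")
    for X in (X_dir, X_flair)
])

# Meta-classifier stacked on the base models' probabilities.
meta = LogisticRegression(max_iter=1000).fit(meta_features, y)
print("stacked training accuracy:", round(meta.score(meta_features, y), 3))
```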


International Conference on Mobile and Ubiquitous Systems: Networking and Services | 2014

ClimbTheWorld: real-time stairstep counting to increase physical activity

Fabio Aiolli; Matteo Ciman; Michele Donini; Ombretta Gaggi


Archive | 2015

Feature and kernel learning

Michele Donini; Fabio Aiolli


European Symposium on Artificial Neural Networks | 2014

Easy multiple kernel learning

Fabio Aiolli; Michele Donini


International Conference on Machine Learning | 2017

Forward and Reverse Gradient-Based Hyperparameter Optimization

Luca Franceschi; Michele Donini; Paolo Frasconi; Massimiliano Pontil


European Symposium on Artificial Neural Networks | 2016

Measuring the expressivity of graph kernels through the Rademacher complexity

Luca Oneto; Nicolò Navarin; Michele Donini; Alessandro Sperduti; Fabio Aiolli; Davide Anguita

Collaboration


Explore Michele Donini's collaborations.

Top Co-Authors

Luca Franceschi

Istituto Italiano di Tecnologia
