Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Anton Akusok is active.

Publication


Featured research published by Anton Akusok.


IEEE Access | 2015

High-Performance Extreme Learning Machines: A Complete Toolbox for Big Data Applications

Anton Akusok; Kaj-Mikael Björk; Yoan Miche; Amaury Lendasse

This paper presents a complete approach to the successful use of a high-performance extreme learning machines (ELMs) toolbox for Big Data. It summarizes recent advances in algorithmic performance, gives a fresh view of the ELM solution in relation to traditional linear-algebraic approaches, and incorporates the latest software and hardware performance achievements. The results are applicable to a wide range of machine learning problems and thus provide a solid ground for tackling numerous Big Data challenges. The included toolbox aims to bring the full potential of ELMs to the widest range of users.
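At its core, the ELM the toolbox accelerates is a fixed random hidden layer followed by a single least-squares solve for the output weights. A minimal NumPy sketch (illustrative only; the toolbox itself adds the big-data batching and hardware acceleration the abstract mentions):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=50):
    """Basic ELM: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # solve H @ beta ~ Y
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: approximate y = sin(x)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
Y = np.sin(X)
W, b, beta = elm_fit(X, Y)
mse = float(np.mean((elm_predict(X, W, b, beta) - Y) ** 2))
```

Because only the output weights are learned, training reduces to one linear solve, which is what makes ELMs attractive for very large data.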


Neurocomputing | 2016

Extreme learning machine for missing data using multiple imputations

Dušan Sovilj; Emil Eirola; Yoan Miche; Kaj-Mikael Björk; Rui Nian; Anton Akusok; Amaury Lendasse

In this paper, we examine the general regression problem under a missing-data scenario. To provide reliable estimates of the regression function, a novel methodology based on the Gaussian Mixture Model and the Extreme Learning Machine is developed. The Gaussian Mixture Model, adapted to handle missing values, models the data distribution, while the Extreme Learning Machine enables a multiple-imputation strategy for the final estimation. With multiple imputation and an ensemble over many Extreme Learning Machines, the final estimate improves over mean imputation performed only once to complete the data. The proposed methodology has longer running times than simple methods, but the overall increase in accuracy justifies this trade-off.
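The multiple-imputation ensemble can be sketched as: draw several stochastic completions of the data, train one ELM per completion, and average the predictions. In this sketch a per-column Gaussian stands in for the paper's conditional Gaussian Mixture Model; the data and sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm(X, Y, Xq, n_hidden=20):
    """Minimal ELM: random tanh layer + least-squares readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    beta, *_ = np.linalg.lstsq(np.tanh(X @ W + b), Y, rcond=None)
    return np.tanh(Xq @ W + b) @ beta

# Toy data with ~20% of entries missing (NaN)
X_full = rng.normal(size=(300, 3))
y = X_full.sum(axis=1, keepdims=True)
X = X_full.copy()
X[rng.random(X.shape) < 0.2] = np.nan

# Multiple imputation: each draw samples the missing entries from a
# per-column Gaussian (a crude stand-in for the paper's GMM), trains
# its own ELM, and the final estimate is the ensemble average.
mask = np.isnan(X)
mu, sd = np.nanmean(X, axis=0), np.nanstd(X, axis=0)
preds = []
for _ in range(10):
    Xi = X.copy()
    Xi[mask] = (mu + sd * rng.normal(size=X.shape))[mask]
    preds.append(elm(Xi, y, Xi))
y_hat = np.mean(preds, axis=0)
mse = float(np.mean((y_hat - y) ** 2))
```

Averaging over draws reduces the variance that any single imputation introduces, which is the advantage over a one-shot mean imputation.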


Cognitive Computation | 2014

A Two-Stage Methodology using K-NN and False Positive Minimizing ELM for Nominal Data Classification

Anton Akusok; Yoan Miche; Jozsef Hegedus; Rui Nian; Amaury Lendasse

This paper focuses on decision making for nominal data under specific constraints. The goal driving the proposed methodology is a decision-making model that classifies as many samples as possible while avoiding false positives at all costs, within the smallest possible computational time. Under such constraints, one of the best-suited models for the final decision process is the cognitive-inspired extreme learning machine (ELM). A two-stage decision methodology using two types of classifiers, a distance-based one (k-NN) and a cognitive-based one (ELM), provides a fast classification decision on a sample, keeping false positives as low as possible while classifying as many samples as possible (high coverage). The methodology has only two parameters, which set, respectively, the precision of the distance approximation and the final trade-off between false-positive rate and coverage. Experimental results on a dataset provided by F-Secure Corporation show that the methodology yields a rapid decision on new samples, with direct control over the false positives and thus over the decision capabilities of the model.
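The two-stage idea can be sketched as: a k-NN vote decides the easy samples outright, and ambiguous ones fall through to a second classifier with a strict threshold that trades coverage for a low false-positive rate. All data, thresholds, and the linear stage-2 scorer below are illustrative stand-ins, not the paper's actual classifiers.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary problem: two well-separated Gaussian blobs, label 1 = "positive"
X = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

def knn_vote(xq, k=5):
    """Fraction of positive labels among the k nearest training samples."""
    idx = np.argsort(np.sum((X - xq) ** 2, axis=1))[:k]
    return y[idx].mean()

# Stage-2 stand-in: a linear least-squares score with a strict cut-off;
# raising the threshold lowers false positives at the cost of coverage.
w = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y.astype(float), rcond=None)[0]
threshold = 0.9

decisions = []
for xq in X:
    vote = knn_vote(xq)
    if vote in (0.0, 1.0):            # stage 1: unanimous k-NN -> decide now
        decisions.append(int(vote))
    else:                             # stage 2: strict threshold on the score
        decisions.append(int(np.r_[xq, 1.0] @ w > threshold))

false_positives = sum(1 for d, t in zip(decisions, y) if d == 1 and t == 0)
```

The two tunables here (the unanimity rule and the threshold) mirror the paper's two parameters controlling distance precision and the false-positive/coverage trade-off.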


IEEE Computational Intelligence Magazine | 2015

Arbitrary Category Classification of Websites Based on Image Content

Anton Akusok; Yoan Miche; Juha Karhunen; Kaj-Mikael Björk; Rui Nian; Amaury Lendasse

This paper presents a comprehensive methodology for general large-scale image-based classification tasks. It addresses the Big Data challenge in arbitrary image classification and more specifically, filtering of millions of websites with abstract target classes and high levels of label noise. Our approach uses local image features and their color descriptors to build image representations with the help of a modified k-NN algorithm. Image representations are refined into image and website class predictions by a two-stage classifier method suitable for a very large-scale real dataset. A modification of an Extreme Learning Machine is found to be a suitable classifier technique. The methodology is robust to noise and can learn abstract target categories; website classification accuracy surpasses 97% for the most important categories considered in this study.
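One piece of the pipeline, refining per-image predictions into a website-level class, can be sketched by pooling image score vectors per site. The sites and scores below are made up, and simple mean pooling stands in for the paper's two-stage classifier.

```python
import numpy as np

# Hypothetical per-image class-score vectors, grouped by website
image_scores = {
    "site_a": np.array([[0.9, 0.1], [0.8, 0.2], [0.6, 0.4]]),
    "site_b": np.array([[0.2, 0.8], [0.1, 0.9]]),
}

# Website label = argmax of the mean of its images' score vectors
site_label = {site: int(np.argmax(s.mean(axis=0))) for site, s in image_scores.items()}
```

Pooling over many images per site is also what gives the method its robustness to label noise on individual images.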


international conference on artificial neural networks | 2013

Extreme learning machine: a robust modeling technique? yes!

Amaury Lendasse; Anton Akusok; Olli Simula; Francesco Corona; Mark van Heeswijk; Emil Eirola; Yoan Miche

This paper describes the original (basic) Extreme Learning Machine (ELM) and studies properties such as robustness and sensitivity to variable selection. Several extensions of the original ELM are then presented and compared. First, the Tikhonov-Regularized Optimally-Pruned Extreme Learning Machine (TROP-ELM) is summarized as an improvement of the Optimally-Pruned Extreme Learning Machine (OP-ELM), adding an L2 regularization penalty within the OP-ELM. Second, a methodology to linearly ensemble ELMs (ELM-ELM) is presented to improve the performance of the original ELM. These methodologies (TROP-ELM and ELM-ELM) are tested against state-of-the-art methods such as Support Vector Machines and Gaussian Processes, as well as the original ELM and OP-ELM, on ten different datasets. A specific experiment to test the sensitivity of these methodologies to variable selection is also presented.
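A linear ensemble of ELMs can be sketched as: train several independent ELMs, stack their predictions as features, and solve one more least-squares problem for the combination weights. This is a toy illustration; the paper fits the combination on separate data, while training data is reused here for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

def elm_predict(X, Y, Xq, n_hidden=15):
    """One independent ELM: random tanh layer + least-squares readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    beta, *_ = np.linalg.lstsq(np.tanh(X @ W + b), Y, rcond=None)
    return np.tanh(Xq @ W + b) @ beta

# Toy 1-D regression target
X = np.linspace(-3, 3, 150).reshape(-1, 1)
y = np.sinc(X)

# Member predictions become the features of one more least-squares solve
# that learns the linear combination weights.
P = np.hstack([elm_predict(X, y, X) for _ in range(10)])
alpha, *_ = np.linalg.lstsq(P, y, rcond=None)
mse = float(np.mean((P @ alpha - y) ** 2))
```

Since the combination is solved by least squares, the ensemble's fit can never be worse than its best single member on the data it is fitted on.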


Neurocomputing | 2015

SOM-ELM: Self-Organized Clustering using ELM

Yoan Miche; Anton Akusok; David Veganzones; Kaj-Mikael Björk; Eric Séverin; Philippe du Jardin; Maite Termenon; Amaury Lendasse

This paper presents two new clustering techniques based on Extreme Learning Machine (ELM). These clustering techniques can incorporate a priori knowledge (of an expert) to define the optimal structure for the clusters, i.e. the number of points in each cluster. Using ELM, the first proposed clustering problem formulation can be rewritten as a Traveling Salesman Problem and solved by a heuristic optimization method. The second proposed clustering problem formulation includes both a priori knowledge and a self-organization based on a predefined map (or string). The clustering methods are successfully tested on 5 toy examples and 2 real datasets.


Neurocomputing | 2016

ELMVIS+: Fast nonlinear visualization technique based on cosine distance and extreme learning machines

Anton Akusok; Stephen Baek; Yoan Miche; Kaj-Mikael Björk; Rui Nian; Paula Lauren; Amaury Lendasse

This paper presents a fast algorithm and an accelerated toolbox for data visualization. The visualization is stated as an assignment problem between data samples and an equal number of given visualization points. The mapping function is approximated by an Extreme Learning Machine, which provides an error for the current assignment. This work presents a new mathematical formulation of the error function based on cosine similarity. It provides a closed-form equation for the change of error when the assignments of two random samples are exchanged (a swap), and an extreme speed-up over the original method even for a very large corpus like the MNIST Handwritten Digits dataset. The method starts from a random assignment and proceeds as a greedy optimization algorithm, randomly swapping pairs of samples and keeping the swaps that reduce the error. The toolbox reaches a million swaps per second, and thousands of model updates per second for successful swaps, in the GPU implementation, even for very large datasets.
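The greedy swap loop can be sketched in a few lines: propose a random swap of two samples' assigned visualization points, keep it if the error drops, revert otherwise. Here a plain squared-distance error is recomputed in full as a stand-in; the paper's speed comes precisely from the closed-form cosine-similarity update that avoids this recomputation.

```python
import numpy as np

rng = np.random.default_rng(4)

data = rng.normal(size=(50, 2))       # samples to place
vis = rng.normal(size=(50, 2))        # fixed visualization points
perm = rng.permutation(50)            # assignment: sample i -> vis[perm[i]]

def error(p):
    # Squared-distance stand-in for the paper's cosine-similarity ELM error
    return float(np.sum((data - vis[p]) ** 2))

e0 = e = error(perm)
for _ in range(5000):
    i, j = rng.integers(0, 50, size=2)
    perm[[i, j]] = perm[[j, i]]       # propose a swap
    e_new = error(perm)
    if e_new < e:
        e = e_new                     # keep the improving swap
    else:
        perm[[i, j]] = perm[[j, i]]   # revert
```

Because only improving swaps are kept, the error is monotonically non-increasing, and the assignment always remains a valid permutation.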


international work-conference on artificial and natural neural networks | 2015

Extreme Learning Machines for Multiclass Classification: Refining Predictions with Gaussian Mixture Models

Emil Eirola; Andrey Gritsenko; Anton Akusok; Kaj-Mikael Björk; Yoan Miche; Dušan Sovilj; Rui Nian; Bo He; Amaury Lendasse

This paper presents an extension of the well-known Extreme Learning Machines (ELMs). The main goal is to provide probabilities as outputs for multiclass classification problems; such information is more useful in practice than traditional crisp classification outputs. In summary, Gaussian Mixture Models are used as post-processing for ELMs. The proposed global methodology keeps the advantages of ELMs (low computational time and state-of-the-art performance) and adds the ability of Gaussian Mixture Models to deal with probabilities. The methodology is tested on 3 toy examples and 3 real datasets. As a result, the global performance of ELMs is slightly improved and the probability outputs are shown to be accurate and useful in practice.
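The post-processing idea can be sketched as: fit class-conditional densities to the raw ELM scores and convert them to posteriors with Bayes' rule. In this sketch a single Gaussian per class stands in for the paper's Gaussian Mixture Models, and equal class priors are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-class toy data
X = np.vstack([rng.normal(-1.5, 1, (150, 2)), rng.normal(1.5, 1, (150, 2))])
y = np.array([0] * 150 + [1] * 150)

# Minimal ELM trained on +/-1 targets; its raw scores are not probabilities
W = rng.normal(size=(2, 20))
b = rng.normal(size=20)
H = np.tanh(X @ W + b)
beta, *_ = np.linalg.lstsq(H, np.where(y == 1, 1.0, -1.0), rcond=None)
scores = H @ beta

# Post-processing: one Gaussian per class fitted to the scores, then
# Bayes' rule turns the class-conditional likelihoods into posteriors.
def gauss(s, mu, sd):
    return np.exp(-0.5 * ((s - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

params = {c: (scores[y == c].mean(), scores[y == c].std()) for c in (0, 1)}
lik0, lik1 = gauss(scores, *params[0]), gauss(scores, *params[1])
prob1 = lik1 / (lik0 + lik1)          # P(class 1 | score), equal priors
```

The ELM itself is untouched, so the low training cost is preserved; only a cheap density fit on its scores is added.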


Neurocomputing | 2015

MD-ELM

Anton Akusok; David Veganzones; Yoan Miche; Kaj-Mikael Björk; Philippe du Jardin; Eric Séverin; Amaury Lendasse

This paper proposes a methodology for identifying data samples that are likely mislabeled in a c-class classification problem. The methodology relies on the assumption that the generalization error of a model learned from the data decreases if the label of a mislabeled sample is changed to its correct class. The general classification model used in the paper is OP-ELM, which also provides a fast estimate of the generalization error via PRESS leave-one-out. The method is tested on two toy datasets, as well as on real-life datasets, for one of which expert knowledge about the identified potential mislabels has been sought.
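The core test, that flipping a truly wrong label lowers the PRESS leave-one-out error, can be sketched with a plain linear model standing in for OP-ELM. The data and the planted mislabel below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy two-class data with one deliberately mislabeled sample
X = np.vstack([rng.normal(-2, 1, (60, 2)), rng.normal(2, 1, (60, 2))])
y = np.where(np.arange(120) < 60, -1.0, 1.0)
y[0] = 1.0                            # sample 0 gets the wrong label

def press_loo(X, y):
    """PRESS leave-one-out MSE for a linear least-squares model:
    residual_i / (1 - hat_ii), with no per-sample retraining."""
    H = np.c_[X, np.ones(len(X))]
    hat = H @ np.linalg.pinv(H)       # hat (projection) matrix
    r = y - hat @ y
    return float(np.mean((r / (1.0 - np.diag(hat))) ** 2))

base = press_loo(X, y)
y_fixed = y.copy()
y_fixed[0] = -1.0                     # flip the suspect label back
fixed = press_loo(X, y_fixed)
```

The PRESS formula is what makes the search over candidate label flips fast: each evaluation is one closed-form expression rather than n retrainings.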


international joint conference on neural networks | 2016

Combined nonlinear visualization and classification: ELMVIS++C

Andrey Gritsenko; Anton Akusok; Yoan Miche; Kaj-Mikael Björk; Stephen Baek; Amaury Lendasse

This paper presents an improvement of the ELMVIS+ method for fast nonlinear dimensionality reduction. ELMVIS++C adds a supervised learning component to ELMVIS+, which, like the majority of other dimensionality reduction methods, is originally unsupervised. This component prevents samples of the same class from being separated from each other. The importance of the supervised component can be tuned to different levels of influence. Test results on four datasets indicate that the proposed improvement not only maintains the performance of ELMVIS+, but is also extremely beneficial for applications where visualizing the data in relation to the class labels is important.

Collaboration


Anton Akusok's closest collaborators.

Top Co-Authors

Kaj-Mikael Björk
Arcada University of Applied Sciences

Rui Nian
Ocean University of China

Emil Eirola
Arcada University of Applied Sciences