Publication


Featured research published by Tomasz Maszczyk.


International Conference on Artificial Intelligence and Soft Computing | 2006

Comparison of Shannon, Renyi and Tsallis Entropy Used in Decision Trees

Tomasz Maszczyk; Włodzisław Duch

The Shannon entropy used in standard top-down decision trees does not guarantee the best generalization. Split criteria based on generalized entropies offer a different compromise between the purity of nodes and the overall information gain. Modified C4.5 decision trees based on Tsallis and Renyi entropies have been tested on several high-dimensional microarray datasets with interesting results. This approach may be used in any decision tree and information selection algorithm.
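
The three entropies compared here have simple closed forms, so a short sketch may help; the formulas below are the standard definitions, and the parameter values alpha and q are illustrative, not taken from the paper.

import numpy as np

def shannon(p):
    """Shannon entropy H = -sum p_i log2 p_i (0 log 0 treated as 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def renyi(p, alpha=2.0):
    """Renyi entropy H_alpha = log2(sum p_i^alpha) / (1 - alpha), alpha != 1."""
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def tsallis(p, q=2.0):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1), q != 1."""
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Class proportions in a candidate tree node, e.g. 70% / 30%:
p = np.array([0.7, 0.3])
print(shannon(p), renyi(p, 2.0), tsallis(p, 2.0))

A split criterion would compare the weighted entropy of child nodes against the parent node, exactly as in C4.5, but with one of the generalized entropies substituted for the Shannon term.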


International Symposium on Neural Networks | 2012

Make it cheap: Learning with O(nd) complexity

Włodzisław Duch; Norbert Jankowski; Tomasz Maszczyk

Classification methods with linear computational complexity O(nd) in the number of samples n and their dimensionality d often give results that are better than, or at least statistically not significantly worse than, those of slower algorithms. This is demonstrated here for many benchmark datasets downloaded from the UCI Machine Learning Repository. The results provided in this paper should be used as a reference for estimating the usefulness of new learning algorithms: higher-complexity methods should provide significantly better results to justify their use.
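
As one concrete example of a method in this complexity class (the model and dataset below are illustrative choices, not necessarily those studied in the paper), a nearest-centroid classifier needs a single pass over the n x d data matrix for training and one distance per class at prediction time.

from sklearn.datasets import load_breast_cancer      # a UCI-style benchmark
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestCentroid

X, y = load_breast_cancer(return_X_y=True)

# Nearest-centroid: training is one pass over the n*d data matrix,
# so both time and memory stay linear in n and d.
clf = NearestCentroid()
print(cross_val_score(clf, X, y, cv=10).mean())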


International Conference on Artificial Neural Networks | 2008

Support Vector Machines for Visualization and Dimensionality Reduction

Tomasz Maszczyk; Włodzisław Duch

Discriminant functions calculated by Support Vector Machines (SVMs) define, in a computationally efficient way, projections of high-dimensional data onto a direction perpendicular to the discriminating hyperplane. These projections may be used to estimate and display posterior probability densities. Additional directions for visualization and dimensionality reduction are created by repeating the linear discrimination process in a space orthogonal to the already defined projections. This process allows for an efficient reduction of dimensionality and visualization of data, while at the same time improving the classification accuracy of a single discriminant function. Visualization of real and artificial data shows that the transformed data may not be separable, and thus linear discrimination will completely fail, but nearest neighbor or rule-based methods in the reduced space may still provide simple and accurate solutions.
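
A minimal sketch of the general idea, assuming a plain linear SVM and an explicit deflation step; the dataset and the exact orthogonalization details are illustrative, not the authors' implementation.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize features

# First direction: normal vector of the linear-SVM discriminating hyperplane.
w1 = LinearSVC(dual=False).fit(X, y).coef_.ravel()
w1 /= np.linalg.norm(w1)

# Remove the w1 component from the data and repeat the discrimination
# in the orthogonal subspace to obtain a second direction.
X_orth = X - np.outer(X @ w1, w1)
w2 = LinearSVC(dual=False).fit(X_orth, y).coef_.ravel()
w2 -= (w2 @ w1) * w1                        # enforce exact orthogonality
w2 /= np.linalg.norm(w2)

# Two-dimensional projection suitable for a scatter plot of the classes.
proj = np.column_stack([X @ w1, X @ w2])
print(proj.shape)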


International Symposium on Neural Networks | 2010

Support Feature Machines: Support vectors are not enough

Tomasz Maszczyk; Włodzisław Duch

Support Vector Machines (SVMs) with various kernels have played a dominant role in machine learning for many years, finding numerous applications. Although they have many attractive features, interpretation of their solutions is quite difficult, the use of a single kernel type may not be appropriate in all areas of the input space, convergence problems for some kernels are not uncommon, and the standard quadratic programming solution has O(m^3) time and O(m^2) space complexity for m training patterns. Kernel methods work because they implicitly provide new, useful features. Such features, derived from various kernels and other vector transformations, may be used directly in any machine learning algorithm, facilitating multiresolution, heterogeneous models of data. Therefore Support Feature Machines (SFMs), based on linear models in the extended feature spaces and enabling control over the selection of support features, give at least as good results as any kernel-based SVM, removing all problems related to interpretation, scaling and convergence. This is demonstrated for a number of benchmark datasets analyzed with linear discrimination, SVMs, decision trees and nearest neighbor methods.
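
A minimal sketch of the underlying idea, assuming Gaussian similarities to training vectors as explicit "support features" and an ordinary linear model in the extended space; the dataset, kernel width and model choice are illustrative, not the paper's setup.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Explicit kernel features: Gaussian similarity of each vector to the
# training vectors; any feature-selection step could be applied here.
Z_tr = np.hstack([X_tr, rbf_kernel(X_tr, X_tr, gamma=0.01)])
Z_te = np.hstack([X_te, rbf_kernel(X_te, X_tr, gamma=0.01)])

# A plain linear model in the extended feature space plays the role of the SFM.
clf = LogisticRegression(max_iter=5000).fit(Z_tr, y_tr)
print(clf.score(Z_te, y_te))

Because the kernel features are explicit, features built from different kernels (or from plain projections) can be mixed freely, which is what makes the multiresolution, heterogeneous models possible.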


Advances in Machine Learning II | 2010

Discovering Data Structures Using Meta-learning, Visualization and Constructive Neural Networks

Tomasz Maszczyk; Marek Grochowski; Włodzisław Duch

Several visualization methods have been used to reveal hidden data structures, facilitating discovery of the simplest data models. Insights gained in this way are used to create constructive neural networks implementing appropriate transformations that provide the simplest models of data. This is an efficient approach to meta-learning, guiding the search for the best models in the space of all data transformations. It can solve problems with a complex inherent logical structure that are very difficult for traditional machine learning algorithms.


International Conference on Neural Information Processing | 2009

Universal Learning Machines

Włodzisław Duch; Tomasz Maszczyk

All existing learning methods have particular biases that make them suitable for specific kinds of problems. A Universal Learning Machine (ULM) should find the simplest data model for arbitrary data distributions. Several ways to create ULMs are outlined, and an algorithm based on the creation of new global and local features combined with meta-learning is introduced. This algorithm is able to find simple solutions that sophisticated algorithms ignore, learn complex Boolean functions and complicated probability distributions, and handle problems requiring multiresolution decision borders.


International Conference on Artificial Neural Networks | 2009

Almost Random Projection Machine

Włodzisław Duch; Tomasz Maszczyk

Backpropagation of errors is not only hard to justify from a biological perspective, but also fails to solve problems requiring complex logic. A simpler algorithm based on the generation and filtering of useful random projections has better biological justification, is faster and easier to train, and may in practice solve non-separable problems of higher complexity than typical feedforward neural networks. Confidence in network decisions is estimated by visualizing the number of nodes that agree with the final decision.
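
A minimal sketch of the generate-and-filter idea under these assumptions: random linear projections scored by a crude class-separation measure, the best ones kept as hidden nodes, and a simple vote as the final decision. The score, the number of projections and the dataset are illustrative, not the paper's choices.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Generate random projection directions and score how well each one
# separates the two classes (a crude usefulness filter).
W = rng.normal(size=(200, X.shape[1]))
scores = []
for w in W:
    z = X @ w
    m0, m1 = z[y == 0].mean(), z[y == 1].mean()
    s = z[y == 0].std() + z[y == 1].std()
    scores.append(abs(m0 - m1) / s)

# Keep only the most useful projections as hidden nodes.
kept = W[np.argsort(scores)[-20:]]

# Each node votes with a threshold halfway between its class means;
# the majority vote of the kept nodes gives the final decision.
votes = []
for w in kept:
    z = X @ w
    m0, m1 = z[y == 0].mean(), z[y == 1].mean()
    votes.append((z > (m0 + m1) / 2) == (m1 > m0))
pred = (np.mean(votes, axis=0) > 0.5).astype(int)
print((pred == y).mean())

The fraction of nodes agreeing with the winning class, computed per vector, is the quantity that can be visualized to indicate confidence.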


International Conference on Digital Signal Processing | 2015

Real-time sociometrics from audio-visual features for two-person dialogs

Yasir Tahir; Debsubhra Chakraborty; Tomasz Maszczyk; Shoko Dauwels; Justin Dauwels; Nadia Magnenat Thalmann; Daniel Thalmann

This paper proposes a real-time sociometric system to analyze social behavior from audio-visual recordings of two-person face-to-face conversations in English. The novelty of the proposed system lies in the automatic inference of ten social indicators in real time. The system comprises a Microsoft Kinect device that captures RGB and depth data to compute visual cues, and microphones that capture speech cues from an ongoing conversation. With these non-verbal cues as features, machine learning algorithms are implemented in the system to extract multiple indicators of social behavior, including empathy, confusion and politeness. The system is trained and tested on two carefully annotated corpora consisting of two-person dialogs. Based on leave-one-out cross-validation tests, the accuracy of the developed algorithms in inferring social behaviors ranges from 50% to 86% for the audio corpus and from 62% to 92% for the audio-visual corpus.
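
The evaluation protocol is straightforward to reproduce in outline; the sketch below shows leave-one-out accuracy estimation, with a placeholder feature matrix and classifier standing in for the paper's actual non-verbal cues and models.

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Placeholder cue matrix: one row per dialog segment, columns would be
# speech and visual statistics; labels stand in for a rated indicator.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))
y = rng.integers(0, 2, size=60)

# Leave-one-out: train on all segments but one, test on the held-out one,
# and average the per-segment accuracies.
acc = cross_val_score(SVC(), X, y, cv=LeaveOneOut()).mean()
print(acc)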


International Conference on Artificial Neural Networks | 2010

Almost random projection machine with margin maximization and kernel features

Tomasz Maszczyk; Włodzisław Duch

The Almost Random Projection Machine (aRPM) is based on the generation and filtering of useful features created by linear projections in the original feature space and in various kernel spaces. Projections may be either random or guided by some heuristics; in both cases they are followed by estimation of the relevance of each generated feature. Final results are in the simplest case obtained using simple voting, but linear discrimination or any other machine learning approach may be used in the extended space of new features. A new feature is added as a hidden node in a constructive network only if it increases the margin of classification, measured by the increase of the aggregated activity of nodes that agree with the final decision. When calculating the margin, more weight is put on vectors that are close to the decision threshold than on those classified with high confidence. Training is replaced by network construction, kernels that provide different resolution may be used at the same time, and difficult problems that require highly complex decision borders may be solved in a simple way. The relation of this approach to Support Vector Machines and Liquid State Machines is discussed.
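
One possible reading of the admission criterion, sketched under the assumption that each training vector is described by the fraction of hidden nodes voting for its true class, with Gaussian weights emphasizing vectors near the decision threshold; the weighting scheme and numbers are illustrative, not taken from the paper.

import numpy as np

def aggregated_margin(agree_fraction, threshold=0.5, width=0.2):
    """Total agreeing activity, with Gaussian weights that emphasize
    vectors whose agreeing fraction lies close to the decision threshold."""
    w = np.exp(-((agree_fraction - threshold) / width) ** 2)
    return np.sum(w * agree_fraction)

def accept_candidate(agree_now, agree_with_candidate):
    """Add the candidate node only if the margin measure increases."""
    return aggregated_margin(agree_with_candidate) > aggregated_margin(agree_now)

# agree_now[i]: fraction of current hidden nodes voting for the true class
# of training vector i; the hypothetical candidate shifts some of these values.
agree_now = np.array([0.45, 0.55, 0.90, 0.20, 0.50])
agree_new = np.array([0.55, 0.65, 0.90, 0.30, 0.60])
print(accept_candidate(agree_now, agree_new))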


International Conference on Neural Information Processing | 2012

Recursive similarity-based algorithm for deep learning

Tomasz Maszczyk; Włodzisław Duch

The Recursive Similarity-Based Learning algorithm (RSBL) follows the deep learning idea, exploiting similarity-based methodology to recursively generate new features. Each transformation layer is generated separately, using information from all previous layers as inputs and similarities to the k nearest neighbors, scaled with Gaussian kernels, as new features. In the feature space created in this way, the results of various types of classifiers, including linear discrimination and distance-based methods, are significantly improved. As an illustrative example, a few non-trivial benchmark datasets from the UCI Machine Learning Repository are analyzed.
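
A minimal sketch of a single recursion step under these assumptions: Gaussian-scaled similarities to the k nearest training vectors are appended to the previous layer's features, and a linear classifier is trained in the extended space. The value of k, the kernel width, the dataset and the classifier are illustrative choices.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
sc = StandardScaler().fit(X_tr)
X_tr, X_te = sc.transform(X_tr), sc.transform(X_te)

def knn_gauss_features(X_ref, X_query, k=5, sigma=2.0):
    """Gaussian-scaled similarities to the k nearest reference vectors."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_ref)
    dist, _ = nn.kneighbors(X_query)
    return np.exp(-(dist / sigma) ** 2)

# One layer of the recursion: previous-layer features plus kNN similarities.
# Deeper models would repeat this step on the extended representation.
Z_tr = np.hstack([X_tr, knn_gauss_features(X_tr, X_tr)])
Z_te = np.hstack([X_te, knn_gauss_features(X_tr, X_te)])

clf = LogisticRegression(max_iter=5000).fit(Z_tr, y_tr)
print(clf.score(Z_te, y_te))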

Collaboration


Dive into Tomasz Maszczyk's collaborations.

Top Co-Authors

Włodzisław Duch, Nicolaus Copernicus University in Toruń
Marek Grochowski, Nicolaus Copernicus University in Toruń
Debsubhra Chakraborty, Nanyang Technological University
Justin Dauwels, Nanyang Technological University
Nadia Magnenat Thalmann, Nanyang Technological University
Yasir Tahir, Nanyang Technological University
Marcin Blachnik, Silesian University of Technology
Norbert Jankowski, Nicolaus Copernicus University in Toruń
Jianmin Zheng, Nanyang Technological University
Shoko Dauwels, Nanyang Technological University