
Publication


Featured research published by Alexandros Iosifidis.


IEEE Transactions on Neural Networks | 2012

View-Invariant Action Recognition Based on Artificial Neural Networks

Alexandros Iosifidis; Anastasios Tefas; Ioannis Pitas

In this paper, a novel view-invariant action recognition method based on neural network representation and recognition is proposed. The novel representation of action videos is based on learning spatially related human body posture prototypes using self-organizing maps. Fuzzy distances from the human body posture prototypes are used to produce a time-invariant action representation. Multilayer perceptrons are used for action classification. The algorithm is trained using data from a multi-camera setup, and an arbitrary number of cameras can be used to recognize actions within a Bayesian framework. The proposed method can also be applied, without modification, to videos depicting interactions between humans. The use of information captured from different viewing angles leads to high classification performance. The proposed method is the first to be tested in such challenging experimental setups, which demonstrates its ability to address most of the open issues in action recognition.
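
The fuzzy-distance representation above can be sketched in a few lines. The prototypes here are random stand-ins (the paper learns them with a self-organizing map), and the fuzzification parameter m=2 follows the common fuzzy c-means convention:

```python
import numpy as np

# Toy body-posture vectors and K learned prototypes (random here; the paper
# learns them with a self-organizing map).
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(5, 16))   # K=5 prototypes, 16-D posture vectors
posture = rng.normal(size=16)

# Fuzzy membership of the posture to each prototype: inverse distances raised
# to 2/(m-1), normalized to sum to one (fuzzification parameter m=2).
m = 2.0
d = np.linalg.norm(prototypes - posture, axis=1)
u = d ** (-2.0 / (m - 1.0))
u /= u.sum()

# Averaging such membership vectors over all frames of a video yields a
# duration-invariant action representation.
```
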


IEEE Transactions on Circuits and Systems for Video Technology | 2013

Minimum Class Variance Extreme Learning Machine for Human Action Recognition

Alexandros Iosifidis; Anastasios Tefas; Ioannis Pitas

In this paper, we propose a novel method for view-independent human action recognition. Action description is based on local shape and motion information appearing at spatiotemporal locations of interest in a video. Action representation involves fuzzy vector quantization, while action classification is performed by a feedforward neural network. A novel classification algorithm, called the minimum class variance extreme learning machine, is proposed to enhance action classification performance. The proposed method can successfully operate in situations that may appear in real application scenarios, since it makes no assumptions concerning the visual scene background or the camera view angle. Experimental results on five publicly available databases, targeting different application scenarios, demonstrate the effectiveness of both the adopted action recognition approach and the proposed minimum class variance extreme learning machine algorithm.
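
For readers unfamiliar with extreme learning machines, a minimal sketch of plain ELM training (random hidden layer, analytically solved output weights) on toy data follows; the minimum class variance extension modifies the output-weight solution and is not shown here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data with one-hot targets.
X = rng.normal(size=(40, 4))
y = (X[:, 0] > 0).astype(int)
T = np.eye(2)[y]

# ELM: a random hidden layer that is never trained...
L = 50                                # hidden neurons
W = rng.normal(size=(4, L))           # random input weights
b = rng.normal(size=L)
H = np.tanh(X @ W + b)                # hidden-layer outputs

# ...and output weights solved analytically (ridge-regularized least squares).
c = 1e-2
beta = np.linalg.solve(H.T @ H + c * np.eye(L), H.T @ T)

pred = np.argmax(H @ beta, axis=1)
train_acc = (pred == y).mean()
```
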


Computer Vision and Image Understanding | 2012

Multi-view human movement recognition based on fuzzy distances and linear discriminant analysis

Alexandros Iosifidis; Anastasios Tefas; Nikolaos Nikolaidis; Ioannis Pitas

In this paper, a novel multi-view human movement recognition method is presented. A novel representation of multi-view human movement videos is proposed, based on learning basic multi-view human movement primitives called multi-view dynemes. The movement video is represented in a new feature space (called the dyneme space) using these multi-view dynemes, producing a time-invariant multi-view movement representation. Fuzzy distances from the multi-view dynemes are used to represent the human body postures in the dyneme space. Three variants of Linear Discriminant Analysis (LDA) are evaluated to achieve a discriminant movement representation in a low-dimensional space. The view identification problem is solved either by using a circular block shift procedure followed by evaluation of the minimum Euclidean distance from any dyneme, or by exploiting the circular shift invariance property of the Discrete Fourier Transform (DFT). The discriminant movement representation, combined with camera viewpoint identification and a nearest centroid classification step, leads to high human movement classification accuracy.
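
The DFT property mentioned above is easy to verify numerically: a circular shift changes only the phase of the DFT coefficients, so their magnitudes form a shift-invariant descriptor. The sequence below is made up for illustration:

```python
import numpy as np

# A sequence of fuzzy distances and a circularly shifted copy of it.
x = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 0.5])
x_shifted = np.roll(x, 2)

# A circular shift multiplies each DFT coefficient by a unit-modulus phase
# factor, so the magnitudes are identical for both sequences.
mag = np.abs(np.fft.fft(x))
mag_shifted = np.abs(np.fft.fft(x_shifted))
same = np.allclose(mag, mag_shifted)
```
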


IEEE Transactions on Systems, Man, and Cybernetics | 2016

Graph Embedded Extreme Learning Machine

Alexandros Iosifidis; Anastasios Tefas; Ioannis Pitas

In this paper, we propose a novel extension of the extreme learning machine (ELM) algorithm for single-hidden-layer feedforward neural network training that is able to incorporate subspace learning (SL) criteria into the optimization process followed for calculating the network's output weights. The proposed graph embedded ELM (GEELM) algorithm naturally exploits both intrinsic and penalty SL criteria that have been (or will be) designed under the graph embedding framework. In addition, we extend the proposed GEELM algorithm so that it can exploit SL criteria in arbitrary (even infinite) dimensional ELM spaces. We evaluate the proposed approach on eight standard classification problems and nine publicly available datasets designed for three problems related to human behavior analysis, i.e., the recognition of human face, facial expression, and activity. Experimental results demonstrate the effectiveness of the proposed approach, since it outperforms other ELM-based classification schemes in all cases.


Pattern Recognition Letters | 2015

On the kernel Extreme Learning Machine classifier

Alexandros Iosifidis; Anastasios Tefas; Ioannis Pitas

In this paper, we discuss the connection of the kernel versions of the ELM classifier with infinite single-hidden-layer feedforward neural networks, and show that the original ELM kernel definition can be adopted for calculating the ELM kernel matrix for two of the most common activation functions, i.e., the RBF and sigmoid functions. In addition, we show that a low-rank decomposition of the kernel matrix defined on the input training data can be exploited to determine an appropriate ELM space for input data mapping. The ELM space determined by this process can subsequently be used for network training using the original ELM formulation. Experimental results show that adopting the low-rank decomposition-based ELM space determination leads to enhanced performance compared to the standard choice, i.e., random input weight generation.
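
A kernel ELM classifier can be sketched as kernel regularized least squares on one-hot targets; the RBF kernel and the toy data below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int)   # non-linear classes
T = np.eye(2)[y]

# Kernel ELM: replace the hidden-layer Gram matrix H H^T with a kernel matrix.
def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)
c = 1e-2
alpha = np.linalg.solve(K + c * np.eye(len(X)), T)    # dual output weights

# Predictions for the training points themselves.
pred = np.argmax(K @ alpha, axis=1)
kacc = (pred == y).mean()
```
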


Pattern Recognition Letters | 2014

Discriminant Bag of Words based representation for human action recognition

Alexandros Iosifidis; Anastasios Tefas; Ioannis Pitas

Highlights: human action recognition based on a Bag of Words representation; discriminant codebook learning for better action class discrimination; a unified framework for determining both the optimized codebook and the linear data projections.

In this paper, we propose a novel framework for human action recognition based on the Bag of Words (BoWs) action representation that unifies discriminative codebook generation and discriminant subspace learning. The proposed framework can naturally incorporate several (linear or non-linear) discrimination criteria for a discriminant BoWs-based action representation. An iterative optimization scheme is proposed for sequential discriminant BoWs-based action representation and codebook adaptation, based on action discrimination in a reduced-dimensionality feature space where action classes are better discriminated. Experiments on five publicly available data sets targeting different application scenarios demonstrate that the proposed unified approach increases the codebook's discriminative ability, providing enhanced action classification performance.
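
A minimal sketch of building a soft-assignment BoWs representation from local descriptors, with a random stand-in codebook (the paper learns the codebook discriminatively, which is not shown here):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy local descriptors from one video and a codebook of K codewords.
descriptors = rng.normal(size=(100, 8))
codebook = rng.normal(size=(6, 8))          # K = 6 codewords

# Soft assignment: each descriptor contributes to every codeword with a
# weight decaying with squared distance; per-descriptor weights are
# normalized, then averaged into the video-level histogram.
d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
w = np.exp(-d2)
w /= w.sum(axis=1, keepdims=True)
bow = w.mean(axis=0)                        # the BoWs action representation
```
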


IEEE Transactions on Information Forensics and Security | 2012

Activity-Based Person Identification Using Fuzzy Representation and Discriminant Learning

Alexandros Iosifidis; Anastasios Tefas; Ioannis Pitas

In this paper, a novel view invariant person identification method based on human activity information is proposed. Unlike most methods proposed in the literature, in which “walk” (i.e., gait) is assumed to be the only activity exploited for person identification, we incorporate several activities in order to identify a person. A multicamera setup is used to capture the human body from different viewing angles. Fuzzy vector quantization and linear discriminant analysis are exploited in order to provide a discriminant activity representation. Person identification, activity recognition, and viewing angle specification results are obtained for all the available cameras independently. By properly combining these results, a view-invariant activity-independent person identification method is obtained. The proposed approach has been tested in challenging problem setups, simulating real application situations. Experimental results are very promising.
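
Assuming the camera views are conditionally independent, one way to combine the per-camera identification results is to multiply the class posteriors and renormalize; the probability table below is made up for illustration:

```python
import numpy as np

# Per-camera class probabilities for one action instance
# (3 cameras, 4 candidate person identities; values are illustrative).
P = np.array([[0.50, 0.20, 0.20, 0.10],
              [0.40, 0.30, 0.20, 0.10],
              [0.60, 0.15, 0.15, 0.10]])

# Fuse by multiplying posteriors across cameras and renormalizing.
fused = P.prod(axis=0)
fused /= fused.sum()
person = int(np.argmax(fused))
```
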


Neurocomputing | 2014

Regularized extreme learning machine for multi-view semi-supervised action recognition

Alexandros Iosifidis; Anastasios Tefas; Ioannis Pitas

In this paper, three novel classification algorithms for (semi-)supervised action classification are proposed. Inspired by the effectiveness of discriminant subspace learning techniques and the fast and efficient Extreme Learning Machine (ELM) algorithm for single-hidden-layer feedforward neural network training, the ELM algorithm is extended by incorporating discrimination criteria into its optimization process in order to enhance its classification performance. The proposed Discriminant ELM algorithm is further extended, by incorporating proper regularization into its optimization process, to exploit information in both labeled and unlabeled action instances. An iterative optimization scheme is proposed to address multi-view action classification. The proposed classification algorithms are evaluated on three publicly available action recognition databases, providing state-of-the-art performance in all cases.


Pattern Recognition Letters | 2013

Dynamic action recognition based on dynemes and Extreme Learning Machine

Alexandros Iosifidis; Anastasios Tefas; Ioannis Pitas

In this paper, we propose a novel method that performs dynamic action classification by exploiting the effectiveness of the Extreme Learning Machine (ELM) algorithm for single-hidden-layer feedforward neural network training. It involves data grouping and ELM-based data projection at multiple levels. Given a test action instance, a neural network is trained using the labeled action instances forming the groups that reside in the test sample's neighborhood. The action instances involved in this procedure are subsequently mapped to a new feature space determined by the trained network's outputs. This procedure is repeated a number of times, determined by the test action instance at hand, until only a single class is retained. Experimental results demonstrate the effectiveness of the dynamic classification approach compared to the static one, as well as the effectiveness of ELM in the proposed dynamic classification setting.


Signal Processing | 2013

Multi-view action recognition based on action volumes, fuzzy distances and cluster discriminant analysis

Alexandros Iosifidis; Anastasios Tefas; Ioannis Pitas

In this paper, we present a view-independent action recognition method exploiting a volumetric action representation of low computational cost. Binary images depicting the human body during action execution are accumulated to produce so-called action volumes. A novel time-invariant action representation is obtained by exploiting the circular shift invariance property of the magnitudes of the Discrete Fourier Transform coefficients. The similarity of an action volume to representative action volumes is exploited to map it to a lower-dimensional feature space that preserves the action class properties. Discriminant learning is subsequently employed for further dimensionality reduction and action class discrimination. With such an action representation, the proposed approach performs fast action recognition. By combining action recognition results from different view angles, high recognition rates are obtained. The proposed method is extended to interaction recognition, i.e., human action recognition involving two persons. The proposed approach is evaluated on a publicly available action recognition database using experimental settings that simulate situations which may appear in real-life applications, as well as on a new nutrition support action recognition database.
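
The volumetric representation can be sketched as follows: binary silhouettes are summed into an action volume, and the magnitudes of its 2-D DFT give a descriptor invariant to circular shifts of the silhouette position. The random silhouettes below are stand-ins for real segmentation masks:

```python
import numpy as np

rng = np.random.default_rng(5)

# Binary silhouette images over T frames, accumulated into an action volume.
frames = (rng.random(size=(12, 8, 8)) > 0.5).astype(float)   # T=12, 8x8 images
volume = frames.sum(axis=0)                                   # accumulation

# Time/position-invariant descriptor: 2-D DFT magnitudes are unchanged by
# circular shifts of the volume in either image axis.
feat = np.abs(np.fft.fft2(volume)).ravel()
shifted = np.roll(volume, (2, 3), axis=(0, 1))
feat_shifted = np.abs(np.fft.fft2(shifted)).ravel()
matches = np.allclose(feat, feat_shifted)
```
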

Collaboration


Dive into Alexandros Iosifidis's collaborations.

Top Co-Authors

Anastasios Tefas, Aristotle University of Thessaloniki
Ioannis Pitas, Aristotle University of Thessaloniki
Moncef Gabbouj, Tampere University of Technology
Juho Kanniainen, Tampere University of Technology
Vasileios Mygdalis, Aristotle University of Thessaloniki
Dat Thanh Tran, Tampere University of Technology
Nikolaos Passalis, Aristotle University of Thessaloniki
Nikos Nikolaidis, Aristotle University of Thessaloniki
Guanqun Cao, Tampere University of Technology