Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alexandros Andre Chaaraoui is active.

Publication


Featured research published by Alexandros Andre Chaaraoui.


Pattern Recognition Letters | 2013

Silhouette-based human action recognition using sequences of key poses

Alexandros Andre Chaaraoui; Pau Climent-Pérez; Francisco Flórez-Revuelta

In this paper, a human action recognition method is presented in which pose representation is based on the contour points of the human silhouette and actions are learned by making use of sequences of multi-view key poses. Our contribution is twofold. Firstly, our approach achieves state-of-the-art success rates without compromising the speed of the recognition process, thereby showing its suitability for online recognition and real-time scenarios. Secondly, dissimilarities among different actors performing the same action are handled by taking into account variations in shape (shifting the test data to the known domain of key poses) and speed (considering inconsistent time scales in the classification). Experimental results on the publicly available Weizmann, MuHAVi and IXMAS datasets return high and stable success rates, achieving, to the best of our knowledge, the best rate so far on the MuHAVi Novel Actor test.
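
The pipeline described in this abstract (contour-based pose representation, key poses, sequence matching) can be illustrated with a short sketch. The code below is a simplified outline under stated assumptions, not the published implementation: the function names are hypothetical, the majority vote stands in for the paper's sequence matching of key poses, and silhouette contours are assumed to be available from an earlier segmentation step.

```python
# Minimal sketch (not the authors' exact implementation) of a contour-based
# pose descriptor and key-pose sequence classification. Silhouette contours
# are assumed to be arrays of (x, y) points, e.g. from background subtraction.
import numpy as np

def contour_descriptor(contour, n_points=32):
    """Resample a contour to a fixed number of points and normalise it for
    translation and scale invariance."""
    contour = np.asarray(contour, dtype=float)
    idx = np.linspace(0, len(contour) - 1, n_points).astype(int)
    pts = contour[idx] - contour.mean(axis=0)          # centre on the centroid
    scale = np.linalg.norm(pts, axis=1).max()
    return (pts / scale if scale > 0 else pts).ravel()

def nearest_key_pose(descriptor, key_poses):
    """Index of the closest learned key pose (one key pose per row)."""
    return int(np.argmin(np.linalg.norm(key_poses - descriptor, axis=1)))

def classify_sequence(frame_descriptors, key_poses, key_pose_labels):
    """Majority vote over per-frame key-pose labels; a simple stand-in for
    the paper's sequence matching of key poses."""
    votes = [key_pose_labels[nearest_key_pose(d, key_poses)]
             for d in frame_descriptors]
    return max(set(votes), key=votes.count)
```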


Expert Systems With Applications | 2014

Evolutionary joint selection to improve human action recognition with RGB-D devices

Alexandros Andre Chaaraoui; José Ramón Padilla-López; Pau Climent-Pérez; Francisco Flórez-Revuelta

Interest in RGB-D devices is increasing due to their low price and the wide range of applications they enable. These devices provide marker-less body pose estimation by means of skeletal data consisting of the 3D positions of body joints, which can be further used for pose, gesture or action recognition. In this work, an evolutionary algorithm is used to determine the optimal subset of skeleton joints, taking into account the topological structure of the skeleton, in order to improve the final success rate. The proposed method has been validated using a state-of-the-art RGB action recognition approach applied to the MSR-Action3D dataset. Results show that the proposed algorithm is able to significantly improve the initial recognition rate and to yield similar or better success rates than state-of-the-art methods.
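
As a rough illustration of evolutionary joint selection, the sketch below runs a plain genetic algorithm over binary masks of skeleton joints. It is a hedged outline, not the paper's algorithm: the fitness function `evaluate` is a placeholder for the cross-validated recognition rate of the full pipeline, and the population size, selection and mutation settings are arbitrary assumptions.

```python
# Hedged sketch of evolutionary joint selection with a simple genetic algorithm.
# `evaluate(mask)` must be supplied by the caller and should return a score
# (e.g. recognition rate) for the joint subset encoded by the binary mask.
import numpy as np

def evolve_joint_mask(n_joints, evaluate, pop_size=20, generations=50,
                      mutation_rate=0.05, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_joints))   # random joint subsets
    for _ in range(generations):
        fitness = np.array([evaluate(mask) for mask in pop])
        order = np.argsort(fitness)[::-1]                  # best individuals first
        parents = pop[order[: pop_size // 2]]              # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, n_joints))           # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_joints) < mutation_rate    # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents] + children)
    fitness = np.array([evaluate(mask) for mask in pop])
    return pop[int(np.argmax(fitness))]                    # best joint subset found
```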


International Conference on Computer Vision | 2013

Fusion of Skeletal and Silhouette-Based Features for Human Action Recognition with RGB-D Devices

Alexandros Andre Chaaraoui; José Ramón Padilla-López; Francisco Flórez-Revuelta

Since the release of the Microsoft Kinect, marker-less body pose estimation has become far more accessible. Based on 3D skeletal pose information, complex human gestures and actions can be recognised in real time. However, due to tracking errors or occlusions, the obtained information can be noisy. Since RGB-D data is available, the 3D or 2D shape of the person can be used instead; however, depending on the viewpoint and the action to recognise, it may have low discriminative value. In this paper, the combination of body pose estimation and 2D shape is considered in order to provide additional discriminative value and improve human action recognition. Using efficient feature extraction techniques, low-dimensional skeletal and silhouette-based features are obtained in real time. These two features are then combined by means of feature fusion. The proposed approach is validated using a state-of-the-art learning method and the MSR Action3D dataset as benchmark. The obtained results show that the fused feature improves recognition rates, outperforming state-of-the-art results in both accuracy and robustness.
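
A generic illustration of feature-level fusion is given below: each modality is L2-normalised before concatenation so that neither dominates the fused descriptor. This is a sketch of the general idea only; the paper's exact fusion scheme may differ, and the function names are assumptions.

```python
# Illustrative feature-level fusion of skeletal and silhouette descriptors:
# normalise each modality, then concatenate. Not the paper's exact scheme.
import numpy as np

def l2_normalise(v):
    v = np.asarray(v, dtype=float)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def fuse_features(skeletal_feature, silhouette_feature):
    """Concatenate the normalised skeletal and silhouette descriptors."""
    return np.concatenate([l2_normalise(skeletal_feature),
                           l2_normalise(silhouette_feature)])
```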


Sensors | 2014

A vision-based system for intelligent monitoring: human behaviour analysis and privacy by context

Alexandros Andre Chaaraoui; José Ramón Padilla-López; Francisco Javier Ferrández-Pastor; Mario Nieto-Hidalgo; Francisco Flórez-Revuelta

Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there exists a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view camera setup, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the inhabitants' right to privacy when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services.
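
The weighted fusion of multiple views mentioned above can be sketched as follows. This is a hedged outline under assumptions: one descriptor per camera view, and a relevance weight per view whose origin (for example, per-view validation accuracy) is not specified here.

```python
# Sketch of weighted multi-view feature fusion: scale each view's descriptor by
# its normalised relevance weight and concatenate. The weights themselves are an
# assumption of this sketch.
import numpy as np

def weighted_view_fusion(view_features, view_weights):
    """Combine per-view descriptors into one weighted, concatenated feature."""
    weights = np.asarray(view_weights, dtype=float)
    weights = weights / weights.sum()                      # normalise to sum to 1
    return np.concatenate([w * np.asarray(f, dtype=float)
                           for w, f in zip(weights, view_features)])
```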


HBU'12 Proceedings of the Third International Conference on Human Behavior Understanding | 2012

An efficient approach for multi-view human action recognition based on bag-of-key-poses

Alexandros Andre Chaaraoui; Pau Climent-Pérez; Francisco Flórez-Revuelta

This paper presents a novel multi-view human action recognition approach based on a bag of key poses. In multi-view scenarios it is especially difficult to perform accurate action recognition that still runs at an admissible speed. The presented method aims to fill this gap by combining a silhouette-based pose representation with a simple yet effective multi-view learning approach based on Model Fusion. Action classification is performed through efficient sequence matching and the comparison of successive key poses, which are evaluated on both feature similarity and match relevance. Experimentation on the MuHAVi dataset shows that the method outperforms currently available recognition rates and is exceptionally robust to actor variance. Temporal evaluation confirms the method's suitability for real-time recognition.
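
To make the bag-of-key-poses idea concrete, the sketch below builds a set of key poses by clustering per-frame pose descriptors with a minimal k-means loop. It only illustrates the general concept; the paper's Model Fusion learning across views and its sequence matching are not reproduced, and the function name and parameters are assumptions.

```python
# Minimal k-means-style construction of a bag of key poses from per-frame pose
# descriptors (one row per frame). A conceptual sketch, not the paper's method.
import numpy as np

def learn_key_poses(descriptors, k, iterations=20, seed=0):
    """Cluster pose descriptors into k key poses (the cluster centres)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(descriptors, dtype=float)
    centres = X[rng.choice(len(X), size=k, replace=False)]   # requires len(X) >= k
    for _ in range(iterations):
        # assign every descriptor to its nearest key pose
        distances = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        for j in range(k):                 # move each centre to its members' mean
            members = X[labels == j]
            if len(members) > 0:
                centres[j] = members.mean(axis=0)
    return centres
```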


IEEE Transactions on Autonomous Mental Development | 2014

Adaptive Human Action Recognition With an Evolving Bag of Key Poses

Alexandros Andre Chaaraoui; Francisco Flórez-Revuelta

Vision-based human action recognition makes it possible to detect and understand meaningful human motion, enabling advanced human-computer interaction among other applications. In dynamic environments, adaptive methods are required to support changing scenario characteristics. Specifically, in human-robot interaction, smooth interaction between humans and robots can only be achieved if the robots are able to evolve and adapt to the changing nature of the scenarios. In this paper, an adaptive vision-based human action recognition method is proposed. By means of an evolutionary optimization method, adaptive and incremental learning of human actions is supported. Through an evolving bag of key poses, which models the learned actions over time, the current learning memory is developed to recognize increasingly more actions or actors. The evolutionary method selects the optimal subset of training instances, features and parameter values for each learning phase, and handles the evolution of the model. The experimentation shows that our proposal successfully adapts to new actions or actors by rearranging the learned model. Stable and accurate results have been obtained on four publicly available RGB and RGB-D datasets, showing the method's robustness and applicability.


Engineering Applications of Artificial Intelligence | 2014

Optimizing human action recognition based on a cooperative coevolutionary algorithm

Alexandros Andre Chaaraoui; Francisco Flórez-Revuelta

Vision-based human action recognition is an essential part of human behavior analysis, which is currently in great demand due to its wide range of possible applications. In this paper, an optimization of a human action recognition method based on a cooperative coevolutionary algorithm is proposed. By means of coevolution, three different populations are evolved to obtain the best performing individuals with respect to instance, feature and parameter selection. The fitness function is based on the result of the human action recognition method. Using a multi-view silhouette-based pose representation and a weighted feature fusion scheme, an efficient feature is obtained which takes into account the multiple views and their relevance. Classification is performed by means of a bag of key poses, which represents the most characteristic pose representations, and matching of sequences of key poses. The performed experimentation indicates that not only is a considerable performance gain obtained, outperforming the success rates of other state-of-the-art methods, but the temporal and spatial performance of the algorithm is also improved.
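
The cooperative coevolutionary structure (three populations for instance, feature and parameter selection) can be outlined as below. This is a hedged sketch under assumptions: `recognition_rate` stands in for the full recognition pipeline, individuals are evaluated together with the current best collaborators from the other populations, and the selection and mutation scheme is kept deliberately simple.

```python
# Hedged outline of cooperative coevolution with three populations. Not the
# paper's algorithm; fitness evaluation, mutation and selection are simplified.
import numpy as np

def coevolve(recognition_rate, n_instances, n_features, param_bounds,
             pop_size=10, generations=30, seed=0):
    rng = np.random.default_rng(seed)
    param_bounds = np.asarray(param_bounds, dtype=float)     # shape (n_params, 2)
    pops = {
        "instances": rng.integers(0, 2, (pop_size, n_instances)),
        "features": rng.integers(0, 2, (pop_size, n_features)),
        "params": rng.uniform(param_bounds[:, 0], param_bounds[:, 1],
                              (pop_size, len(param_bounds))),
    }
    best = {name: pop[0].copy() for name, pop in pops.items()}
    for _ in range(generations):
        for name, pop in pops.items():
            # evaluate each individual with the best collaborators so far
            scores = []
            for individual in pop:
                collaborators = dict(best)
                collaborators[name] = individual
                scores.append(recognition_rate(collaborators["instances"],
                                               collaborators["features"],
                                               collaborators["params"]))
            order = np.argsort(scores)[::-1]
            best[name] = pop[order[0]].copy()
            survivors = pop[order[: pop_size // 2]]
            mutated = survivors.copy()
            mask = rng.random(survivors.shape) < 0.1          # mutation sites
            if name == "params":
                mutated = mutated + mask * rng.normal(0.0, 0.1, survivors.shape)
            else:
                mutated[mask] ^= 1                            # flip selection bits
            pops[name] = np.vstack([survivors, mutated])
    return best                                               # best individuals per population
```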


Sensors | 2015

Visual Privacy by Context: Proposal and Evaluation of a Level-Based Visualisation Scheme

José Ramón Padilla-López; Alexandros Andre Chaaraoui; Feng Gu; Francisco Flórez-Revuelta

Privacy in image and video data has become an important subject since cameras are being installed in an increasing number of public and private spaces. Specifically, in assisted living, intelligent monitoring based on computer vision can provide risk detection and support services that increase people's autonomy at home. In the present work, a level-based visualisation scheme is proposed to provide visual privacy when human intervention is necessary, such as in telerehabilitation and safety assessment applications. Visualisation levels are dynamically selected based on the previously modelled context. In this way, different levels of protection can be provided while maintaining the intelligibility required by the applications. Furthermore, a case study of a living room, where a top-view camera is installed, is presented. Finally, the performed survey-based evaluation indicates the degree of protection provided by the different visualisation models, as well as the personal privacy preferences and valuations of the users.
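
A toy example of selecting a visualisation level from a modelled context is shown below. It is purely illustrative and in the spirit of the abstract: the context attributes, level names and rules are assumptions of this sketch, not the scheme evaluated in the paper.

```python
# Illustrative mapping from a modelled context to a visualisation level.
# Level names and context keys are hypothetical.

VISUALISATION_LEVELS = ["raw_video", "blur", "silhouette", "avatar", "invisible"]

def select_visualisation_level(context):
    """Map a simple context dictionary to a protection level."""
    if context.get("emergency"):                # caregivers may need the raw image
        return "raw_video"
    if context.get("activity") == "bathing":    # most sensitive situation: hide
        return "invisible"
    if context.get("observer") == "relative":   # trusted observer, mild protection
        return "blur"
    return "silhouette"                         # default: shape without identity

# Example: a remote telecare operator observing an everyday activity
print(select_visualisation_level({"observer": "operator", "activity": "cooking"}))
```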


Genetic and Evolutionary Computation Conference | 2013

Human action recognition optimization based on evolutionary feature subset selection

Alexandros Andre Chaaraoui; Francisco Flórez-Revuelta

Human action recognition constitutes a core component of advanced human behavior analysis. The detection and recognition of basic human motion makes it possible to analyze and understand human activities and to react proactively, providing different kinds of services ranging from human-computer interaction to health care assistance. In this paper, a feature-level optimization for human action recognition is proposed. The resulting recognition rate and computational cost are significantly improved by means of a low-dimensional radial summary feature and evolutionary feature subset selection. The introduced feature is computed using only the contour points of human silhouettes, which are spatially aligned based on a radial scheme. This definition proves well suited to feature subset selection, since different parts of the human body can be selected by choosing the appropriate feature elements. The best selection is sought using a genetic algorithm. Experimentation has been performed on the publicly available MuHAVi dataset. Promising results are shown, since state-of-the-art recognition rates are considerably outperformed at a highly reduced computational cost.
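
A radial summary of silhouette contour points can be sketched as follows: points are grouped into angular bins around the centroid and each bin is summarised by its mean radial distance. The bin count and the use of the mean are assumptions of this sketch rather than the paper's exact definition; a genetic algorithm (as in the previous sketch) would then search over subsets of the resulting bins.

```python
# Sketch of a radial summary feature over silhouette contour points.
import numpy as np

def radial_summary_feature(contour, n_bins=18):
    pts = np.asarray(contour, dtype=float)
    centred = pts - pts.mean(axis=0)                      # centre on the centroid
    angles = np.arctan2(centred[:, 1], centred[:, 0])     # angle of each point
    radii = np.linalg.norm(centred, axis=1)               # distance to centroid
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    feature = np.zeros(n_bins)
    for b in range(n_bins):
        in_bin = bins == b
        if in_bin.any():
            feature[b] = radii[in_bin].mean()              # mean radius per bin
    max_radius = radii.max()
    return feature / max_radius if max_radius > 0 else feature   # scale invariance
```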


International Scholarly Research Notices | 2014

A Low-Dimensional Radial Silhouette-Based Feature for Fast Human Action Recognition Fusing Multiple Views

Alexandros Andre Chaaraoui; Francisco Flórez-Revuelta

This paper presents a novel silhouette-based feature for vision-based human action recognition, which relies on the contour of the silhouette and a radial scheme. Its low dimensionality and ease of extraction make it well suited to real-time scenarios. This feature is used in a learning algorithm that, by means of model fusion of multiple camera streams, builds a bag of key poses, which serves as a dictionary of known poses and allows the training sequences to be converted into sequences of key poses. These are then used to perform action recognition by means of a sequence matching algorithm. Experimentation on three different datasets returns high and stable recognition rates. To the best of our knowledge, this paper presents the highest results so far on the MuHAVi-MAS dataset. Real-time suitability is achieved, since the method easily performs above video frame rate. Therefore, the requirements imposed by applications such as ambient-assisted living services are successfully fulfilled.

Collaboration


Dive into Alexandros Andre Chaaraoui's collaborations.

Top Co-Authors

Martin Kampel

Vienna University of Technology

Rainer Planinc

Vienna University of Technology
