Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Francisco Flórez-Revuelta is active.

Publications


Featured research published by Francisco Flórez-Revuelta.


Expert Systems With Applications | 2012

A review on vision techniques applied to Human Behaviour Analysis for Ambient-Assisted Living

Alexandros Andre Chaaraoui; Pau Climent-Pérez; Francisco Flórez-Revuelta

Human Behaviour Analysis (HBA) is of increasing interest to computer vision and artificial intelligence researchers. Its main application areas, such as video surveillance and Ambient-Assisted Living (AAL), have been in great demand in recent years. This paper provides a review of HBA for AAL and ageing-in-place purposes, focusing especially on vision techniques. First, a clearly defined taxonomy is presented in order to classify the reviewed works, which are then presented following a bottom-up order of abstraction and complexity. At the motion level, pose and gaze estimation as well as basic human movement recognition are covered. Next, the main action and activity recognition approaches are presented with examples of recent research works. Finally, by increasing the degree of semantics and the time interval involved in HBA, the behaviour level is reached. Furthermore, useful tools and datasets are analysed in order to help with initiating projects.


Pattern Recognition Letters | 2013

Silhouette-based human action recognition using sequences of key poses

Alexandros Andre Chaaraoui; Pau Climent-Pérez; Francisco Flórez-Revuelta

In this paper, a human action recognition method is presented in which pose representation is based on the contour points of the human silhouette and actions are learned by making use of sequences of multi-view key poses. Our contribution is twofold. Firstly, our approach achieves state-of-the-art success rates without compromising the speed of the recognition process and therefore showing suitability for online recognition and real-time scenarios. Secondly, dissimilarities among different actors performing the same action are handled by taking into account variations in shape (shifting the test data to the known domain of key poses) and speed (considering inconsistent time scales in the classification). Experimental results on the publicly available Weizmann, MuHAVi and IXMAS datasets return high and stable success rates, achieving, to the best of our knowledge, the best rate so far on the MuHAVi Novel Actor test.
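
To make the silhouette-based pose representation easier to picture, here is a minimal, hypothetical sketch (not the authors' code): the contour points of a binary silhouette are reduced to a fixed-length, scale-normalised radial-distance signature, which can then be matched against a set of learned key poses by nearest-neighbour search. All function names and parameters (e.g. `n_points`) are illustrative assumptions.

```python
import numpy as np

def contour_points(mask):
    """Return (y, x) coordinates of silhouette pixels that touch the background."""
    padded = np.pad(mask.astype(bool), 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    border = mask.astype(bool) & ~interior
    return np.argwhere(border)

def radial_signature(mask, n_points=64):
    """Fixed-length, scale-normalised distances from the centroid to the contour."""
    pts = contour_points(mask).astype(float)
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1)
    angles = np.arctan2(pts[:, 0] - centroid[0], pts[:, 1] - centroid[1])
    order = np.argsort(angles)
    # Resample to a fixed number of points and normalise for scale invariance.
    idx = np.linspace(0, len(pts) - 1, n_points).astype(int)
    sig = d[order][idx]
    return sig / (sig.max() + 1e-9)

def nearest_key_pose(sig, key_poses):
    """Index of the most similar learned key pose (Euclidean distance)."""
    dists = np.linalg.norm(key_poses - sig, axis=1)
    return int(np.argmin(dists))
```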


Expert Systems With Applications | 2014

Evolutionary joint selection to improve human action recognition with RGB-D devices

Alexandros Andre Chaaraoui; José Ramón Padilla-López; Pau Climent-Pérez; Francisco Flórez-Revuelta

Interest in RGB-D devices is increasing due to their low price and the wide range of applications they enable. These devices provide marker-less body pose estimation by means of skeletal data consisting of the 3D positions of body joints, which can be further used for pose, gesture or action recognition. In this work, an evolutionary algorithm is used to determine the optimal subset of skeleton joints, taking into account the topological structure of the skeleton, in order to improve the final success rate. The proposed method has been validated using a state-of-the-art RGB action recognition approach applied to the MSR-Action3D dataset. Results show that the proposed algorithm is able to significantly improve the initial recognition rate and to yield similar or better success rates than state-of-the-art methods.
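
As a rough illustration of the joint-selection idea, the sketch below runs a small genetic algorithm over binary masks of skeleton joints. The fitness function here is a stand-in (leave-one-out 1-NN accuracy on pre-computed joint positions); population size, mutation rate and all names are assumptions, not the published method.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Leave-one-out 1-NN accuracy using only the selected joints.
    X has shape (n_samples, n_joints, 3); y holds action labels."""
    if mask.sum() == 0:
        return 0.0
    feats = X[:, mask.astype(bool), :].reshape(len(X), -1).astype(float)
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(feats - feats[i], axis=1)
        d[i] = np.inf                       # exclude the sample itself
        correct += y[np.argmin(d)] == y[i]
    return correct / len(X)

def evolve_joint_subset(X, y, n_joints, pop_size=20, generations=30, p_mut=0.1):
    pop = rng.integers(0, 2, size=(pop_size, n_joints))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_joints)                # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_joints) < p_mut            # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[np.argmax(scores)]                          # best joint mask found
```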


International Conference on Computer Vision | 2013

Fusion of Skeletal and Silhouette-Based Features for Human Action Recognition with RGB-D Devices

Alexandros Andre Chaaraoui; José Ramón Padilla-López; Francisco Flórez-Revuelta

Since the Microsoft Kinect was released, marker-less body pose estimation has become much easier. Based on 3D skeletal pose information, complex human gestures and actions can be recognised in real time. However, due to tracking errors or occlusions, the obtained information can be noisy. Since RGB-D data are available, the 3D or 2D shape of the person can be used instead; however, depending on the viewpoint and the action to recognise, it might have low discriminative value. In this paper, the combination of body pose estimation and 2D shape is considered in order to provide additional discriminative value and improve human action recognition. Using efficient feature extraction techniques, skeletal and silhouette-based features are obtained which are low-dimensional and can be computed in real time. These two features are then combined by means of feature fusion. The proposed approach is validated using a state-of-the-art learning method and the MSR Action3D dataset as a benchmark. The obtained results show that the fused feature improves recognition rates, outperforming state-of-the-art results in both recognition rate and robustness.
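
Feature-level fusion of the two modalities typically amounts to normalising each descriptor and concatenating them, so that a single classifier sees both. A minimal sketch under that assumption (the weights and names are illustrative, not the paper's exact scheme):

```python
import numpy as np

def l2_normalize(v, eps=1e-9):
    return v / (np.linalg.norm(v) + eps)

def fuse_features(skeleton_feat, silhouette_feat, w_skel=0.5, w_sil=0.5):
    """Normalise each modality so neither dominates, then concatenate
    into a single descriptor for the downstream classifier."""
    return np.concatenate([w_skel * l2_normalize(skeleton_feat),
                           w_sil * l2_normalize(silhouette_feat)])
```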


Sensors | 2014

A vision-based system for intelligent monitoring: human behaviour analysis and privacy by context.

Alexandros Andre Chaaraoui; José Ramón Padilla-López; Francisco Javier Ferrández-Pastor; Mario Nieto-Hidalgo; Francisco Flórez-Revuelta

Due to progress and demographic change, society is facing a crucial challenge related to increased life expectancy and a higher number of people in situations of dependency. As a consequence, there is a significant demand for support systems for personal autonomy. This article outlines the vision@home project, whose goal is to extend independent living at home for elderly and impaired people, providing care and safety services by means of vision-based monitoring. Different kinds of ambient-assisted living services are supported, from the detection of home accidents to telecare services. In this contribution, the specification of the system is presented, and novel contributions are made regarding human behaviour analysis and privacy protection. By means of a multi-view camera setup, people's behaviour is recognised based on human action recognition. For this purpose, a weighted feature fusion scheme is proposed to learn from multiple views. In order to protect the inhabitants' right to privacy when a remote connection occurs, a privacy-by-context method is proposed. The experimental results of the behaviour recognition method show an outstanding performance, as well as support for multi-view scenarios and real-time execution, which are required in order to provide the proposed services.
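
The weighted multi-view fusion can be pictured as combining per-view descriptors with relevance weights before classification. The sketch below is a hypothetical version of that idea; in practice the weights would be learned (e.g. from per-view validation accuracy), and all names are assumptions rather than the system's actual code.

```python
import numpy as np

def fuse_views(view_features, view_weights):
    """Weighted concatenation of per-view descriptors.

    view_features: list of 1-D arrays, one descriptor per camera view.
    view_weights:  per-view relevance weights (e.g. proportional to how
                   well each view recognises actions on validation data).
    """
    w = np.asarray(view_weights, dtype=float)
    w = w / w.sum()                                  # keep the fused scale stable
    parts = [wi * f / (np.linalg.norm(f) + 1e-9)     # normalise, then weight
             for wi, f in zip(w, view_features)]
    return np.concatenate(parts)
```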


Sensors | 2016

From Data Acquisition to Data Fusion: A Comprehensive Review and a Roadmap for the Identification of Activities of Daily Living Using Mobile Devices

Ivan Miguel Pires; Nuno M. Garcia; Nuno Pombo; Francisco Flórez-Revuelta

This paper focuses on the state of the art of sensor fusion techniques applied to the sensors embedded in mobile devices, as a means to help identify the mobile device user's daily activities. Sensor data fusion techniques are used to consolidate the data collected from several sensors, increasing the reliability of the algorithms for the identification of the different activities. However, mobile devices have several constraints, e.g., low memory, low battery life and low processing power, and some data fusion techniques are not suited to this scenario. The main purpose of this paper is to present an overview of the state of the art and to identify examples of sensor data fusion techniques that can be applied to the sensors available in mobile devices, aiming to identify activities of daily living (ADLs).
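
To make the notion of feature-level sensor fusion on a mobile device concrete, here is a small hypothetical sketch: simple statistics are computed per window for the accelerometer, gyroscope and magnetometer, then concatenated into one vector for an ADL classifier. This illustrates the general technique only, not any specific method surveyed in the paper.

```python
import numpy as np

def window_features(signal):
    """Simple statistical features for one sensor window (N samples x 3 axes)."""
    mag = np.linalg.norm(signal, axis=1)            # per-sample magnitude
    return np.array([mag.mean(), mag.std(), mag.min(), mag.max()])

def fuse_sensors(acc, gyro, mag):
    """Feature-level fusion: concatenate per-sensor features into one vector
    that a lightweight on-device classifier can consume."""
    return np.concatenate([window_features(acc),
                           window_features(gyro),
                           window_features(mag)])
```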


Sensors | 2016

Recognition of Activities of Daily Living with Egocentric Vision: A Review

Thi-Hoa-Cuc Nguyen; Jean-Christophe Nebel; Francisco Flórez-Revuelta

Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people. However, current systems based on cameras located in the environment present a number of problems, such as occlusions and a limited field of view. Recently, wearable cameras have begun to be exploited. This paper presents a review of the state of the art of egocentric vision systems for the recognition of ADLs following a hierarchical structure: motion, action and activity levels, where each level provides higher semantic information and involves a longer time frame. The current egocentric vision literature suggests that ADLs recognition is mainly driven by the objects present in the scene, especially those associated with specific tasks. However, although object-based approaches have proven popular, object recognition remains a challenge due to the intra-class variations found in unconstrained scenarios. As a consequence, the performance of current systems is far from satisfactory.


HBU'12 Proceedings of the Third International Conference on Human Behavior Understanding | 2012

An efficient approach for multi-view human action recognition based on bag-of-key-poses

Alexandros Andre Chaaraoui; Pau Climent-Pérez; Francisco Flórez-Revuelta

This paper presents a novel multi-view human action recognition approach based on a bag of key poses. In multi-view scenarios, it is especially difficult to perform accurate action recognition that still runs at an admissible recognition speed. The presented method aims to fill this gap by combining a silhouette-based pose representation with a simple, yet effective multi-view learning approach based on model fusion. Action classification is performed through efficient sequence matching and the comparison of successive key poses, which are evaluated on both feature similarity and match relevance. Experimentation on the MuHAVi dataset shows that the method improves upon previously reported recognition rates and is exceptionally robust to actor variance. Temporal evaluation confirms the method's suitability for real-time recognition.
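
A bag of key poses can be sketched as follows: for each action (and, under model fusion, each view), the training pose descriptors are clustered and the cluster centres become the key poses; a test sequence is then compared against each action's key poses. The clustering routine, parameters and the simplified matching step below are illustrative assumptions, not the published implementation.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means; returns the k cluster centres (the key poses)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def build_bag_of_key_poses(sequences_per_action, k=10):
    """One set of key poses per action class."""
    return {action: kmeans(np.vstack(seqs), k)
            for action, seqs in sequences_per_action.items()}

def classify_sequence(seq, bag):
    """Assign the action whose key poses are, on average, closest to the test
    sequence's poses (a simplification of the sequence matching step)."""
    scores = {action: np.mean([np.linalg.norm(kp - pose, axis=1).min()
                               for pose in seq])
              for action, kp in bag.items()}
    return min(scores, key=scores.get)
```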


IEEE Transactions on Autonomous Mental Development | 2014

Adaptive Human Action Recognition With an Evolving Bag of Key Poses

Alexandros Andre Chaaraoui; Francisco Flórez-Revuelta

Vision-based human action recognition makes it possible to detect and understand meaningful human motion, enabling advanced human-computer interaction, among other applications. In dynamic environments, adaptive methods are required to support changing scenario characteristics. Specifically, in human-robot interaction, smooth interaction between humans and robots can only be achieved if they are able to evolve and adapt to the changing nature of the scenarios. In this paper, an adaptive vision-based human action recognition method is proposed. By means of an evolutionary optimization method, adaptive and incremental learning of human actions is supported. Through an evolving bag of key poses, which models the learned actions over time, the current learning memory is developed to recognize increasingly more actions or actors. The evolutionary method selects the optimal subset of training instances, features and parameter values for each learning phase, and handles the evolution of the model. The experimentation shows that our proposal successfully adapts to new actions or actors by rearranging the learned model. Stable and accurate results have been obtained on four publicly available RGB and RGB-D datasets, demonstrating the method's robustness and applicability.


Engineering Applications of Artificial Intelligence | 2014

Optimizing human action recognition based on a cooperative coevolutionary algorithm

Alexandros Andre Chaaraoui; Francisco Flórez-Revuelta

Vision-based human action recognition is an essential part of human behavior analysis, which is currently in great demand due to its wide range of possible applications. In this paper, an optimization of a human action recognition method based on a cooperative coevolutionary algorithm is proposed. By means of coevolution, three different populations are evolved to obtain the best-performing individuals with respect to instance, feature and parameter selection. The fitness function is based on the result of the human action recognition method. Using a multi-view silhouette-based pose representation and a weighted feature fusion scheme, an efficient feature is obtained which takes into account the multiple views and their relevance. Classification is performed by means of a bag of key poses, which represents the most characteristic pose representations, and matching of sequences of key poses. The experimentation indicates that not only is a considerable performance gain obtained, outperforming the success rates of other state-of-the-art methods, but the temporal and spatial performance of the algorithm is also improved.
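
The cooperative coevolutionary setup can be pictured as three populations (training instances, features, parameters) evolved in parallel, where each individual is evaluated together with the current best collaborators from the other populations. The skeleton below is a heavily simplified, hypothetical outline of that loop: the `evaluate` callback is assumed to wrap the recognition pipeline, the single real-valued parameter (e.g. a distance threshold) and all population sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def coevolve(n_instances, n_features, evaluate,
             pop_size=10, generations=20, p_mut=0.05):
    """Cooperative coevolution over instance masks, feature masks and one
    real-valued parameter.

    evaluate(inst_mask, feat_mask, param) -> recognition accuracy; this is
    the caller's wrapper around the actual action recognition method."""
    pops = {
        "instances": rng.integers(0, 2, size=(pop_size, n_instances)),
        "features":  rng.integers(0, 2, size=(pop_size, n_features)),
        "params":    rng.uniform(0.0, 1.0, size=(pop_size, 1)),
    }
    best = {k: v[0].copy() for k, v in pops.items()}   # current collaborators

    def score(key, individual):
        trial = {**best, key: individual}              # collaborate with the best of the others
        return evaluate(trial["instances"], trial["features"],
                        float(trial["params"][0]))

    for _ in range(generations):
        for key, pop in pops.items():
            fitness = np.array([score(key, ind) for ind in pop])
            order = np.argsort(fitness)[::-1]
            best[key] = pop[order[0]].copy()
            # Regenerate the worse half by mutating the better half.
            survivors = pop[order[: pop_size // 2]]
            mutants = survivors.copy()
            if key == "params":
                mutants = mutants + rng.normal(0, 0.1, size=mutants.shape)
            else:
                flip = rng.random(mutants.shape) < p_mut
                mutants = np.where(flip, 1 - mutants, mutants)
            pops[key] = np.vstack([survivors, mutants])
    return best
```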

Collaboration


Dive into Francisco Flórez-Revuelta's collaborations.

Top Co-Authors

Ivan Miguel Pires, University of Beira Interior
Nuno M. Garcia, University of Beira Interior
Nuno Pombo, University of Beira Interior
Susanna Spinsante, Marche Polytechnic University