Rodrigo Cilla
Charles III University of Madrid
Publications
Featured research published by Rodrigo Cilla.
Neurocomputing | 2012
Rodrigo Cilla; Miguel A. Patricio; Antonio Berlanga; José M. Molina
This paper presents a distributed system for the recognition of human actions using views of the scene grabbed by different cameras. 2D frame descriptors are extracted for each available view to capture the variability in human motion. These descriptors are projected into a lower dimensional space and fed into a probabilistic classifier to output a posterior distribution of the action performed according to the descriptor computed at each camera. Classifier fusion algorithms are then used to merge the posterior distributions into a single distribution. The generated single posterior distribution is fed into a sequence classifier to make the final decision on the performed activity. The system can instantiate different algorithms for the different tasks, as the interfaces between modules are clearly defined. Results on the classification of the actions in the IXMAS dataset are reported. The accuracy of the proposed system is similar to state-of-the-art 3D methods, even though it uses only well-known 2D pattern recognition techniques and does not need to project the data into a 3D space or require camera calibration parameters.
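A minimal sketch of the per-camera pipeline described above, assuming synthetic data and using scikit-learn components (PCA, logistic regression, posterior averaging) as stand-ins for the paper's interchangeable modules; all names and dimensions are illustrative, not the authors' code:

```python
# Sketch of: per-view 2D descriptors -> dimensionality reduction ->
# probabilistic classifier -> fusion of per-camera posteriors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_cameras, n_frames, dim, n_actions = 4, 200, 100, 11

# Synthetic 2D frame descriptors for each camera view and their action labels.
descriptors = [rng.normal(size=(n_frames, dim)) for _ in range(n_cameras)]
labels = rng.integers(0, n_actions, size=n_frames)

posteriors = []
for X in descriptors:
    # Project each view's descriptors into a lower-dimensional space.
    Z = PCA(n_components=20).fit_transform(X)
    # Probabilistic classifier outputs a per-frame posterior over actions.
    clf = LogisticRegression(max_iter=1000).fit(Z, labels)
    posteriors.append(clf.predict_proba(Z))

# Classifier fusion: here a simple average of the per-camera posteriors;
# the paper's interfaces allow plugging in other fusion algorithms here.
fused = np.mean(posteriors, axis=0)

# The fused per-frame posteriors would then feed a sequence classifier
# to make the final decision on the performed activity.
print(fused.shape)  # (n_frames, n_actions)
```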
Expert Systems | 2014
Rodrigo Cilla; Miguel A. Patricio; Antonio Berlanga; José M. Molina
Employing multiple camera viewpoints in the recognition of human actions increases performance. This paper presents a feature fusion approach to efficiently combine 2D observations extracted from different camera viewpoints. Multiple-view dimensionality reduction is employed to learn a common parameterization of the 2D action descriptors computed for each of the available viewpoints. Canonical correlation analysis and its variants are employed to obtain such parameterizations. A sparse sequence classifier based on L1 regularization is proposed to avoid the problem of having to choose the proper number of dimensions of the common parameterization. The proposed system is employed in the classification of the INRIA Xmas Motion Acquisition Sequences (IXMAS) data set with successful results.
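An illustrative sketch of this idea under assumed data: canonical correlation analysis learns a common parameterization of two views' descriptors, and an L1-regularized classifier downweights uninformative canonical dimensions rather than requiring the number of dimensions to be fixed in advance. All names and dimensions are assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_samples, dim = 300, 60
shared = rng.normal(size=(n_samples, 5))  # latent action content seen by both views
view_a = shared @ rng.normal(size=(5, dim)) + 0.1 * rng.normal(size=(n_samples, dim))
view_b = shared @ rng.normal(size=(5, dim)) + 0.1 * rng.normal(size=(n_samples, dim))
labels = rng.integers(0, 11, size=n_samples)

# Learn a common parameterization of the two views' descriptors.
cca = CCA(n_components=10).fit(view_a, view_b)
za, zb = cca.transform(view_a, view_b)
fused = np.hstack([za, zb])

# L1 regularization lets the classifier ignore uninformative canonical
# dimensions, sidestepping the choice of the "right" number of components.
clf = LogisticRegression(penalty="l1", solver="saga", max_iter=2000).fit(fused, labels)
print((np.abs(clf.coef_) > 1e-6).sum(), "non-zero weights")
```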
Algorithms | 2009
Rodrigo Cilla; Miguel A. Patricio; Jesús Caja García; Antonio Berlanga; José Manuel Molina López
In this paper a method for selecting features for Human Activity Recognition from sensors is presented. Starting from a large feature set that contains features that may describe the activities to be recognized, Best First Search and Genetic Algorithms are employed to select the feature subset that maximizes the accuracy of a Hidden Markov Model generated from that subset. A comparison of the proposed techniques is presented to demonstrate their performance in building Hidden Markov Models that classify different human activities using video sensors.
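A short sketch of the wrapper-style search described here, assuming synthetic data: a greedy forward search (a simplified relative of best-first search) keeps the feature subset that maximizes a classifier's cross-validated accuracy. For brevity a logistic regression stands in for the paper's Hidden Markov Models; the scorer could be swapped for per-class HMM likelihoods:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))            # candidate feature set
y = (X[:, 3] + X[:, 7] > 0).astype(int)   # only two features actually matter

def subset_score(features):
    # Wrapper criterion: cross-validated accuracy of a model on the subset.
    return cross_val_score(LogisticRegression(max_iter=1000),
                           X[:, features], y, cv=5).mean()

selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
while remaining:
    scores = {f: subset_score(selected + [f]) for f in remaining}
    f_best = max(scores, key=scores.get)
    if scores[f_best] <= best_score:      # stop when accuracy stops improving
        break
    best_score = scores[f_best]
    selected.append(f_best)
    remaining.remove(f_best)

print("selected features:", selected, "accuracy:", round(best_score, 3))
```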
Hybrid Artificial Intelligence Systems | 2010
Rodrigo Cilla; Miguel A. Patricio; Antonio Berlanga; José M. Molina
This paper presents two different classifier fusion algorithms applied in the domain of Human Action Recognition from video. A set of cameras observes a person performing an action from a predefined set. For each camera view, a 2D descriptor is computed and a posterior on the performed activity is obtained using a soft classifier. These posteriors are combined using voting and a Bayesian network to obtain a single belief measure used for the final decision on the performed action. Experiments are conducted with different low-level frame descriptors on the IXMAS dataset, achieving results comparable to state-of-the-art 3D proposals while performing only 2D processing.
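A toy illustration, with assumed numbers rather than the paper's data, of two ways to combine per-camera posteriors: hard majority voting, and a naive product of posteriors that approximates a Bayesian combiner under a conditional-independence assumption:

```python
import numpy as np

# Posteriors over 3 actions from 3 camera views for one observed sequence.
posteriors = np.array([
    [0.60, 0.30, 0.10],
    [0.20, 0.50, 0.30],
    [0.55, 0.35, 0.10],
])

# Majority voting on each camera's most probable action.
votes = posteriors.argmax(axis=1)
vote_decision = np.bincount(votes, minlength=3).argmax()

# Product-of-posteriors fusion, renormalized to a single belief measure.
product = posteriors.prod(axis=0)
product /= product.sum()

print("voting decision:", vote_decision)
print("fused belief:", product.round(3), "decision:", product.argmax())
```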
Applied Sciences on Biomedical and Communication Technologies | 2009
Rodrigo Cilla; Miguel A. Patricio; Antonio Berlanga; José M. Molina
Ambient Intelligence systems need to know what the users are doing. In this paper, an architecture for Human Activity Recognition using a Visual Sensor Network is proposed. The video sequence perceived by each camera is locally processed to obtain a local activity label. These activity labels are fused by an upper tier to obtain a global activity label. The activities recognized by the system are not specified a priori; they are discovered using automatic model selection techniques. Then, an expert has to label the discovered activities to give them a semantic meaning. Results of applying the activity discovery procedure to a smart home dataset are shown.
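A hedged sketch of the activity discovery step: the number of activities is not fixed a priori but chosen by automatic model selection, here a BIC score over Gaussian mixture models. The data and the specific criterion are assumptions for illustration; the paper does not prescribe this exact model:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Synthetic per-segment features, generated from 3 underlying activities.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 4)) for c in (0.0, 2.0, 4.0)])

# Fit candidate models and keep the one with the lowest BIC.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 8)}
best_k = min(models, key=lambda k: models[k].bic(X))

# Each discovered cluster is an unlabeled activity; an expert would then
# attach a semantic label ("cooking", "sleeping", ...) to each one.
print("discovered activities:", best_k)
```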
International Work-Conference on the Interplay Between Natural and Artificial Computation | 2011
Rodrigo Cilla; Miguel A. Patricio; Antonio Berlanga; José M. Molina
This paper presents a feature fusion approach to the recognition of human actions from multiple cameras that avoids the computation of the 3D visual hull. Action descriptors are extracted for each one of the camera views available and projected into a common subspace that maximizes the correlation between each one of the components of the projections. That common subspace is learned using Probabilistic Canonical Correlation Analysis. The action classification is made in that subspace using a discriminative classifier. Results of the proposed method are shown for the classification of the IXMAS dataset.
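A generative sketch of the model assumed by probabilistic CCA: both camera views are noisy linear images of one shared latent action representation. Dimensions, noise levels, and variable names are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(6)
d_latent, d_view, n = 5, 40, 200

W1 = rng.normal(size=(d_view, d_latent))   # view-specific loading matrices
W2 = rng.normal(size=(d_view, d_latent))

z = rng.normal(size=(n, d_latent))         # shared latent action subspace
x1 = z @ W1.T + 0.1 * rng.normal(size=(n, d_view))   # descriptors from camera 1
x2 = z @ W2.T + 0.1 * rng.normal(size=(n, d_view))   # descriptors from camera 2

# Fitting probabilistic CCA (e.g. by EM) recovers estimates of W1, W2 and the
# latent z; a discriminative classifier then labels actions in the z subspace.
print(x1.shape, x2.shape)
```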
Distributed Computing and Artificial Intelligence | 2009
Rodrigo Cilla; Miguel A. Patricio; Antonio Berlanga; José M. Molina
In this paper, we address the problem of human activity classification from videos, with special emphasis on feature extraction and good feature selection. Due to the drop in camera costs over recent years, these kinds of systems are becoming popular because of their wide range of applications. Taking the output of a video blob tracker, a feature extraction process is defined to compute an extensive feature set, which is then filtered to select the best features present. Three different types of classifiers are trained with the resulting feature set and results are shown.
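An illustrative sketch of this pipeline with assumed feature definitions (not the paper's exact set): derive simple features from a blob tracker's bounding boxes, filter them with a univariate criterion, and train several classifier types on the resulting feature set:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
# Blob tracker output per frame: (x, y, width, height), plus activity labels.
boxes = rng.uniform(1, 100, size=(500, 4))
labels = rng.integers(0, 3, size=500)

# Feature extraction: centroid, aspect ratio, area, and centroid speed.
cx, cy = boxes[:, 0] + boxes[:, 2] / 2, boxes[:, 1] + boxes[:, 3] / 2
speed = np.hypot(np.gradient(cx), np.gradient(cy))
features = np.column_stack([cx, cy, boxes[:, 2] / boxes[:, 3],
                            boxes[:, 2] * boxes[:, 3], speed])

# Filter step: keep the most discriminative features before training.
selected = SelectKBest(f_classif, k=3).fit_transform(features, labels)

# Train three different classifier types on the selected features.
for clf in (LogisticRegression(max_iter=1000), KNeighborsClassifier(),
            DecisionTreeClassifier()):
    acc = cross_val_score(clf, selected, labels, cv=5).mean()
    print(type(clf).__name__, round(acc, 3))
```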
International Work-Conference on the Interplay Between Natural and Artificial Computation | 2013
Rodrigo Cilla; Miguel A. Patricio; Antonio Berlanga; José M. Molina
Human motion analysis methods have received increasing attention during the last two decades. In parallel, data fusion technologies have emerged as a powerful tool for the estimation of properties of objects in the real world. This paper presents a view of human motion analysis from the perspective of data fusion. The JDL process model and Dasarathy's input-output hierarchy are employed to categorize the works in the area. A survey of the literature on human motion analysis from multiple cameras is included. Future research directions in the area are identified after this review.
Hybrid Artificial Intelligence Systems | 2013
Rodrigo Cilla; Miguel A. Patricio; Antonio Berlanga; José M. Molina
Sequence classification is an important problem in computer vision, speech analysis and computational biology. This paper presents a new training strategy for the Hidden Conditional Random Field sequence classifier that incorporates model and feature selection. The standard Lasso regularization employed in the estimation of model parameters is replaced by overlapping group-L1 regularization. Depending on the configuration of the overlapping groups, model selection, feature selection, or both are performed. The sequence classifiers trained in this way have better predictive performance. The application of the proposed method to a human action recognition task confirms this.
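A minimal numpy sketch of the overlapping group-L1 idea, under assumed group definitions: parameters are arranged as a feature-by-hidden-state matrix, with one group per row (feature selection) and one per column (model/state selection), so the penalty can zero out whole features, whole states, or both:

```python
import numpy as np

rng = np.random.default_rng(5)
W = rng.normal(size=(8, 4))   # feature-by-hidden-state parameter block

feature_groups = [W[i, :] for i in range(W.shape[0])]   # one group per feature
state_groups = [W[:, j] for j in range(W.shape[1])]     # one group per hidden state

def group_l1(groups, weight=1.0):
    # Sum of L2 norms of the groups: drives entire groups to zero jointly.
    return weight * sum(np.linalg.norm(g) for g in groups)

# Every parameter belongs to both a feature group and a state group, which is
# what makes the groups overlap.
penalty = group_l1(feature_groups) + group_l1(state_groups)
print(round(penalty, 3))
```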
Hybrid Artificial Intelligence Systems | 2011
Rodrigo Cilla; Miguel A. Patricio; Antonio Berlanga; José M. Molina
This paper presents a human action recognition system that decomposes the task into two subtasks. First, a view-independent classifier, shared among the multiple views to be analyzed, is applied to obtain an initial guess of the posterior distribution over the performed action. Then, this posterior distribution is combined with view-based knowledge to improve the action classification. This allows the view-independent component to be reused when a new view has to be analyzed, requiring only the view-dependent knowledge to be specified. An example of the application of the system in a smart home domain is discussed.
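A toy sketch, with assumed numbers, of the two-stage scheme: the shared view-independent classifier produces an initial posterior over actions, which is then reweighted by view-dependent knowledge for the camera actually used; only the second factor changes when a new view is added:

```python
import numpy as np

actions = ["sit", "walk", "wave"]

# Initial guess from the shared, view-independent classifier.
view_independent = np.array([0.5, 0.3, 0.2])

# View-dependent knowledge for one camera, e.g. how reliably each action is
# observed from this viewpoint (illustrative values).
view_likelihood = np.array([0.2, 0.7, 0.1])

# Combine the two and renormalize to obtain the refined posterior.
combined = view_independent * view_likelihood
combined /= combined.sum()
print(dict(zip(actions, combined.round(3))))
```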