Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Miguel A. Patricio is active.

Publication


Featured research published by Miguel A. Patricio.


EURASIP Journal on Advances in Signal Processing | 2007

Multi-agent framework in visual sensor networks

Miguel A. Patricio; Javier Carbó; Óscar Pérez; Jesús García; José M. Molina

The recent interest in the surveillance of public, military, and commercial scenarios is increasing the need to develop and deploy intelligent and/or automated distributed visual surveillance systems. Many applications based on distributed resources use the so-called software agent technology. In this paper, a multi-agent framework is applied to coordinate videocamera-based surveillance. The ability to coordinate agents improves the global image and task distribution efficiency. In our proposal, a software agent is embedded in each camera and controls the capture parameters. Then coordination is based on the exchange of high-level messages among agents. Agents use an internal symbolic model to interpret the current situation from the messages from all other agents to improve global coordination.


Expert Systems With Applications | 2011

Ontology-based context representation and reasoning for object tracking and scene interpretation in video

Juan Gómez-Romero; Miguel A. Patricio; Jesús García; José M. Molina

Research highlights: (1) we have developed a general framework for Computer Vision systems; (2) perceived and contextual knowledge is represented with ontologies; (3) rule-based reasoning is applied to achieve scene interpretation and vision enhancement; (4) the framework can be extended and applied in different application domains. Computer vision research has been traditionally focused on the development of quantitative techniques to calculate the properties and relations of the entities appearing in a video sequence. Most object tracking methods are based on statistical methods, which often prove inadequate for processing complex scenarios. Recently, new techniques based on the exploitation of contextual information have been proposed to overcome the problems that these classical approaches do not solve. The present paper is a contribution in this direction: we propose a Computer Vision framework aimed at the construction of a symbolic model of the scene by integrating tracking data and contextual information. The scene model, represented with formal ontologies, supports the execution of reasoning procedures in order to: (i) obtain a high-level interpretation of the scenario; (ii) provide feedback to the low-level tracking procedure to improve its accuracy and performance. The paper describes the layered architecture of the framework and the structure of the knowledge model, which have been designed in compliance with the JDL model for Information Fusion. We also explain how deductive and abductive reasoning is performed within the model to accomplish scene interpretation and tracking improvement. To show the advantages of our approach, we develop an example of the use of the framework in a video-surveillance application.


Ubiquitous Computing | 2012

Context-based scene recognition from visual data in smart homes: an Information Fusion approach

Juan Gómez-Romero; Miguel A. Serrano; Miguel A. Patricio; Jesús García; José M. Molina

Ambient Intelligence (AmI) aims at the development of computational systems that process data acquired by sensors embedded in the environment to support users in everyday tasks. Visual sensors, however, have been scarcely used in this kind of applications, even though they provide very valuable information about scene objects: position, speed, color, texture, etc. In this paper, we propose a cognitive framework for the implementation of AmI applications based on visual sensor networks. The framework, inspired by the Information Fusion paradigm, combines a priori context knowledge represented with ontologies with real time single camera data to support logic-based high-level local interpretation of the current situation. In addition, the system is able to automatically generate feedback recommendations to adjust data acquisition procedures. Information about recognized situations is eventually collected by a central node to obtain an overall description of the scene and consequently trigger AmI services. We show the extensible and adaptable nature of the approach with a prototype system in a smart home scenario.


Information Fusion | 2010

Data fusion to improve trajectory tracking in a Cooperative Surveillance Multi-Agent Architecture

Federico Castanedo; Jesús García; Miguel A. Patricio; José M. Molina

In this paper we present a Cooperative Surveillance Multi-Agent System (CS-MAS) architecture extended to incorporate dynamic coalition formation. We illustrate specific coalition formation using fusion skills. In this case, the fusion process is divided into two layers: (i) a global layer in the fusion center, which initializes the coalitions and (ii) a local layer within coalitions, where a local fusion agent is dynamically instantiated. There are several types of autonomous agent: surveillance-sensor agents, a fusion center agent, a local fusion agent, interface agents, record agents, planning agents, etc. Autonomous agents differ in their ability to carry out a specific surveillance task. A surveillance-sensor agent controls and manages individual sensors (usually video cameras). It has different capabilities depending on its functional complexity and limitations related to sensor-specific aspects. In the work presented here we add a new autonomous agent, called the local fusion agent, to the CS-MAS architecture, addressing specific problems of on-line sensor alignment, registration, bias removal and data fusion. The local fusion agent is dynamically created by the fusion center agent and involves several surveillance-sensor agents working in a coalition. We show how the inclusion of this new dynamic local fusion agent guarantees that, in a video-surveillance system, objects of interest are successfully tracked across the whole area, assuring continuity and seamless transitions.
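The data-fusion duty of such a local fusion agent can be illustrated with a standard textbook operation. The sketch below is not the paper's algorithm, just inverse-variance fusion of bias-corrected camera estimates; the `fuse` helper and all numbers are hypothetical:

```python
def fuse(estimates):
    """Fuse (value, variance, bias) estimates by inverse-variance weighting
    after bias removal. A toy stand-in for the local fusion agent's
    alignment, bias-removal and data-fusion duties."""
    num = sum((v - b) / var for v, var, b in estimates)
    den = sum(1.0 / var for _, var, _ in estimates)
    fused = num / den
    fused_var = 1.0 / den          # fused estimate is more certain than either input
    return fused, fused_var

# Two cameras report the same target's x-position (hypothetical numbers).
cam1 = (10.4, 0.25, 0.4)   # value, variance, known bias
cam2 = (9.8,  1.0, -0.2)
x, var = fuse([cam1, cam2])
print(round(x, 2), round(var, 2))   # prints 10.0 0.2
```

The more confident camera (smaller variance) dominates the fused estimate, which is what allows seamless hand-over between overlapping camera coalitions.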


International Conference on Information Fusion | 2010

Strategies and techniques for use and exploitation of Contextual Information in high-level fusion architectures

Juan Gómez-Romero; Jesús García; Michael Kandefer; James Llinas; José M. Molina; Miguel A. Patricio; Michael Prentice; Stuart C. Shapiro

Contextual Information is proving to be not only an additional exploitable information source for improving entity and situational estimates in certain Information Fusion systems, but can also be the entire focus of estimation for systems such as those directed at Ambient Intelligence (AmI) and Context-Aware (CA) applications. This paper discusses the role(s) of Contextual Information (CI) in a wide variety of IF applications, including AmI, CA, Defense, and Cyber-security among others; the issues involved in designing strategies and techniques for CI use and exploitation; some exemplars of evolving CI use/exploitation designs in our current projects; and some general frameworks that are evolving in various application domains where CI is proving critical.


Proceedings of the 4th ACM international workshop on Video surveillance and sensor networks | 2006

Extending surveillance systems capabilities using BDI cooperative sensor agents

Federico Castanedo; Miguel A. Patricio; Jesús García; José M. Molina

In this paper we describe Cooperative Sensor Agents (CSA), a logical framework of autonomous agents working in sensor network environments. CSA is a two-layer framework. In the first layer, the Sensor Layer, each agent controls and manages individual sensors. Agents in the Sensor Layer have different capabilities depending on their functional complexity and on limitations related to the specific nature of their sensors. An agent may need to cooperate in order to achieve better and more accurate performance, or may need additional capabilities that it does not have. This cooperation takes place through coalition formation in the second layer (the Coalition Layer) of our framework. In this paper we propose the CSA framework architecture and its associated protocols for coalition management. The autonomous agents are modeled using the BDI paradigm and have control over their internal state, but cooperative problem solving occurs when a group of autonomous agents chooses to work together to achieve a common goal or to use additional capabilities, forming a coalition. We present an evaluation of the Cooperative Sensor Agents in surveillance systems. In this environment, agent perception is carried out by visual sensors, and coalitions are formed in order to use the capabilities of other autonomous agents, such as recording. Each vision agent is able to detect faces, and coalition formation extends the global capabilities of the surveillance system through cooperative strategies.


Neurocomputing | 2012

A probabilistic, discriminative and distributed system for the recognition of human actions from multiple views

Rodrigo Cilla; Miguel A. Patricio; Antonio Berlanga; José M. Molina

This paper presents a distributed system for the recognition of human actions using views of the scene grabbed by different cameras. 2D frame descriptors are extracted for each available view to capture the variability in human motion. These descriptors are projected into a lower dimensional space and fed into a probabilistic classifier to output a posterior distribution of the action performed according to the descriptor computed at each camera. Classifier fusion algorithms are then used to merge the posterior distributions into a single distribution. The generated single posterior distribution is fed into a sequence classifier to make the final decision on the performed activity. The system can instantiate different algorithms for the different tasks, as the interfaces between modules are clearly defined. Results on the classification of the actions in the IXMAS dataset are reported. The accuracy of the proposed system is similar to state-of-the-art 3D methods, even though it uses only well-known 2D pattern recognition techniques and does not need to project the data into a 3D space or require camera calibration parameters.
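The classifier-fusion step described above can be sketched with the standard sum and product rules for combining per-camera posteriors. This is a generic illustration with hypothetical numbers, not the exact combination rule used in the paper, though the system's modular interfaces allow such algorithms to be swapped in:

```python
# Hypothetical per-camera posteriors over four actions (one row per camera).
posteriors = [
    [0.6, 0.2, 0.1, 0.1],   # camera 1
    [0.5, 0.3, 0.1, 0.1],   # camera 2
    [0.2, 0.5, 0.2, 0.1],   # camera 3
]

def normalize(p):
    s = sum(p)
    return [v / s for v in p]

def sum_rule(p):
    """Average the per-camera posteriors, then renormalize."""
    return normalize([sum(col) / len(p) for col in zip(*p)])

def product_rule(p):
    """Multiply the per-camera posteriors, then renormalize."""
    out = []
    for col in zip(*p):
        prod = 1.0
        for v in col:
            prod *= v
        out.append(prod)
    return normalize(out)

fused = sum_rule(posteriors)
decision = max(range(len(fused)), key=fused.__getitem__)
print(fused, decision)   # decision 0: the action most cameras agree on
```

The fused single distribution would then feed the sequence classifier that makes the final decision on the performed activity.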


Expert Systems With Applications | 2009

A Context Model and Reasoning System to improve object tracking in complex scenarios

Ana M. Sánchez; Miguel A. Patricio; Jesús García; José M. Molina

Tracking algorithms in computer vision usually fail when dealing with complex scenarios. This paper presents an extension of a general tracking system that uses context knowledge to solve tracking issues. The context layer represents knowledge about the context of the analyzed scenario and applies rules to reason with it, in order to assist the general tracking layer at different stages and enhance tracking results. The context knowledge representation and the reasoning methods are general and can be easily adapted to different scenarios. The experimental results show that the performance of the tracking system is considerably improved, while the efficiency requirements that are mandatory in real-time systems are satisfied.
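A toy illustration of the idea: context knowledge (here, a static occlusion region) corrects raw tracker output before a track is terminated. The rule format, region, and track fields below are hypothetical, not the paper's rule syntax:

```python
# Hypothetical context knowledge: a region where a pillar occludes targets,
# so a track that is lost inside it should be coasted, not deleted.
OCCLUSION_REGIONS = [((40, 0), (60, 30))]   # ((x1, y1), (x2, y2))

def inside(pos, region):
    (x1, y1), (x2, y2) = region
    return x1 <= pos[0] <= x2 and y1 <= pos[1] <= y2

def apply_context(track):
    """Rule: if a track is lost inside an occlusion region, coast it
    (predict from its last velocity) instead of terminating it."""
    if track["lost"] and any(inside(track["pos"], r) for r in OCCLUSION_REGIONS):
        vx, vy = track["velocity"]
        track["pos"] = (track["pos"][0] + vx, track["pos"][1] + vy)
        track["status"] = "coasting"
    elif track["lost"]:
        track["status"] = "terminated"
    else:
        track["status"] = "confirmed"
    return track

t = {"pos": (45, 10), "velocity": (2, 0), "lost": True}
print(apply_context(t)["status"])   # prints "coasting"
```

Because the region test is cheap, such rules can run per frame without violating real-time constraints.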


Proceedings of the 2007 EvoWorkshops on EvoCoMnet, EvoFIN, EvoIASP, EvoINTERACTION, EvoMUSART, EvoSTOC and EvoTransLog: Applications of Evolutionary Computing | 2009

Comparison Between Genetic Algorithms and the Baum-Welch Algorithm in Learning HMMs for Human Activity Classification

Óscar Pérez; Massimo Piccardi; Jesús García; Miguel A. Patricio; José M. Molina

A Hidden Markov Model (HMM) is used as an efficient and robust technique for human activity classification. The HMM evaluates a set of video recordings to classify each scene as a function of the previous, current and future scenes. The probabilities of transition between states of the HMM and the observation model must be adjusted in order to obtain a correct classification. In this work, these matrices are estimated using the well-known Baum-Welch algorithm, which models the real observations as a mixture of two Gaussians for each state. The application of the GA follows the same principle, but the optimization is carried out with the classification in mind: the GA optimizes the Gaussian parameters using the results of the classification application as its fitness function. Results show the improvement obtained with GA techniques for human activity recognition.
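The key contrast is the fitness criterion: Baum-Welch maximizes data likelihood, while the GA directly scores classification accuracy. A minimal sketch of the GA side on a toy 1-D two-class problem, where each chromosome encodes Gaussian observation parameters per class (all data and parameters are hypothetical, and a single Gaussian replaces the paper's two-Gaussian mixture for brevity):

```python
import random
import statistics

random.seed(0)

# Toy labelled observations for two "activities" (hypothetical data).
data = [(x, 0) for x in [0.9, 1.1, 1.0, 1.2, 0.8]] + \
       [(x, 1) for x in [2.9, 3.1, 3.0, 3.2, 2.8]]

def gaussian_pdf(x, mu, sigma):
    return statistics.NormalDist(mu, sigma).pdf(x)

def accuracy(chrom):
    """Fitness: classification accuracy, not data likelihood."""
    (m0, s0), (m1, s1) = chrom
    hits = 0
    for x, label in data:
        pred = 0 if gaussian_pdf(x, m0, s0) >= gaussian_pdf(x, m1, s1) else 1
        hits += (pred == label)
    return hits / len(data)

def mutate(chrom):
    return [(m + random.gauss(0, 0.3), max(0.05, s + random.gauss(0, 0.1)))
            for (m, s) in chrom]

# Simple truncation-selection GA over the Gaussian parameters.
pop = [[(random.uniform(0, 4), 1.0), (random.uniform(0, 4), 1.0)]
       for _ in range(20)]
for _ in range(30):
    pop.sort(key=accuracy, reverse=True)
    parents = pop[:5]
    pop = parents + [mutate(random.choice(parents)) for _ in range(15)]

best = max(pop, key=accuracy)
print(accuracy(best))
```

Because the fitness is the end-task metric, the GA can trade likelihood for decision-boundary quality, which is the effect the paper measures.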


Lecture Notes in Computer Science | 2006

Improving the segmentation stage of a pedestrian tracking video-based system by means of evolution strategies

Óscar Pérez; Miguel A. Patricio; Jesús García; José M. Molina

Pedestrian tracking video-based systems present particular problems, such as multi-fragmentation or a low level of compactness of the resultant blobs, due to the human shape and its movements. This paper shows how to improve the segmentation stage of a video surveillance system by adding morphological post-processing operations so that the subsequent blocks increase their performance. The parameters that regulate the new morphological processes are tuned by means of Evolution Strategies. Finally, the paper proposes a group of metrics to assess the global performance of the surveillance system. After evaluation over a large number of video sequences, the results show that the shapes of the tracks match the parts of interest more accurately. Thus, the improved segmentation stage facilitates the subsequent stages, so that the global performance of the surveillance system increases.
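The tuning loop can be sketched with a (1+1) Evolution Strategy adjusting a single morphological parameter. The quality function below is a hypothetical stand-in for the paper's segmentation metrics, which would be evaluated on real video sequences:

```python
import random

random.seed(1)

def segmentation_quality(radius):
    """Stand-in for the paper's metrics: penalize structuring-element radii
    that under-fill fragmented blobs or over-merge distinct tracks.
    (Hypothetical unimodal score peaking at radius 3.)"""
    return -(radius - 3.0) ** 2

# (1+1)-ES: Gaussian mutation plus 1/5th-success-rule step-size control.
parent, step = 10.0, 2.0
successes = 0
for gen in range(1, 101):
    child = parent + random.gauss(0, step)
    if segmentation_quality(child) > segmentation_quality(parent):
        parent = child
        successes += 1
    if gen % 10 == 0:                      # adapt step size every 10 generations
        step *= 1.5 if successes > 2 else 0.6
        successes = 0

print(round(parent, 2))   # should approach the optimum radius, 3
```

In the real system each fitness evaluation is expensive (a full segmentation pass over video), which is one reason a low-population ES is attractive for this kind of parameter tuning.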

Collaboration


Dive into Miguel A. Patricio's collaborations.

Top Co-Authors:

Juan Gómez-Romero (Instituto de Salud Carlos III)

Federico Castanedo (Instituto de Salud Carlos III)

Darío Maravall (Technical University of Madrid)

Alberto Pozo (Instituto de Salud Carlos III)

Jesús García (Instituto de Salud Carlos III)

Iván Dotú (Autonomous University of Madrid)

Javier de Lope (Technical University of Madrid)

Angel Arroyo (Technical University of Madrid)