Federico Castanedo
Instituto de Salud Carlos III
Publications
Featured research published by Federico Castanedo.
Information Fusion | 2010
Federico Castanedo; Jesús García; Miguel A. Patricio; José M. Molina
In this paper we present a Cooperative Surveillance Multi-Agent System (CS-MAS) architecture extended to incorporate dynamic coalition formation. We illustrate specific coalition formation using fusion skills. In this case, the fusion process is divided into two layers: (i) a global layer in the fusion center, which initializes the coalitions and (ii) a local layer within coalitions, where a local fusion agent is dynamically instantiated. There are several types of autonomous agent: surveillance-sensor agents, a fusion center agent, a local fusion agent, interface agents, record agents, planning agents, etc. Autonomous agents differ in their ability to carry out a specific surveillance task. A surveillance-sensor agent controls and manages individual sensors (usually video cameras). It has different capabilities depending on its functional complexity and limitations related to sensor-specific aspects. In the work presented here we add a new autonomous agent, called the local fusion agent, to the CS-MAS architecture, addressing specific problems of on-line sensor alignment, registration, bias removal and data fusion. The local fusion agent is dynamically created by the fusion center agent and involves several surveillance-sensor agents working in a coalition. We show how the inclusion of this new dynamic local fusion agent guarantees that, in a video-surveillance system, objects of interest are successfully tracked across the whole area, assuring continuity and seamless transitions.
Proceedings of the 4th ACM international workshop on Video surveillance and sensor networks | 2006
Federico Castanedo; Miguel A. Patricio; Jesús García; José M. Molina
In this paper we describe Cooperative Sensor Agents (CSA), a logical framework of autonomous agents working in sensor network environments. CSA is a two-layer framework. In the first layer, called the Sensor Layer, each agent controls and manages individual sensors. Agents in the Sensor Layer have different capabilities depending on their functional complexity and limitations related to sensor-specific aspects. An agent may need to cooperate in order to achieve better and more accurate performance, or may need additional capabilities that it does not have. This cooperation takes place through coalition formation in the second layer (the Coalition Layer) of our framework. In this paper we propose the CSA framework architecture and its associated protocols for coalition management. The autonomous agents are modeled using the BDI paradigm and have control over their internal state. Cooperative problem solving occurs when a group of autonomous agents choose to work together to achieve a common goal or to use additional capabilities, forming a coalition. We present an experimental evaluation of Cooperative Sensor Agents in surveillance systems. In this environment, agent perception is carried out by visual sensors, and coalitions are formed in order to use the capabilities of other autonomous agents, such as recording. Each vision agent is able to detect faces, and coalition formation extends the global capabilities of the surveillance system through cooperative strategies.
international conference on information fusion | 2007
Federico Castanedo; Miguel A. Patricio; Jesús García; José M. Molina
A surveillance system that fuses data from several data sources is more robust than one that depends on a single source of input. Fusing the information acquired by a vision system is a difficult task, since the system needs reliable error models and must take into account poor measurement performance. In this research, we use a two-dimensional object correspondence and tracking method based on the ground-plane projection of the blob centroid. We propose a robust method that employs a two-phase algorithm, which uses a heuristic value and context information to automatically combine each source of information. The fusion process is carried out by a fusion agent in a multi-agent surveillance system. Experimental results on real video sequences show the effectiveness and robustness of the system.
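The ground-plane projection mentioned in this abstract is typically computed with a planar homography; the paper's exact formulation is not given here, so the following Python sketch (with an illustrative identity homography and hypothetical function names) only shows the general idea:

```python
# Hypothetical sketch: projecting a blob centroid from image coordinates
# onto the ground plane with a 3x3 homography H. The matrix values and
# function names are illustrative assumptions, not taken from the paper.

def project_to_ground(H, point):
    """Apply homography H (3x3 nested lists) to an image point (x, y)."""
    x, y = point
    # Homogeneous multiplication: [x', y', w'] = H * [x, y, 1]
    xp = H[0][0] * x + H[0][1] * y + H[0][2]
    yp = H[1][0] * x + H[1][1] * y + H[1][2]
    wp = H[2][0] * x + H[2][1] * y + H[2][2]
    # Normalize by the homogeneous coordinate
    return (xp / wp, yp / wp)

# With the identity homography the point is unchanged.
H_id = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(project_to_ground(H_id, (320.0, 240.0)))  # -> (320.0, 240.0)
```

In practice H would be estimated once per camera during calibration (e.g. from ground-plane point correspondences) and reused for every frame.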
Journal of Intelligent and Robotic Systems | 2011
Federico Castanedo; Jesús García; Miguel A. Patricio; José M. Molina
The newest surveillance applications attempt more complex tasks, such as analyzing the behavior of individuals and crowds. These tasks may use a distributed visual sensor network in order to gain coverage and exploit the inherent redundancy of overlapping fields of view. This article presents a multi-agent architecture based on the Belief-Desire-Intention (BDI) model for processing information and fusing data in a distributed visual sensor network. Instead of exchanging raw images between the agents involved in the visual network, local signal processing is performed and only the key observed features are shared. After a registration or calibration phase, the proposed architecture performs tracking, data fusion and coordination. Using the proposed multi-agent architecture, we focus on how to fuse the ground-plane position estimates that different agents assign to the same object. This fusion process serves two purposes: (1) to obtain continuity in tracking across the fields of view of the cameras involved in the distributed network, and (2) to improve tracking quality by means of data fusion techniques and by discarding unreliable sensors. Experimental results on two different scenarios show that the designed architecture can successfully track an object even when occlusions or sensor errors take place. Sensor errors are reduced by exploiting the inherent redundancy of a visual sensor network with overlapping fields of view.
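The abstract does not state which fusion rule is used to combine the per-camera position estimates; a common choice for this kind of problem is inverse-variance weighting, sketched below as an assumed illustration (the variance model and names are not from the paper):

```python
# Illustrative sketch only: inverse-variance weighted fusion of 2D
# ground-plane position estimates from several cameras. Each estimate
# is ((x, y), variance); lower variance means a more reliable sensor.

def fuse_positions(estimates):
    """Fuse [( (x, y), variance ), ...] into a single (x, y)."""
    # Total weight is the sum of inverse variances.
    wsum = sum(1.0 / var for _, var in estimates)
    # Each coordinate is averaged with weight 1/variance.
    fx = sum(x / var for (x, _), var in estimates) / wsum
    fy = sum(y / var for (_, y), var in estimates) / wsum
    return (fx, fy)

# Two equally reliable cameras: the fused estimate is the midpoint.
print(fuse_positions([((0.0, 0.0), 1.0), ((2.0, 2.0), 1.0)]))  # -> (1.0, 1.0)
```

Discarding unreliable sensors, as the abstract describes, would amount to dropping estimates whose variance exceeds some threshold before fusing.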
international conference on distributed smart cameras | 2008
Federico Castanedo; Jesús García; Miguel A. Patricio; José M. Molina
One of the main characteristics of a visual sensor network environment is the large amount of data generated. In addition, processes such as object tracking generate highly noisy output, which may produce an inconsistent system output, that is, large differences between the tracking information provided by the visual sensors. A visual sensor network with overlapping fields of view can exploit the redundancy among the sensors' fields of view to avoid inconsistencies and obtain more accurate results. In this paper, we present a visual sensor network system with overlapping fields of view, modeled as a network of software agents. Communication among the software agents allows feedback information to be used in the visual sensors, a scheme we call active fusion. We present results of the software architecture supporting this active fusion scheme in an indoor evaluation scenario.
international work-conference on the interplay between natural and artificial computation | 2011
Federico Castanedo; Hamid K. Aghajan; Richard P. Kleihorst
This paper presents a novel way to perform probabilistic modeling of occupancy patterns from a sensor network. The approach is based on the Latent Dirichlet Allocation (LDA) model, a generative and unsupervised probabilistic model for collections of discrete data. The application of the LDA model is shown using a real dataset of occupancy logs from the sensor network of a modern office building. Continuous sequences of binary sensor readings are segmented to build the discrete dataset (bags of words). These bags of words are then used to train the model with a fixed number of topics, also known as routines. Preliminary results show that the LDA model successfully finds latent topics across all rooms and thereby recovers the dominant occupancy patterns, or routines, in the sensor network.
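As a rough illustration of the bag-of-words construction described above, one can segment the binary readings into time windows and count activations per sensor; the fixed-window segmentation and names below are assumptions for illustration, not necessarily the paper's scheme:

```python
# Hypothetical sketch: turn a stream of (timestamp, sensor_id, state)
# binary readings into per-window bags of words for LDA training.
# Fixed-length windows are an assumed segmentation, chosen for clarity.
from collections import Counter

def build_bags(readings, window):
    """Group activations (state == 1) into windows of `window` time units;
    each window becomes a Counter mapping sensor_id -> activation count."""
    bags = {}
    for t, sensor, state in readings:
        if state:  # only activations become "words"
            bags.setdefault(t // window, Counter())[sensor] += 1
    # Return bags ordered by window index.
    return [bags[k] for k in sorted(bags)]

readings = [(0, "hall", 1), (3, "desk", 0), (7, "hall", 1), (12, "desk", 1)]
print(build_bags(readings, 10))  # -> [Counter({'hall': 2}), Counter({'desk': 1})]
```

Each resulting Counter plays the role of one "document" for LDA; an off-the-shelf LDA implementation would then be trained on these counts with a fixed number of topics.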
practical applications of agents and multi-agent systems | 2009
Federico Castanedo; Jesús García; Miguel A. Patricio; José M. Molina
An intelligent Visual Sensor Network (VSN) should consist of autonomous visual sensors that exchange information with each other and have reasoning capabilities. The exchanged information must be fused and delivered to the end user as one unit. In this paper, we investigate the use of the multi-agent paradigm to enhance the fusion process in a VSN. A key issue in a VSN is determining which information to exchange between nodes, which data to fuse and which information to present to the final user. These issues are investigated and reported in this paper, along with the benefits of an agent-based VSN. The aim of the paper is to report how the multi-agent architecture contributes to solving VSN problems. A real prototype of an intelligent VSN using the multi-agent paradigm has been implemented with the objective of enhancing the data fusion process.
advanced video and signal based surveillance | 2007
Federico Castanedo; Miguel A. Patricio; Jesús García; José M. Molina
In this paper an approach for multi-sensor coordination in a multi-agent visual sensor network is presented. A Belief-Desire-Intention model of multi-agent systems is employed. In this multi-agent system, the interactions between several surveillance-sensor agents and their respective fusion agent are discussed. The surveillance process is improved using a bottom-up/top-down coordination approach in which a fusion agent controls the coordination process. In the bottom-up phase, information is sent to the fusion agent. In the top-down stage, feedback messages are sent to those surveillance-sensor agents whose tracking is inconsistent with the global fused tracking process. This feedback information allows the surveillance-sensor agent to correct its tracking process. Finally, preliminary experiments with the PETS 2006 database are presented.
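The top-down feedback decision can be caricatured as a consistency check between each agent's local track and the fused track; the Euclidean distance threshold used in this sketch is an assumed criterion, not the paper's actual test:

```python
# Illustrative sketch: decide whether a surveillance-sensor agent should
# receive a top-down feedback message because its local ground-plane
# track deviates too far from the globally fused track.
# The distance-threshold criterion here is an assumption for illustration.

def needs_feedback(local_track, fused_track, threshold):
    """Return True if the local (x, y) estimate deviates from the fused
    (x, y) estimate by more than `threshold` ground-plane units."""
    dx = local_track[0] - fused_track[0]
    dy = local_track[1] - fused_track[1]
    return (dx * dx + dy * dy) ** 0.5 > threshold

# A track 5 units away from the fused estimate exceeds a 4.9 threshold.
print(needs_feedback((0.0, 0.0), (3.0, 4.0), 4.9))  # -> True
```

A fusion agent would run this check per tracked object and send a correction message only to the flagged surveillance-sensor agents.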
international conference on distributed smart cameras | 2013
Yasutaka Fukumoto; Federico Castanedo; Hamid K. Aghajan
In an ambient intelligence environment, cameras serve as interface or monitoring devices for user activities. However, the use of cameras is often associated with privacy concerns and resource limitations. To alleviate these problems, anomaly detection by fusing an accelerometer and a passive infrared (PIR) sensor can be employed to trigger the corresponding camera for further analysis only when the need arises. This paper discusses the combination of these off-the-shelf sensors to detect specific anomalous activities. In particular, an example of detecting an irregular user in the work environment is presented. We describe how to extract expressive features from both modalities and combine them to train a classifier on highly imbalanced datasets. Experimental results over real-life data show the effectiveness of our approach in detecting anomalous activities and therefore its potential for reducing the use of cameras.
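Combining the two modalities into one feature vector might look like the sketch below; the specific statistics chosen (mean and deviation of accelerometer magnitude, PIR activation rate) are assumptions for illustration, not the features reported in the paper:

```python
# Hypothetical sketch: build a joint feature vector from an accelerometer
# and a PIR sensor over one time window. The choice of statistics is an
# illustrative assumption, not taken from the paper.
import statistics

def extract_features(accel_magnitudes, pir_readings):
    """accel_magnitudes: list of floats (acceleration magnitude samples);
    pir_readings: list of 0/1 motion detections in the same window."""
    return [
        statistics.mean(accel_magnitudes),    # average motion intensity
        statistics.pstdev(accel_magnitudes),  # motion variability
        sum(pir_readings) / len(pir_readings),  # PIR activation rate
    ]

# Constant acceleration and intermittent PIR activity:
print(extract_features([1.0, 1.0, 1.0], [0, 1, 0, 1]))  # -> [1.0, 0.0, 0.5]
```

Vectors like these would then feed a classifier trained with some imbalance-aware strategy (e.g. class weighting or resampling), since anomalous windows are rare relative to normal ones.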
Archive | 2009
Federico Castanedo; Miguel A. Patricio; Jesús García; José M. Molina
In this chapter we describe Cooperative Surveillance Agents (CSAs), a logical framework of autonomous agents working in sensor network environments. CSAs is a two-layer framework. In the first layer, called the Sensor Layer, each agent controls and manages individual sensors. Agents in the Sensor Layer have different capabilities depending on their functional complexity and limitations related to sensor-specific aspects. An agent may need to cooperate in order to achieve better and more accurate performance, or may need additional capabilities that it does not have. This cooperation takes place through coalition formation in the second layer (the Coalition Layer) of our framework. In this chapter we propose the CSA framework architecture and protocols for coalition management. The autonomous agents are modeled using the BDI paradigm and have control over their internal state. Cooperative problem solving occurs when a group of autonomous agents choose to work together to achieve a common goal and form a coalition. This emergent cooperative behavior fits well with the multi-agent paradigm. We present an experimental evaluation of CSAs in which agent perception is carried out by visual sensors and each agent is able to track pedestrians in its scene. We show how coalition formation improves system accuracy by tracking people using cooperative fusion strategies.