Juan C. SanMiguel
Autonomous University of Madrid
Publications
Featured research published by Juan C. SanMiguel.
Advanced Video and Signal Based Surveillance | 2009
Álvaro Bayona; Juan C. SanMiguel; José M. Martínez
In several video surveillance applications, such as the detection of abandoned/stolen objects or parked vehicles, the detection of stationary foreground objects is a critical task. In the literature, many algorithms have been proposed that deal with the detection of stationary foreground objects, the majority of them based on background subtraction techniques. In this paper we discuss various stationary object detection approaches, comparing them in typical surveillance scenarios (extracted from standard datasets). Firstly, the existing approaches based on background subtraction are organized into categories. Then, a representative technique of each category is selected and described. Finally, a comparative evaluation using objective and subjective criteria is performed on video surveillance sequences selected from the PETS 2006 and i-LIDS for AVSS 2007 datasets, analyzing the advantages and drawbacks of each selected approach.
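The core idea shared by these background-subtraction approaches can be illustrated with a minimal persistence-counting sketch: a pixel is declared stationary foreground once it has differed from the background model for enough consecutive frames. All names, thresholds, and the static background model here are illustrative assumptions, not the surveyed algorithms themselves.

```python
import numpy as np

def stationary_mask(frames, bg, diff_thresh=25, persist_frames=5):
    """Flag pixels that stay foreground for `persist_frames` consecutive frames.

    frames: iterable of HxW uint8 grayscale frames
    bg:     HxW background model (assumed static for this sketch)
    """
    bg = bg.astype(np.int16)
    persistence = np.zeros(bg.shape, dtype=np.int32)
    for f in frames:
        fg = np.abs(f.astype(np.int16) - bg) > diff_thresh  # background subtraction
        persistence = np.where(fg, persistence + 1, 0)      # reset on background
    return persistence >= persist_frames

# Toy scene: a 2x2 "abandoned object" appears at frame 2 and stays.
bg = np.zeros((6, 6), dtype=np.uint8)
frames = []
for t in range(10):
    f = bg.copy()
    if t >= 2:
        f[1:3, 1:3] = 200  # stationary object
    frames.append(f)
mask = stationary_mask(frames, bg)
```

The surveyed techniques differ mainly in how they replace this naive counter (e.g., with dual background models or sub-sampled masks) to tolerate noise and temporary occlusions.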
Advanced Video and Signal Based Surveillance | 2009
Juan C. SanMiguel; José M. Martínez; Álvaro García
In this paper, we propose an ontology for representing the prior knowledge related to video event analysis. It is composed of two types of knowledge, related to the application domain and to the analysis system. Domain knowledge involves all the high-level semantic concepts in the context of each examined domain (objects, events, context...), whilst system knowledge involves the capabilities of the analysis system (algorithms, reactions to events...). The proposed ontology is structured in two parts: the basic ontology (composed of the basic concepts and their specializations) and the domain-specific extensions. Additionally, a video analysis framework based on the proposed ontology is defined for the analysis of different application domains, showing its potential use. To demonstrate real applicability, the ontology is specialized for the Underground video-surveillance domain, with results that show its usability and effectiveness.
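The split between basic concepts, domain knowledge, and system knowledge can be sketched as a tiny class hierarchy. The concrete concept names (`LeaveObject`, `StaticRegionDetector`, etc.) are hypothetical stand-ins, not the ontology's actual vocabulary.

```python
from dataclasses import dataclass, field

# Basic ontology: root concepts that domain-specific extensions specialize.
@dataclass
class Concept:
    name: str

@dataclass
class Object(Concept):        # domain knowledge: entities in the scene
    pass

@dataclass
class Event(Concept):         # domain knowledge: what can happen
    actors: list = field(default_factory=list)

@dataclass
class Algorithm(Concept):     # system knowledge: analysis capabilities
    detects: list = field(default_factory=list)

# Hypothetical domain-specific extension for underground surveillance.
person = Object("Person")
bag = Object("AbandonedLuggage")
drop = Event("LeaveObject", actors=[person, bag])
detector = Algorithm("StaticRegionDetector", detects=[drop])
```

Linking `Algorithm.detects` to `Event` is what lets a framework reason jointly about what can happen in a scene and what the system can actually observe.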
IEEE Computer | 2014
Juan C. SanMiguel; Christian Micheloni; Karen Shoop; Gian Luca Foresti; Andrea Cavallaro
Camera networks that reconfigure while performing multiple tasks have unique requirements, such as concurrent task allocation with limited resources, the sharing of data among fields of view across the network, and coordination among heterogeneous devices.
International Conference on Image Processing | 2010
Álvaro Bayona; Juan C. SanMiguel; José M. Martínez
In this paper we describe a new algorithm focused on obtaining stationary foreground regions, which is useful for applications like the detection of abandoned/stolen objects and parked vehicles. Firstly, a sub-sampling scheme based on background subtraction techniques is implemented to obtain stationary foreground regions. Secondly, some modifications are introduced on this base algorithm with the purpose of reducing the amount of falsely detected stationary foreground. Finally, we evaluate the proposed algorithm and compare results with the base algorithm using video surveillance sequences from the PETS 2006, PETS 2007 and i-LIDS for AVSS 2007 datasets. Experimental results show that the proposed algorithm improves the detection of stationary foreground regions as compared to the base algorithm.
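A sub-sampling scheme of this kind can be sketched by AND-ing foreground masks sampled at several temporal rates: only regions that are foreground at every rate (i.e., that stayed put long enough) survive. This is a loose illustration of the idea, with made-up rates, not the paper's exact algorithm.

```python
import numpy as np

def stationary_by_subsampling(fg_masks, rates=(1, 5, 10), samples=2):
    """Combine recent foreground masks sampled at several temporal rates.

    A pixel is stationary only if it is foreground at every sampled
    instant of every rate, so short-lived motion is rejected.
    """
    fg_masks = np.asarray(fg_masks, dtype=bool)
    T = len(fg_masks)
    stationary = np.ones(fg_masks.shape[1:], dtype=bool)
    for r in rates:
        for k in range(samples):
            stationary &= fg_masks[T - 1 - k * r]  # sample back in time at rate r
    return stationary

# Pixel 0 has been foreground for all 20 frames (stationary object);
# pixel 1 only became foreground at frame 15 (recent motion).
masks = np.zeros((20, 2), dtype=bool)
masks[:, 0] = True
masks[15:, 1] = True
stationary = stationary_by_subsampling(masks)
```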
IEEE Transactions on Image Processing | 2012
Juan C. SanMiguel; Andrea Cavallaro; José M. Martínez
We propose an adaptive framework to estimate the quality of video tracking algorithms without ground-truth data. The framework is divided into two main stages, namely, the estimation of the tracker condition to identify temporal segments during which a target is lost and the measurement of the quality of the estimated track when the tracker is successful. A key novelty of the proposed framework is the capability of evaluating video trackers with multiple failures and recoveries over long sequences. Successful tracking is identified by analyzing the uncertainty of the tracker, whereas track recovery from errors is determined based on the time-reversibility constraint. The proposed approach is demonstrated on a particle filter tracker over a heterogeneous data set. Experimental results show the effectiveness and robustness of the proposed framework that improves state-of-the-art approaches in the presence of tracking challenges such as occlusions, illumination changes, and clutter and on sequences containing multiple tracking errors and recoveries.
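Two of the ingredients above, detecting track loss from the tracker's own uncertainty and confirming recovery via time-reversibility, can be sketched for a particle filter. The spread measure, thresholds, and toy particle clouds are illustrative assumptions, not the paper's actual statistics.

```python
import numpy as np

def particle_spread(particles):
    """Uncertainty proxy: trace of the particle-cloud covariance."""
    return float(np.trace(np.cov(particles.T)))

def tracker_lost(particles, spread_thresh=50.0):
    """A diffuse particle cloud suggests the target has been lost."""
    return particle_spread(particles) > spread_thresh

def recovered(track_fwd, track_bwd, dist_thresh=5.0):
    """Time-reversibility check: a track re-run backwards in time should
    retrace the forward track if the tracker has genuinely re-locked."""
    d = np.linalg.norm(np.asarray(track_fwd) - np.asarray(track_bwd)[::-1], axis=1)
    return bool(d.mean() < dist_thresh)

rng = np.random.default_rng(0)
tight = rng.normal([50, 50], 1.0, size=(200, 2))     # confident tracker
diffuse = rng.normal([50, 50], 20.0, size=(200, 2))  # lost tracker
```

Alternating these two tests over a long sequence is what allows a tracker to be scored through multiple failures and recoveries without ground truth.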
Computer Vision and Image Understanding | 2016
Diego Ortego; Juan C. SanMiguel; José M. Martínez
We propose a new temporal-spatial block-based background estimation approach to compute a foreground-free image for video sequences. Threshold-free clustering is proposed to discover similar blocks over time which contain the background data. An iterative spatial reconstruction selects blocks to obtain the final background. The performance improvement is validated on two datasets (36 sequences) against 13 state-of-the-art algorithms. Background estimation in video consists of extracting a foreground-free image from a set of training frames. Moving and stationary objects may affect the background visibility, thus invalidating the assumption of much of the related literature that the background is the temporally dominant data. In this paper, we present a temporal-spatial block-level approach for background estimation in video to cope with moving and stationary objects. First, a Temporal Analysis module obtains a compact representation of the training data by motion filtering and dimensionality reduction. Then, a threshold-free hierarchical clustering determines a set of candidates to represent the background for each spatial location (block). Second, a Spatial Analysis module iteratively reconstructs the background using these candidates. For each spatial location, multiple reconstruction hypotheses (paths) through its neighboring locations are explored, enforcing inter-block similarity and intra-block homogeneity constraints in terms of color discontinuity, color dissimilarity and variability. The experimental results show that the proposed approach outperforms the related state-of-the-art on challenging video sequences in the presence of moving and stationary objects.
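The per-block temporal clustering step can be sketched as follows: cluster one block's samples over time and take the largest cluster as the background candidate. Note this sketch uses a fixed distance tolerance, i.e. a simplified, *thresholded* stand-in for the paper's threshold-free hierarchical clustering.

```python
import numpy as np

def background_candidate(block_stack, tol=10.0):
    """Greedily cluster a block's temporal samples; the largest cluster is
    assumed to hold the background appearance of that spatial location."""
    clusters = []  # each cluster: list of flattened block vectors
    for b in block_stack.reshape(len(block_stack), -1).astype(float):
        for c in clusters:
            if np.linalg.norm(b - np.mean(c, axis=0)) < tol * b.size ** 0.5:
                c.append(b)  # close enough to an existing cluster
                break
        else:
            clusters.append([b])
    biggest = max(clusters, key=len)
    return np.mean(biggest, axis=0).reshape(block_stack.shape[1:])

# 10 temporal samples of one 4x4 block: background ~20, a passing object ~200.
stack = np.full((10, 4, 4), 20.0)
stack[3:5] = 200.0  # object occupies the block in frames 3-4
bg_block = background_candidate(stack)
```

The paper's spatial reconstruction stage then chooses among such candidates per block so that neighboring blocks agree; here only the temporal side is shown.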
Computer Vision and Image Understanding | 2012
Juan C. SanMiguel; José M. Martínez
This paper presents an approach for real-time video event recognition that combines the accuracy and descriptive capabilities of, respectively, probabilistic and semantic approaches. Based on a state-of-the-art knowledge representation, we define a methodology for building recognition strategies from event descriptions that consider the uncertainty of the low-level analysis. Then, we efficiently organize such strategies for performing the recognition according to the temporal characteristics of events. In particular, we use Bayesian Networks and probabilistically-extended Petri Nets for recognizing, respectively, simple and complex events. For demonstrating the proposed approach, a framework has been implemented for recognizing human-object interactions in the video monitoring domain. The experimental results show that our approach improves the event recognition performance as compared to the widely used deterministic approach.
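How a Bayesian Network absorbs low-level uncertainty can be shown with a tiny worked example for a hypothetical "pick up object" event; the evidence variables and CPT values below are invented for illustration, not taken from the paper.

```python
# Hypothetical simple event: "pickup", with two noisy low-level detections
# as parents: near (person near object) and hand_low (hand lowered).
P_near = 0.8          # detector confidence that the person is near the object
P_hand = 0.7          # detector confidence that the hand is lowered
# CPT: P(pickup = 1 | near, hand_low), illustrative values
cpt = {(1, 1): 0.9, (1, 0): 0.3, (0, 1): 0.2, (0, 0): 0.05}

# Marginalize over the uncertain evidence instead of thresholding it,
# which is exactly what a hard deterministic rule cannot do.
p_pickup = sum(
    cpt[(n, h)]
    * (P_near if n else 1 - P_near)
    * (P_hand if h else 1 - P_hand)
    for n in (0, 1)
    for h in (0, 1)
)
```

A deterministic rule would output 0 or 1 from thresholded detections; the network instead yields a graded belief (here 0.607) that downstream Petri-net stages can compose into complex events.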
Advanced Video and Signal Based Surveillance | 2011
Luis Alberto Caro Campos; Juan C. SanMiguel; José M. Martínez
In this paper we propose an approach based on active contours to classify previously detected static foreground regions as abandoned or stolen objects. Firstly, the static foreground object contour is extracted. Then, an active contour adjustment is performed on the current and the background frames. Finally, similarities between the initial contour and the two adjustments are studied to decide whether the object is abandoned or stolen. Three different methods have been tested for this adjustment. Experimental results over a heterogeneous dataset show that the proposed method outperforms state-of-the-art approaches and provides a robust solution against inaccurate data (i.e., wrongly segmented static foreground objects), which is common in complex scenarios.
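The abandoned-vs-stolen decision rests on which frame better explains the region's boundary: this can be sketched by comparing gradient energy along the contour in the current frame versus the background frame (a crude proxy for the paper's active-contour adjustment; all names and the toy data are assumptions).

```python
import numpy as np

def edge_energy(image, contour_px):
    """Mean gradient magnitude sampled along the contour pixels."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    return float(np.mean([mag[y, x] for y, x in contour_px]))

def classify_static_region(current, background, contour_px):
    """If the contour has stronger edges in the current frame, the object is
    present there (abandoned); if the background frame explains the edges
    better, the object was removed from it (stolen)."""
    return ("abandoned"
            if edge_energy(current, contour_px) > edge_energy(background, contour_px)
            else "stolen")

# Toy frames: a bright 4x4 square present in `current` only -> abandoned.
background = np.zeros((10, 10))
current = background.copy()
current[3:7, 3:7] = 255.0
contour = [(3, x) for x in range(3, 7)] + [(6, x) for x in range(3, 7)]
label = classify_static_region(current, background, contour)
```

The active-contour formulation in the paper additionally lets the boundary deform before comparison, which is what gives robustness to inaccurate segmentation.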
IEEE Sensors Journal | 2015
Juan C. SanMiguel; Andrea Cavallaro
We propose an approach to create camera coalitions in resource-constrained camera networks and demonstrate it for collaborative target tracking. We cast coalition formation as a decentralized resource allocation process where the best cameras among those viewing a target are assigned to a coalition based on marginal utility theory. A manager is dynamically selected to negotiate with cameras whether they will join the coalition and to coordinate the tracking task. This negotiation is based not only on the utility brought by each camera to the coalition, but also on the associated cost (i.e. additional processing and communication). Experimental results and comparisons using simulations and real data show that the proposed approach outperforms related state-of-the-art methods by improving tracking accuracy in cost-free settings. Moreover, under resource limitations, the proposed approach controls the tradeoff between accuracy and cost, and achieves energy savings with only a minor reduction in accuracy.
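The marginal-utility idea can be sketched as a greedy manager that admits a camera only while its marginal utility exceeds its cost; the view-quality scores, costs, and diminishing-returns utility below are invented for illustration, not the paper's model.

```python
def form_coalition(cameras, coalition_utility):
    """cameras: {name: cost}; coalition_utility: fn(set of names) -> float."""
    coalition = set()
    improved = True
    while improved:
        improved = False
        base = coalition_utility(coalition)
        best, best_gain = None, 0.0
        for cam, cost in cameras.items():
            if cam in coalition:
                continue
            # marginal utility of adding this camera, minus its cost
            gain = coalition_utility(coalition | {cam}) - base - cost
            if gain > best_gain:
                best, best_gain = cam, gain
        if best is not None:
            coalition.add(best)
            improved = True
    return coalition

# Hypothetical per-camera view quality; utility saturates (diminishing returns),
# so weak or redundant views stop paying for their processing/communication cost.
quality = {"cam1": 0.9, "cam2": 0.6, "cam3": 0.1}
costs = {"cam1": 0.2, "cam2": 0.2, "cam3": 0.2}
utility = lambda coal: sum(sorted((quality[c] for c in coal), reverse=True)[i] * 0.5**i
                           for i in range(len(coal)))
coalition = form_coalition(costs, utility)
```

Here cam3's marginal utility never covers its cost, so it is left out: the cost term is what trades tracking accuracy against energy.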
Advanced Video and Signal Based Surveillance | 2010
Juan C. SanMiguel; José M. Martínez
In video-surveillance systems, the moving object segmentation stage (commonly based on background subtraction) has to deal with several issues like noise, shadows and multimodal backgrounds. Hence, its failure is inevitable and its automatic evaluation is a desirable requirement for online analysis. In this paper, we propose a hierarchy of existing performance measures not based on ground-truth for video object segmentation. Then, four measures based on color and motion are selected and examined in detail with different segmentation algorithms and standard test sequences for video object segmentation. Experimental results show that color-based measures perform better than motion-based measures and that background multimodality heavily reduces the accuracy of all obtained evaluation results.
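The intuition behind color-based, ground-truth-free measures can be sketched as boundary color contrast: a good mask separates dissimilar colors, so the difference between the segmented region and its immediate surroundings is a quality proxy. This is an illustrative measure in the same spirit, not one of the four evaluated in the paper.

```python
import numpy as np

def boundary_color_contrast(frame, mask):
    """Mean color difference between the segmented region and the one-pixel
    ring around it; higher suggests a better-placed mask."""
    inside = frame[mask].mean(axis=0)
    # one-pixel dilation via 4-neighbourhood shifts
    d = mask.copy()
    d[1:, :] |= mask[:-1, :]; d[:-1, :] |= mask[1:, :]
    d[:, 1:] |= mask[:, :-1]; d[:, :-1] |= mask[:, 1:]
    ring = d & ~mask
    outside = frame[ring].mean(axis=0)
    return float(np.linalg.norm(inside - outside))

# A red object on a green background: a correct mask should score higher
# than a misplaced one, with no ground truth involved.
frame = np.zeros((8, 8, 3)); frame[..., 1] = 200.0   # green background
frame[2:5, 2:5] = [200.0, 0.0, 0.0]                  # red object
good = np.zeros((8, 8), bool); good[2:5, 2:5] = True
bad = np.zeros((8, 8), bool); bad[4:7, 4:7] = True   # shifted mask
```

A multimodal background (e.g., swaying foliage) blurs the inside/outside color statistics, which is consistent with the accuracy drop the experiments report.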