Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sílvio Filipe is active.

Publication


Featured research published by Sílvio Filipe.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

The UBIRIS.v2: A Database of Visible Wavelength Iris Images Captured On-the-Move and At-a-Distance

Hugo Proença; Sílvio Filipe; R. S. Santos; João Oliveira; Luís A. Alexandre

The iris is regarded as one of the most useful traits for biometric recognition and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near infrared images with enough quality. Also, all of the publicly available iris image databases contain data corresponding to such imaging constraints and are therefore exclusively suitable for evaluating methods designed to operate in this type of environment. The main purpose of this paper is to announce the availability of the UBIRIS.v2 database, a multisession iris image database which singularly contains data captured in the visible wavelength, at-a-distance (between four and eight meters) and on-the-move. This database is freely available to researchers concerned with visible wavelength iris recognition and will be useful in assessing the feasibility and specifying the constraints of this type of biometric recognition.


Artificial Intelligence Review | 2015

RETRACTED ARTICLE: From the human visual system to the computational models of visual attention: a survey

Sílvio Filipe; Luís A. Alexandre

This article has been retracted by the authors. The article included text and ideas taken by the first author, without acknowledgement, from the following published article: “State-of-the-art in visual attention modeling”, Ali Borji, Laurent Itti, IEEE Transactions on Pattern Analysis and Machine Intelligence 35(1) (2013) 185–207, published online 05/04/12. Most notably:

• In Sect. 3.1 (Biological plausible methods) the following paragraphs or sentences largely derive from the Borji and Itti article: “Rosenholtz (1999), Rosenholtz et al. (2004) designed a model ...”; “In Gu et al. (2005), a saliency map ...”; “Le Meur et al. (2006) proposed ...”; “Kootstra et al. (2008) developed ...”; “Marat et al. (2009) proposed ...”; “Chikkerur et al. (2010) proposed ...”; and “Murray et al. (2011) introduced ...”.

• In Sect. 3.2 (Computational methods) the following paragraphs or sentences largely derive from the Borji and Itti article: “Salah et al. (2002) proposed ...”; “Ramstrom and Christensen (2002) introduced ...”; “In Rao et al. (2002) and Rao (2005), they proposed ...”; “Jodogne and Piater (2007) introduced ...”; “Boccignone (2008) presented ...”; “Rosin (2009) proposed ...”; “Mahadevan and Vasconcelos (2010) presented ...”; and “Wang et al. (2011) introduced ...”.

• In Sect. 3.3 (Hybrid methods) the following paragraphs or sentences largely derive from the Borji and Itti article: “Lee and Yu (1999) proposed ...”; “Peters et al. (2005), Peters and Itti (2007a,b, 2008) trained ...”; “Weights between two nodes ...”; “The model consists of a nonlinear ...”; “Zhang et al. (2007, 2008) proposed ...”; “Pang et al. (2008) presented ...”; “Zhang et al. (2009) extended ...”; and “Li et al. (2010a) presented ...”.

• Section 6 (Discussion) largely derives from, or summarizes ideas presented in, Sects. 2.1, 2.2, 2.4, 2.6 and 3.1–3.8 of the Borji and Itti article.

The first author apologizes for his action.


IEEE Transactions on Image Processing | 2015

BIK-BUS: Biologically Motivated 3D Keypoint Based on Bottom-Up Saliency

Sílvio Filipe; Laurent Itti; Luís A. Alexandre

One of the major problems found when developing a 3D recognition system involves the choice of keypoint detector and descriptor. To help solve this problem, we present a new method for the detection of 3D keypoints on point clouds and we perform benchmarking between each pair of 3D keypoint detector and 3D descriptor to evaluate their performance on object and category recognition. These evaluations are done on a public database of real 3D objects. Our keypoint detector is inspired by the behavior and neural architecture of the primate visual system. The 3D keypoints are extracted based on a bottom-up 3D saliency map, that is, a map that encodes the saliency of objects in the visual environment. The saliency map is determined by computing conspicuity maps (a combination across different modalities) of the orientation, intensity, and color information in a bottom-up and purely stimulus-driven manner. These three conspicuity maps are fused into a 3D saliency map and, finally, the focus of attention (or keypoint location) is sequentially directed to the most salient points in this map. Inhibiting this location automatically allows the system to attend to the next most salient location. The main conclusions are: with a similar average number of keypoints, our 3D keypoint detector outperforms the other eight 3D keypoint detectors evaluated, achieving the best result in 32 of the evaluated metrics in the category and object recognition experiments, while the second best detector obtained the best result in only eight of these metrics. The only drawback is the computational time, since biologically inspired 3D keypoint detection based on bottom-up saliency is slower than the other detectors. Given that there are big differences in terms of recognition performance, size and time requirements, the selection of the keypoint detector and descriptor has to be matched to the desired task, and we give some directions to facilitate this choice.
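As a rough illustration of the bottom-up scheme described in the abstract, the sketch below fuses per-point conspicuity values into a single saliency map and then selects keypoints sequentially with a simple inhibition-of-return step. The equal-weight fusion, the neighbourhood radius and all names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: saliency fusion + sequential keypoint selection with
# inhibition of return on a point cloud. Parameters are placeholder assumptions.
import numpy as np

def select_keypoints(points, intensity_csp, color_csp, orientation_csp,
                     n_keypoints=50, inhibition_radius=0.05):
    """points: (N, 3) point cloud; *_csp: (N,) conspicuity values in [0, 1]."""
    # Fuse the three conspicuity maps into a single saliency map (equal weights).
    saliency = (intensity_csp + color_csp + orientation_csp) / 3.0
    keypoints = []
    for _ in range(n_keypoints):
        idx = int(np.argmax(saliency))        # focus of attention: most salient point
        if saliency[idx] <= 0:
            break
        keypoints.append(idx)
        # Inhibition of return: suppress saliency around the selected point so
        # attention moves on to the next most salient region.
        dist = np.linalg.norm(points - points[idx], axis=1)
        saliency[dist < inhibition_radius] = 0.0
    return np.array(keypoints)
```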


international conference on image analysis and processing | 2015

Quis-Campi: Extending in the Wild Biometric Recognition to Surveillance Environments

João C. Neves; Gil Melfe Mateus Santos; Sílvio Filipe; Emanuel Grancho; Silvio Barra; Fabio Narducci; Hugo Proença

Efforts in biometrics are being directed toward extending robust recognition techniques to in-the-wild scenarios. Nonetheless, and despite being a very attractive goal, human identification in the surveillance context remains an open problem. In this paper, we introduce a novel biometric system, Quis-Campi, that effectively bridges the gap between surveillance and biometric recognition while imposing a minimal amount of operational restrictions. We propose a fully automated surveillance system for human recognition purposes, attained by combining human detection and tracking, further enhanced by a PTZ camera that delivers data with enough quality to perform biometric recognition. Along with the system concept, implementation details for both hardware and software modules are provided, as well as preliminary results over a real scenario.


iberoamerican congress on pattern recognition | 2010

Improving face segmentation in thermograms using image signatures

Sílvio Filipe; Luís A. Alexandre

The aim of this paper is to present a method for the automatic segmentation of face images captured in Long Wavelength Infrared (LWIR), allowing for a large range of face rotations and expressions. The motivation behind this effort is to enable better performance of face recognition methods in thermal Infrared (IR) images. The proposed method consists of modelling background and face pixels by two normal distributions each, followed by a post-processing step of face dilation for closing holes and delimitation based on vertical and horizontal image signatures. Our experiments were performed on images from the University of Notre Dame (UND) and Florida State University (FSU) databases. The obtained results improve on previously existing methods by 2.8% to more than 25%, depending on the method and database.
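A hedged sketch of the pipeline outlined above: per-class two-Gaussian likelihoods for pixel classification, a dilation step, and row/column signatures to delimit the face region. The mixture parameters and signature threshold are placeholders; the paper estimates them from data.

```python
# Rough sketch of the described segmentation flow; parameters are assumptions.
import numpy as np
from scipy.stats import norm
from scipy.ndimage import binary_dilation

def segment_face(thermogram, face_params, bg_params, signature_frac=0.1):
    """thermogram: 2D array of pixel intensities.
    face_params / bg_params: [(mean, std, weight), (mean, std, weight)]."""
    def mixture_pdf(x, params):
        return sum(w * norm.pdf(x, mu, sd) for mu, sd, w in params)

    # 1. Classify each pixel by comparing the two per-class likelihoods.
    mask = mixture_pdf(thermogram, face_params) > mixture_pdf(thermogram, bg_params)

    # 2. Dilate the face mask to close holes (glasses, facial hair, ...).
    mask = binary_dilation(mask, iterations=3)

    # 3. Vertical and horizontal signatures: keep only rows/columns whose face
    #    pixel count exceeds a fraction of the strongest row/column.
    rows, cols = mask.sum(axis=1), mask.sum(axis=0)
    mask[rows <= signature_frac * rows.max(), :] = False
    mask[:, cols <= signature_frac * cols.max()] = False
    return mask
```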


international conference on signal processing | 2008

Combining rectangular and triangular image regions to perform real-time face detection

Hugo Proença; Sílvio Filipe

Nowadays, face detection techniques assume growing relevance in a wide range of applications (e.g., biometrics and automatic surveillance) and constitute a pre-requisite of many image processing stages. Among a large number of published approaches, one of the most relevant is the method proposed by Viola and Jones to perform real-time face detection through a cascade scheme of weak classifiers that act together to compose a strong and robust classifier. This method was the basis of our work and motivated the key contributions given in this paper. At first, based on the computer graphics concept of “triangle mesh”, we propose the notion of “triangular integral feature” to describe and model face properties. Also, we show results of our face detection experiments that point to an increase in detection accuracy when the triangular features are mixed with the rectangular ones in the candidate feature set, which we consider an achievement. Also, it should be stressed that this optimization is obtained without any relevant increase in the computational requirements, either spatial or temporal, of the detection method.
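For context, the sketch below shows the integral-image trick that lets the rectangular Viola-Jones features be evaluated in constant time; the triangular integral features proposed in the paper extend the same idea to triangular regions and are not reproduced here.

```python
# Integral-image evaluation of a simple rectangular Haar-like feature.
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns, zero-padded so lookups stay simple."""
    ii = np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, top, left, height, width):
    """Sum of pixels in a rectangle from four integral-image lookups."""
    b, r = top + height, left + width
    return ii[b, r] - ii[top, r] - ii[b, left] + ii[top, left]

def two_rect_feature(ii, top, left, height, width):
    """A Haar-like feature: difference between the left and right halves."""
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, width - half))
```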


iberian conference on pattern recognition and image analysis | 2013

Thermal Infrared Face Segmentation: A New Pose Invariant Method

Sílvio Filipe; Luís A. Alexandre

This paper presents a method for automatic segmentation of face images captured in Long Wavelength Infrared (LWIR), allowing a wide range of face rotations, expressions and artifacts (such as glasses and hats). The paper presents a novel, highly accurate approach and compares its performance against three other previously published methods. The proposed approach is based on statistical modeling of pixel intensities and active contour application, although several other image processing operations are also performed. Experiments were performed on a total of 699 test images from three publicly available databases. The obtained results improve on previously existing methods by up to 29.5% for the first error measure (E1) and up to 34.7% for the second measure (E2), depending on the method and database.
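As a loose illustration of the "statistical modeling plus active contour" flow, the sketch below initialises a mask from the warmest pixels and refines it with scikit-image's morphological Chan-Vese model, which stands in for the paper's own active contour formulation; the percentile threshold is an arbitrary placeholder.

```python
# Hedged sketch: intensity-based initialisation refined by an active contour.
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_thermal_face(img, n_iter=100, warm_percentile=70):
    # Coarse statistical initialisation: faces are warmer than the background,
    # so start from the warmest pixels (placeholder threshold).
    init = img > np.percentile(img, warm_percentile)
    # Active contour refinement (second positional argument = iteration count).
    return morphological_chan_vese(img, n_iter, init_level_set=init)
```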


international symposium on visual computing | 2014

A Biological Motivated Multi-scale Keypoint Detector for local 3D Descriptors

Sílvio Filipe; Luís A. Alexandre

Most object recognition algorithms use a large number of descriptors extracted in a dense grid, so they have a very high computational cost, preventing real-time processing. The use of keypoint detectors allows the reduction of the processing time and the amount of redundancy in the data. Local descriptors extracted from images have been extensively reported in the computer vision literature. In this paper, we present a keypoint detector inspired by the behavior of the early visual system. Our method is a color extension of the BIMP keypoint detector, where we include both the color and intensity channels of an image. The color information is included in a biologically plausible way and reproduces the color processing performed in the retina. Multi-scale image features are combined into a single keypoint map. Our detector is compared against state-of-the-art detectors and is particularly well-suited for tasks such as category and object recognition. The evaluation allowed us to obtain the best keypoint detector/descriptor pair on an RGB-D object dataset. Using our keypoint detector with the SHOTCOLOR descriptor we obtain a good category recognition rate, while for object recognition the best results are obtained with the PFHRGB descriptor.
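The snippet below illustrates the kind of retina-like colour opponent channels such a detector can feed alongside intensity; it is an assumption based on standard opponent-colour computations, and the actual BIMP filtering and multi-scale fusion are not reproduced.

```python
# Illustrative opponent-colour channels (red-green, blue-yellow) plus intensity.
import numpy as np

def opponent_channels(rgb):
    """rgb: (H, W, 3) float image in [0, 1]. Returns intensity, RG and BY maps."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    rg = r - g                      # red-green opponency
    by = b - (r + g) / 2.0          # blue-yellow opponency
    return intensity, rg, by
```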


2014 IEEE Symposium on Computational Intelligence for Multimedia, Signal and Vision Processing (CIMSIVP) | 2014

PFBIK-tracking: Particle filter with bio-inspired keypoints tracking

Sílvio Filipe; Luís A. Alexandre

In this paper, we propose a robust detection and tracking method for 3D objects that uses keypoint information in a particle filter. Our method consists of three distinct steps: Segmentation, Tracking Initialization and Tracking. The segmentation removes all the background information, reducing the number of points for further processing. In the initialization, we use a keypoint detector with biological inspiration; the extracted keypoints describe the object that we want to follow. The particle filter tracks the keypoints, so we can predict where they will be in the next frame. One of the problems in a recognition system is the computational cost of keypoint detectors, which we intend to address with this approach. The experiments with the PFBIK-Tracking method are done indoors in an office/home environment, where personal robots are expected to operate. We quantitatively evaluate the stability of the tracking method using a “Tracking Error” computed from the keypoint and particle centroids. Comparing our system with the tracking method available in the Point Cloud Library, we achieve better results with a much smaller number of points and lower computational time. Our method is faster and more robust to occlusion than the OpenniTracker.
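A minimal bootstrap particle filter over the keypoint centroid is sketched below to illustrate the predict/update loop; the random-walk motion model, noise levels and Gaussian likelihood are simplifying assumptions rather than the paper's exact formulation.

```python
# Bootstrap particle filter tracking a 3D centroid; all parameters are assumptions.
import numpy as np

class CentroidParticleFilter:
    def __init__(self, init_pos, n_particles=200, motion_std=0.02, obs_std=0.05):
        self.particles = init_pos + motion_std * np.random.randn(n_particles, 3)
        self.weights = np.full(n_particles, 1.0 / n_particles)
        self.motion_std, self.obs_std = motion_std, obs_std

    def predict(self):
        # Random-walk motion model: diffuse particles before the next observation.
        self.particles += self.motion_std * np.random.randn(*self.particles.shape)

    def update(self, keypoints):
        # Observation: centroid of the keypoints detected in the current frame.
        obs = keypoints.mean(axis=0)
        d2 = np.sum((self.particles - obs) ** 2, axis=1)
        self.weights = np.exp(-0.5 * d2 / self.obs_std ** 2)
        self.weights /= self.weights.sum()
        # Resampling keeps the particle set focused on likely states.
        idx = np.random.choice(len(self.particles), len(self.particles), p=self.weights)
        self.particles = self.particles[idx]
        self.weights.fill(1.0 / len(self.particles))

    def estimate(self):
        return self.particles.mean(axis=0)
```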


International Conference on Computer Vision Theory and Applications (VISAPP), 2014 | 2015

A comparative evaluation of 3D keypoint detectors in a RGB-D Object Dataset

Sílvio Filipe; Luís A. Alexandre

Collaboration


Dive into Sílvio Filipe's collaborations.

Top Co-Authors

Luís A. Alexandre (University of Beira Interior)
Hugo Proença (University of Beira Interior)
Emanuel Grancho (University of Beira Interior)
João C. Neves (University of Beira Interior)
João Oliveira (University of Beira Interior)
R. S. Santos (University of Beira Interior)
Laurent Itti (University of Southern California)