Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Miguel Farrajota is active.

Publication


Featured research published by Miguel Farrajota.


International Conference on Computer Vision Systems | 2013

Biological models for active vision: towards a unified architecture

Kasim Terzić; David Lobato; Mário Saleiro; Jaime A. Martins; Miguel Farrajota; J. M. F. Rodrigues; J. M. H. du Buf

Building a general-purpose, real-time active vision system completely based on biological models is a great challenge. We integrate a number of biologically plausible algorithms, addressing different aspects of vision such as edge and keypoint detection, feature extraction, optical flow and disparity, shape detection, object recognition and scene modelling, into a complete system. We present some experiments from our ongoing work, in which our system leverages a combination of algorithms to solve complex tasks.


International Conference on Universal Access in Human-Computer Interaction | 2016

A Deep Neural Network Video Framework for Monitoring Elderly Persons

Miguel Farrajota; J. M. F. Rodrigues; J. M. H. du Buf

The rapidly increasing population of elderly persons is a phenomenon which affects almost the entire world. Although there are many telecare systems that can be used to monitor senior persons, none integrates one key requirement: detection of abnormal behavior related to chronic or new ailments. This paper presents a framework based on deep neural networks for detecting and tracking people in known environments, using one or more cameras. Video frames are fed into a convolutional network, and faces and upper/full bodies are detected in a single forward pass through the network. Persons are recognized and tracked by using a Siamese network which compares faces and/or bodies in previous frames with those in the current frame. This allows the system to monitor the persons in the environment. By taking advantage of parallel processing of ConvNets with GPUs, the system runs in real time on an NVIDIA Titan board, performing all of the above tasks simultaneously. This framework provides the basic infrastructure for future pose inference and gait tracking, in order to detect abnormal behavior and, if necessary, to trigger timely assistance by caregivers.
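The Siamese matching step can be illustrated with a minimal sketch: embeddings from previous frames are compared against a new detection's embedding by Euclidean distance, and a match below a threshold re-identifies a tracked person. The embedding network itself is stubbed out; the function name and threshold value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def match_identity(query_emb, gallery_embs, threshold=1.0):
    """Compare a detection's embedding against embeddings from
    previous frames; return the index of the closest match, or
    None if every distance exceeds the threshold (new person)."""
    dists = np.linalg.norm(gallery_embs - query_emb, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None

# Toy example: three tracked persons, one query detection.
gallery = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])
query = np.array([4.8, 5.1])
print(match_identity(query, gallery))  # matches person 1
```

In practice the gallery would hold embeddings produced by the shared-weight branches of the Siamese network, updated as new frames arrive.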


Iberian Conference on Pattern Recognition and Image Analysis | 2017

Human Pose Estimation by a Series of Residual Auto-Encoders

Miguel Farrajota; J. M. F. Rodrigues; J. M. H. du Buf

Pose estimation is the task of predicting the pose of an object in an image or in a sequence of images. Here, we focus on articulated human pose estimation in scenes with a single person. We employ a series of residual auto-encoders to produce multiple predictions which are then combined to provide a heatmap prediction of body joints. In this network topology, features are processed across all scales, capturing the various spatial relationships associated with the body. Repeated bottom-up and top-down processing with intermediate supervision for each auto-encoder network is applied. We propose some improvements to this type of regression-based network to further increase performance, namely: (a) increase the number of parameters of the auto-encoder networks in the pipeline, (b) use stronger regularization along with heavy data augmentation, (c) use sub-pixel precision for more precise joint localization, and (d) combine the output heatmaps of all auto-encoders into a single prediction, which further increases body joint prediction accuracy. We demonstrate state-of-the-art results on the popular FLIC and LSP datasets.
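Steps (c) and (d) can be sketched as averaging the per-auto-encoder heatmaps and refining the argmax with a quarter-pixel offset toward the stronger neighbour, a common sub-pixel heuristic; the exact scheme used in the paper may differ.

```python
import numpy as np

def joint_from_heatmaps(heatmaps):
    """heatmaps: (N, H, W) array, one heatmap per auto-encoder.
    Returns a (row, col) joint location with sub-pixel refinement."""
    hm = heatmaps.mean(axis=0)              # (d) combine predictions
    r, c = np.unravel_index(np.argmax(hm), hm.shape)
    # (c) quarter-pixel offset toward the larger neighbour, per axis
    row, col = float(r), float(c)
    if 0 < r < hm.shape[0] - 1:
        row += 0.25 * np.sign(hm[r + 1, c] - hm[r - 1, c])
    if 0 < c < hm.shape[1] - 1:
        col += 0.25 * np.sign(hm[r, c + 1] - hm[r, c - 1])
    return row, col

hm = np.zeros((2, 5, 5))
hm[:, 2, 2] = 1.0
hm[:, 2, 3] = 0.5   # pulls the column estimate to the right
print(joint_from_heatmaps(hm))  # (2.0, 2.25)
```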


International Conference on Universal Access in Human-Computer Interaction | 2015

Biologically Inspired Vision for Human-Robot Interaction

Mário Saleiro; Miguel Farrajota; Kasim Terzić; Sai Krishna; J. M. F. Rodrigues; J. M. Hans du Buf

Human-robot interaction is an interdisciplinary research area that is becoming more and more relevant as robots start to enter our homes, workplaces, schools, etc. In order to navigate safely among us, robots must be able to understand human behavior, to communicate, and to interpret instructions from humans, either by recognizing their speech or by understanding their body movements and gestures. We present a biologically inspired vision system for human-robot interaction which integrates several components: visual saliency, stereo vision, face and hand detection and gesture recognition. Visual saliency is computed using color, motion and disparity. Both the stereo vision and gesture recognition components are based on keypoints coded by means of cortical V1 simple, complex and end-stopped cells. Hand and face detection is achieved by using a linear SVM classifier. The system was tested on a child-sized robot.
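The saliency stage can be illustrated by a minimal sketch: normalize each feature map (colour, motion, disparity) and combine them into one conspicuity map. The equal weighting and min-max normalization here are illustrative assumptions; the paper's cortical keypoint-based model is far richer.

```python
import numpy as np

def saliency(maps, weights=None):
    """maps: list of (H, W) feature maps, e.g. colour, motion, disparity.
    Each map is min-max normalized to [0, 1], then the weighted mean
    is taken as a single saliency map."""
    norm = []
    for m in maps:
        rng = m.max() - m.min()
        norm.append((m - m.min()) / rng if rng > 0 else np.zeros_like(m))
    w = np.ones(len(maps)) if weights is None else np.asarray(weights, float)
    return np.tensordot(w / w.sum(), np.stack(norm), axes=1)

colour = np.random.rand(48, 64)
motion = np.random.rand(48, 64)
disparity = np.random.rand(48, 64)
s = saliency([colour, motion, disparity])
print(s.shape)  # (48, 64), values in [0, 1]
```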


Pattern Analysis and Applications | 2018

Human action recognition in videos with articulated pose information by deep networks

Miguel Farrajota; J. M. F. Rodrigues; J. M. H. du Buf

Action recognition is of great importance in understanding human motion from video. It is an important topic in computer vision due to its many applications, such as video surveillance, human–machine interaction and video retrieval. One key problem is to automatically recognize low-level actions and high-level activities of interest. This paper proposes a way to cope with low-level actions by combining information about human body joints to aid action recognition. This is achieved by using high-level features computed by a convolutional neural network, which was pre-trained on Imagenet, with articulated body joints as low-level features. These features are then used to feed a Long Short-Term Memory network to learn the temporal dependencies of an action. For pose prediction, we focus on articulated relations between body joints. We employ a series of residual auto-encoders to produce multiple predictions which are then combined to provide a likelihood map of body joints. In the network topology, features are processed across all scales, capturing the various spatial relationships associated with the body. Repeated bottom-up and top-down processing with intermediate supervision of each auto-encoder network is applied. We demonstrate state-of-the-art results on the popular FLIC, LSP and UCF Sports datasets.
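The feature pipeline can be sketched as assembling, per frame, the high-level ConvNet descriptor together with the flattened body-joint coordinates into one sequence that the LSTM consumes. All dimensions and names below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def build_sequence(cnn_feats, joints):
    """cnn_feats: (T, D) high-level ConvNet features, one per frame.
    joints: (T, J, 2) predicted (x, y) body-joint positions per frame.
    Returns (T, D + 2*J): the per-frame input vectors for an LSTM."""
    T = cnn_feats.shape[0]
    pose = joints.reshape(T, -1)   # flatten joint coordinates per frame
    return np.concatenate([cnn_feats, pose], axis=1)

# 16 frames, a 4096-dim appearance feature, 14 body joints per frame.
seq = build_sequence(np.zeros((16, 4096)), np.zeros((16, 14, 2)))
print(seq.shape)  # (16, 4124)
```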


Proceedings of the 7th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion | 2016

Using Multi-Stage Features in Fast R-CNN for Pedestrian Detection

Miguel Farrajota; J. M. F. Rodrigues; J. M. H. du Buf

Pedestrian detection and tracking remains a popular issue in computer vision, with many applications in robotics, surveillance, security and telecare systems, especially when connected with Smart Cities and Smart Destinations. As a particular case of object detection, pedestrian detection is in general a difficult task due to the large variability of features caused by different scales, views and occlusion. Typically, smaller and occluded pedestrians are hard to detect because they offer fewer discriminative features compared to large-size, visible pedestrians. In order to overcome this, we use convolutional features from different stages in a deep Convolutional Neural Network (CNN), with the idea of combining more global features with finer details. In this paper we present an object detection framework based on multi-stage convolutional features for pedestrian detection. This framework extends the Fast R-CNN framework by combining several convolutional features from different stages of the CNN to improve the network's detection accuracy. The Caltech Pedestrian dataset was used to train and evaluate the proposed method.
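The multi-stage idea can be sketched as pooling each stage's feature map over a region of interest and concatenating the results, so global and fine-grained features feed the detector together. This is a simplification of Fast R-CNN's RoI pooling; the stage shapes, grid size and names are illustrative assumptions.

```python
import numpy as np

def roi_pool(feat, roi, out=2):
    """Max-pool the feature map 'feat' (C, H, W) over the box
    roi = (r0, c0, r1, c1) into an out x out grid per channel."""
    r0, c0, r1, c1 = roi
    patch = feat[:, r0:r1, c0:c1]
    C, H, W = patch.shape
    pooled = np.empty((C, out, out))
    for i in range(out):
        for j in range(out):
            pooled[:, i, j] = patch[:, i*H//out:(i+1)*H//out,
                                       j*W//out:(j+1)*W//out].max(axis=(1, 2))
    return pooled

# Combine pooled features from an early stage (finer details)
# and a late stage (more global features) for one region.
early = np.random.rand(64, 32, 32)
late = np.random.rand(256, 32, 32)
roi = (4, 4, 20, 20)
combined = np.concatenate(
    [roi_pool(early, roi).ravel(), roi_pool(late, roi).ravel()])
print(combined.shape)  # (1280,)
```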


International Conference on Pattern Recognition Applications and Methods | 2014

Region Segregation by Linking Keypoints Tuned to Colour

Miguel Farrajota; J. M. F. Rodrigues; J. M. H. du Buf

Coloured regions can be segregated from each other by using colour-opponent mechanisms, colour contrast, saturation and luminance. Here we address segmentation by using end-stopped cells tuned to colour instead of to colour contrast. Colour information is coded in separate channels. By using multi-scale cortical end-stopped cells tuned to colour, keypoint information in all channels is coded and mapped by multi-scale peaks. Unsupervised segmentation is achieved by analysing the branches of these peaks, which yields the best-fitting image regions.


International Journal of Digital Content Technology and Its Applications | 2011

The SmartVision local navigation aid for blind and visually impaired persons

João José; Miguel Farrajota; J. M. F. Rodrigues; J. M. H. du Buf


International Journal of Digital Content Technology and Its Applications | 2011

The SmartVision Navigation Prototype for Blind Users

J. M. H. du Buf; João Barroso; J. M. F. Rodrigues; Hugo Paredes; Miguel Farrajota; Hugo Fernandes; João José; Victor Teixeira; Mário Saleiro


International Conference on Bio-inspired Systems and Signal Processing | 2011

Optical flow by multi-scale annotated keypoints: A biological approach

Miguel Farrajota; J. M. F. Rodrigues; J. M. H. du Buf

Collaboration


Dive into Miguel Farrajota's collaborations.

Top Co-Authors

J. M. H. du Buf (University of the Algarve)
Mário Saleiro (University of the Algarve)
João José (University of the Algarve)
Kasim Terzić (University of the Algarve)
Hugo Fernandes (University of Trás-os-Montes and Alto Douro)
J. C. Martins (University of the Algarve)
João Barroso (University of Trás-os-Montes and Alto Douro)
Hugo Paredes (San Diego State University)