Publication


Featured research published by Francisco Barranco.


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2012

Parallel Architecture for Hierarchical Optical Flow Estimation Based on FPGA

Francisco Barranco; Matteo Tomasi; Javier Díaz; Mauricio Vanegas; Eduardo Ros

This work presents a highly parallel architecture for motion estimation. Our system implements the well-known Lucas and Kanade algorithm with a multi-scale extension for the estimation of large motions on a dedicated device [a field-programmable gate array (FPGA)]. The system achieves 270 frames per second for 640 × 480 resolution in the best case of the mono-scale implementation and 32 frames per second for the multi-scale one, fulfilling the requirements of a real-time system. We describe the system architecture, evaluate its accuracy on well-known benchmark sequences (including a comparative study), and report the main hardware resources used.
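
For readers who want the algorithmic idea in software form, below is a minimal NumPy/SciPy sketch of multi-scale (pyramidal) Lucas-Kanade. It illustrates the coarse-to-fine scheme the paper maps onto FPGA hardware, not the authors' circuit; the window size, pyramid depth, and conditioning threshold are illustrative assumptions.

```python
# Minimal coarse-to-fine Lucas-Kanade sketch (float grayscale images assumed).
# Illustrates the multi-scale scheme, NOT the paper's FPGA design.
import numpy as np
from scipy.ndimage import map_coordinates, uniform_filter, zoom

def lucas_kanade(I0, I1, win=7):
    """Single-scale LK: solve the 2x2 normal equations at every pixel."""
    Iy, Ix = np.gradient(I0)
    It = I1 - I0
    # Windowed sums of the structure-tensor terms via box filtering.
    Sxx = uniform_filter(Ix * Ix, win)
    Sxy = uniform_filter(Ix * Iy, win)
    Syy = uniform_filter(Iy * Iy, win)
    Sxt = uniform_filter(Ix * It, win)
    Syt = uniform_filter(Iy * It, win)
    det = Sxx * Syy - Sxy ** 2
    det = np.where(np.abs(det) < 1e-6, np.inf, det)  # reject flat windows
    u = (-Syy * Sxt + Sxy * Syt) / det
    v = ( Sxy * Sxt - Sxx * Syt) / det
    return u, v

def pyramidal_lk(I0, I1, levels=4, win=7):
    """Coarse-to-fine: estimate, upsample (flow doubles), warp, refine."""
    u = v = None
    for lev in range(levels - 1, -1, -1):
        J0, J1 = zoom(I0, 2.0 ** -lev), zoom(I1, 2.0 ** -lev)
        if u is None:
            u, v = np.zeros_like(J0), np.zeros_like(J0)
        else:
            factor = np.array(J0.shape) / np.array(u.shape)
            u = 2.0 * zoom(u, factor)   # flow magnitude doubles per level
            v = 2.0 * zoom(v, factor)
        # Warp frame 1 toward frame 0 with the current estimate, then refine.
        yy, xx = np.mgrid[0:J0.shape[0], 0:J0.shape[1]].astype(float)
        J1w = map_coordinates(J1, [yy + v, xx + u], order=1, mode='nearest')
        du, dv = lucas_kanade(J0, J1w, win)
        u, v = u + du, v + dv
    return u, v
```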


IEEE Transactions on Systems, Man, and Cybernetics | 2009

Visual System Based on Artificial Retina for Motion Detection

Francisco Barranco; Javier Díaz; Eduardo Ros; B. del Pino

We present a bioinspired model for detecting spatiotemporal features based on artificial retina response models. Event-driven processing is implemented using four kinds of cells that encode image contrast and temporal information. We evaluate how the accuracy of motion processing depends on local contrast by using a multiscale, rank-order coding scheme to select the most important cues from retinal inputs. We also develop alternatives that integrate temporal feature results, obtaining an improved bioinspired matching algorithm with high stability, low error, and low cost. Finally, we define a dynamic and versatile multimodal attention operator that drives the system to focus on different target features such as motion, color, and texture.
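
As a rough software analogue of such cell responses, the sketch below computes ON/OFF spatial-contrast maps with a difference-of-Gaussians and ON/OFF temporal-change maps by frame differencing. It is an illustration of the general retina-cell idea, not the paper's model; the sigmas and threshold are assumptions.

```python
# Rough analogue of retina-like cell responses: ON/OFF spatial contrast via a
# difference-of-Gaussians, ON/OFF temporal change by frame differencing.
# Illustrative only; sigmas and threshold are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def retina_responses(prev, curr, sigma_c=1.0, sigma_s=3.0, thresh=0.05):
    """prev, curr: consecutive float grayscale frames."""
    dog = gaussian_filter(curr, sigma_c) - gaussian_filter(curr, sigma_s)
    dt = curr - prev
    return {
        'on_contrast':  np.maximum(dog, 0.0),   # bright-center cells
        'off_contrast': np.maximum(-dog, 0.0),  # dark-center cells
        'on_temporal':  (dt >  thresh),         # brightening events
        'off_temporal': (dt < -thresh),         # darkening events
    }
```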


IEEE Transactions on Circuits and Systems for Video Technology | 2010

High-Performance Optical-Flow Architecture Based on a Multi-Scale, Multi-Orientation Phase-Based Model

Matteo Tomasi; Mauricio Vanegas; Francisco Barranco; Javier Díaz; Eduardo Ros

Accurate estimation of optical flow is a long-standing problem in computer vision, and researchers in this field are devoting their efforts to formulating reliable and robust algorithms for real-life applications. These approaches need to be evaluated, especially in controlled scenarios. Phase-based methods have generally been adopted in the various techniques developed to date because of their stability, although their high computational load makes their viability in real-time systems difficult to guarantee. We describe here the implementation of a phase-based optical-flow algorithm on a field-programmable gate array (FPGA) device. The system benefits from the stability of phase information and from sub-pixel accuracy without requiring additional computation, while achieving high-performance computation by taking full advantage of the parallel processing resources of FPGA devices. Furthermore, the architecture extends to a multi-resolution, multi-orientation design, which enhances accuracy and covers a wide range of detected velocities. A deeply pipelined datapath with superscalar computing units at different stages allows real-time processing beyond VGA image resolution. The final circuit is of significant complexity and is useful for a wide range of fields requiring portable optical-flow processing engines.
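
The relation underlying this family of phase-based methods, summarized here for clarity, is phase constancy: the local phase of a quadrature filter response is conserved along the motion trajectory, so differentiating it yields the component velocity measurable from each oriented filter.

```latex
% Phase constancy along the trajectory: \frac{d\phi}{dt} = 0, hence
%   \nabla\phi \cdot \mathbf{v} + \phi_t = 0 .
% The component (normal) velocity from one oriented filter is therefore
v_\perp = -\frac{\phi_t}{\lVert \nabla\phi \rVert},
\qquad \nabla\phi = (\phi_x,\, \phi_y)^{\top},
% and the full 2D velocity is recovered by combining component velocities
% from several orientations (e.g., by least squares).
```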


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Massive Parallel-Hardware Architecture for Multiscale Stereo, Optical Flow and Image-Structure Computation

Matteo Tomasi; Mauricio Vanegas; Francisco Barranco; Javier Díaz; Eduardo Ros

Low-level vision tasks pose an outstanding challenge in terms of computational effort: pixel-wise operations require high-performance architectures to achieve real-time processing. Diverse technologies now permit a high level of parallelism, allowing researchers to address increasingly complex on-chip low-level vision-feature extraction. In the state of the art, different architectures have been described that process single vision modes in real time, but multiple computer-vision modes are seldom computed jointly on a single device to produce a general-purpose on-chip low-level vision system that can serve as the basis for mid-level or high-level vision tasks. We present here a novel architecture for multiple-vision-feature extraction that includes multiscale optical flow, disparity, energy, orientation, and phase. A high degree of robustness in real-life situations is obtained by adopting phase-based models, at the cost of relatively high computing-resource requirements. The high flexibility of the reconfigurable devices used allows the exploration of different hardware configurations to meet final target and user requirements. Using this novel architecture and hardware-sharing techniques, we describe a co-processing board implementation as a case study. It reaches an outstanding computing power of 92.3 GOPS with high power efficiency (approximately 12.9 GOPS/W).
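
The hardware sharing exploited here has a simple signal-processing root: a single bank of complex (quadrature) filter responses yields energy, phase, and orientation at once, so one filtering front end can feed several vision engines. A minimal software sketch of that shared front end follows; the Gabor parameters and orientation count are illustrative assumptions, not the FPGA datapath.

```python
# One complex Gabor filter bank feeding several low-level cues at once:
# magnitude -> energy, argument -> phase, argmax over orientations ->
# dominant orientation. Illustrative sketch; parameters are assumptions.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(f0, theta, sigma=4.0, size=21):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.exp(2j * np.pi * f0 * xr)

def shared_front_end(img, f0=0.1, n_orient=8):
    thetas = np.pi * np.arange(n_orient) / n_orient
    resp = np.stack([fftconvolve(img, gabor_kernel(f0, t), mode='same')
                     for t in thetas])               # (n_orient, H, W) complex
    energy = np.abs(resp)                            # local energy per channel
    phase = np.angle(resp)                           # local phase per channel
    orientation = thetas[np.argmax(energy, axis=0)]  # dominant orientation map
    return energy, phase, orientation
```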


Proceedings of the IEEE | 2014

Contour Motion Estimation for Asynchronous Event-Driven Cameras

Francisco Barranco; Cornelia Fermüller; Yiannis Aloimonos

This paper compares image motion estimation with asynchronous event-based cameras to conventional computer vision approaches that take frame-based video sequences as input. Since dynamic events are triggered at significant intensity changes, which often occur at object borders, we refer to the event-based image motion as “contour motion.” Algorithms are presented for the estimation of accurate contour motion from local spatio-temporal information for two camera models: the dynamic vision sensor (DVS), which asynchronously records temporal changes of the luminance, and a family of new sensors that combine DVS data with intensity signals. These algorithms take advantage of the high temporal resolution of the DVS and achieve robustness using a multiresolution scheme in time. It is shown that, because of the coupling of velocity and luminance information in the event distribution, the image motion estimation problem becomes much easier with the new sensors, which provide both events and image intensity, than with the DVS alone. Experiments on data synthesized from computer vision benchmarks show that our algorithm on the combined data outperforms computer vision methods in accuracy and can achieve real-time performance, and experiments on real data confirm the feasibility of the approach. Given that current image motion (or optic flow) methods estimate poorly at object boundaries, the approach presented here could be used as a complement to optic flow techniques and can open new avenues for computer vision motion research.
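
To make the event-based setting concrete, here is one common way (shown for illustration only; this is NOT the paper's algorithm) to estimate contour-normal motion from DVS events: fit a local plane to the timestamp surface of recent events, whose gradient is the inverse of the normal velocity.

```python
# Local plane fit t = a*x + b*y + c on recent events in a neighbourhood;
# the gradient (a, b) of the timestamp surface inverts to the velocity
# along the contour normal. Illustrative, not the paper's method.
import numpy as np

def normal_flow_from_events(xs, ys, ts):
    """xs, ys in pixels, ts in seconds, all from one local neighbourhood."""
    A = np.column_stack([xs, ys, np.ones_like(xs, dtype=float)])
    (a, b, _), *_ = np.linalg.lstsq(A, ts, rcond=None)
    g2 = a * a + b * b
    if g2 < 1e-12:
        return np.zeros(2)          # no measurable motion here
    return np.array([a, b]) / g2    # pixels/second along the contour normal
```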


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2012

Real-Time Architecture for a Robust Multi-Scale Stereo Engine on FPGA

Matteo Tomasi; Mauricio Vanegas; Francisco Barranco; Javier Díaz; Eduardo Ros

In this work, we present a real-time implementation of a stereo algorithm on a field-programmable gate array (FPGA). The approach is a phase-based model that allows computation with sub-pixel accuracy. The algorithm uses a robust multi-scale, multi-orientation method that optimizes the disparity estimation with respect to the local image-structure support. Compared with previous approaches in the state of the art, our work increases the on-chip computing power in order to obtain good accuracy over a large disparity range. In addition, our approach is especially suited for applications in unconstrained environments thanks to the robustness of phase information, which can cope with severe illumination changes and with small affine deformations between the image pair. This work also includes image-rectification circuitry in order to exploit the epipolar constraints on chip. The dedicated circuit can rectify and process images of VGA resolution at a frame rate of 57 fps. The implementation uses a finely pipelined design (also with superscalar units) and multiple user-defined parameters, which lead to a high working frequency and good adaptability to different scenarios. We present different results and compare them with state-of-the-art approaches.
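
Phase-based disparity rests on the same principle as phase-based flow, applied across the stereo pair instead of across time: the left-right phase difference of a quadrature filter, divided by the filter's peak spatial frequency, gives a sub-pixel shift estimate. A 1D scanline sketch follows; it is not the multi-scale, multi-orientation FPGA design, and the filter frequency and width are assumptions.

```python
# 1D phase-difference disparity along scanlines: disparity ~ wrapped phase
# difference divided by the filter's peak spatial frequency (sub-pixel by
# construction). Illustrative sketch; f0 and sigma are assumptions.
import numpy as np

def gabor_1d(f0=0.1, sigma=6.0, size=33):
    x = np.arange(size) - size // 2
    return np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * x)

def phase_disparity(left, right, f0=0.1):
    """left, right: rectified float grayscale images (rows are scanlines)."""
    g = gabor_1d(f0)
    L = np.apply_along_axis(lambda r: np.convolve(r, g, mode='same'), 1, left)
    R = np.apply_along_axis(lambda r: np.convolve(r, g, mode='same'), 1, right)
    dphi = np.angle(L * np.conj(R))       # wrapped left-right phase difference
    return dphi / (2 * np.pi * f0)        # disparity in pixels
```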


Frontiers in Neuroscience | 2016

A Dataset for Visual Navigation with Neuromorphic Methods.

Francisco Barranco; Cornelia Fermüller; Yiannis Aloimonos; Tobi Delbruck

Standardized benchmarks in computer vision have greatly contributed to the advance of approaches to many problems in the field. If we want to enhance the visibility of event-driven vision and increase its impact, we will need benchmarks that allow comparison among different neuromorphic methods as well as comparison to conventional computer vision approaches. We present datasets to evaluate the accuracy of frame-free and frame-based approaches for tasks of visual navigation. As in conventional computer vision datasets, we provide synthetic and real scenes: the synthetic data were created with graphics packages, and the real data were recorded using a mobile robotic platform carrying a dynamic and active-pixel vision sensor (DAVIS) and an RGB+Depth sensor. For both datasets the cameras move with a rigid motion in a static scene, and the data include the images, events, optic flow, 3D camera motion, and the depth of the scene, along with calibration procedures. Finally, we also provide simulated event data generated synthetically from well-known frame-based optical flow datasets.
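
Simulated event generation of the kind mentioned typically follows the standard DVS model: a pixel emits an event whenever its log intensity moves by more than a contrast threshold since its last event. A simplified per-frame sketch of that process is below; the threshold value is an assumption, and real simulators also interpolate timestamps between frames rather than reusing the frame time.

```python
# Simplified DVS event simulation from a frame sequence. A pixel fires one
# event per crossing of the contrast threshold C in log intensity; the
# reference level then steps by C. C is an illustrative assumption.
import numpy as np

def simulate_events(frames, times, C=0.15, eps=1e-3):
    """frames: list of float grayscale images; times: frame timestamps."""
    ref = np.log(frames[0] + eps)          # per-pixel reference log intensity
    events = []                            # (t, x, y, polarity)
    for frame, t in zip(frames[1:], times[1:]):
        logI = np.log(frame + eps)
        while True:                        # a pixel may fire several times
            d = logI - ref
            ys, xs = np.where(np.abs(d) >= C)
            if len(xs) == 0:
                break
            pol = np.sign(d[ys, xs]).astype(int)
            events.extend(zip([t] * len(xs), xs, ys, pol))
            ref[ys, xs] += C * pol         # step reference by one threshold
    return events
```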


Journal of Systems Architecture | 2010

Fine grain pipeline architecture for high performance phase-based optical flow computation

Matteo Tomasi; Francisco Barranco; Mauricio Vanegas; Javier Díaz; Eduardo Ros

Accurate motion analysis of real-life sequences is a very active research field due to its many potential applications. New technologies now offer very fast and accurate sensors that provide a huge quantity of data per second. Processing these data streams is very expensive (in terms of computing power) for general-purpose processors and is therefore beyond the processing capabilities of most current embedded devices. In this work, we present a specific hardware architecture that implements a robust optical flow algorithm able to process input video sequences at a high frame rate and high resolution, up to 160 fps for VGA images. We describe a superpipelined datapath of more than 85 stages, some of them configured with superscalar units able to process several data items in parallel; the result is an intensive parallel processing engine. The high frame rate produces fine optical flow estimates by constraining the actual motion range between consecutive frames, and the phase-based method confers robustness to image noise and illumination changes. We analyze the architecture at different frame rates and input image noise levels, compare the results with other approaches in the state of the art, and validate our implementation on several hardware platforms.
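
The headline numbers fix the operating point directly, assuming the superpipeline accepts one pixel per clock cycle (the usual goal of such designs, though not stated explicitly here):

```python
# Back-of-the-envelope throughput check, assuming one pixel per clock cycle.
fps, width, height, stages = 160, 640, 480, 85
pixels_per_second = fps * width * height        # 49,152,000 pixels/s
clock_mhz = pixels_per_second / 1e6             # ~49.2 MHz core clock needed
latency_us = stages / pixels_per_second * 1e6   # ~1.73 us pipeline fill time
print(f"{clock_mhz:.1f} MHz, {latency_us:.2f} us latency")
```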


IEEE Transactions on Industrial Informatics | 2014

Real-Time Visual Saliency Architecture for FPGA With Top-Down Attention Modulation

Francisco Barranco; Javier Díaz; Begoña Pino; Eduardo Ros

Biological vision uses attention to reduce the visual bandwidth, simplifying higher-level processing. This paper presents a model that emulates this powerful biological process, together with its real-time hardware architecture on a field-programmable gate array (FPGA) to be integrated in a robotic system. It is based on the combination of bottom-up saliency and top-down, task-dependent modulation. The bottom-up stream combines local energy, orientation, color-opponency, and motion maps. The most novel part of this work is the modulation of saliency by two high-level features: 1) optical flow and 2) disparity. Furthermore, the influence of each feature may be adjusted depending on the application. The proposed system reaches 180 fps at 640 × 480 resolution. Finally, an example shows its modulation potential for driving-assistance systems.
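
In software terms, the bottom-up/top-down combination amounts to a weighted sum of normalized feature maps, where the weights are the task-dependent knob. A minimal sketch, with map names and the weighting scheme as illustrative assumptions rather than the FPGA datapath:

```python
# Weighted combination of normalized feature maps with top-down modulation.
# Map names and weights are illustrative assumptions.
import numpy as np

def normalize(m):
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def saliency(maps, weights):
    """maps/weights: dicts keyed e.g. 'energy', 'orientation', 'color',
    'motion', 'flow', 'disparity'. Top-down attention raises or lowers the
    weight of task-relevant features (e.g. boost 'flow' when driving)."""
    s = sum(weights[k] * normalize(m) for k, m in maps.items())
    return normalize(s)
```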


International Journal of Computer Vision | 2018

Prediction of Manipulation Actions

Cornelia Fermüller; Fang Wang; Yezhou Yang; Konstantinos Zampogiannis; Yi Zhang; Francisco Barranco; Michael Pfeiffer

By looking at a person’s hands, one can often tell what the person is going to do next, how his or her hands are moving, and where they will be, because an actor’s intentions shape his or her movement kinematics during action execution. Similarly, active systems with real-time constraints must not rely simply on passive video-segment classification; they have to continuously update their estimates and predict future actions. In this paper, we study the prediction of dexterous actions. We recorded videos of subjects performing different manipulation actions on the same object, such as “squeezing”, “flipping”, “washing”, “wiping”, and “scratching” with a sponge. In psychophysical experiments, we evaluated human observers’ skill at predicting actions from video sequences of different lengths, depicting the hand movement in the preparation and execution of actions before and after contact with the object. We then developed a method for action prediction based on a recurrent neural network that takes image patches around the hand as input. We also used the same formalism to predict the forces on the fingertips, training on synchronized video and force data streams. Evaluations on two new datasets show that our system closely matches human performance in the recognition task and demonstrate the ability of our algorithms to predict, in real time, what dexterous action is being performed and how.
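
A recurrent per-timestep classifier of the kind described can be sketched in a few lines of PyTorch. The feature dimension, hidden size, and the assumption that hand patches arrive as precomputed feature vectors are all illustrative; this is not the paper's exact network.

```python
# Per-timestep action prediction from a sequence of hand-patch features.
# Generic sketch of the recurrent formulation; sizes are assumptions.
import torch
import torch.nn as nn

class ActionPredictor(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, n_actions=5):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, feats):               # feats: (batch, time, feat_dim)
        h, _ = self.rnn(feats)
        return self.head(h)                 # logits at every timestep, so the
                                            # estimate refines as frames arrive

# Usage: predictions can be read out at any prefix length of the video.
model = ActionPredictor()
logits = model(torch.randn(2, 30, 512))     # shape (2, 30, 5)
```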
