

Publications


Featured research published by Miguel Bordallo López.


Proceedings of SPIE | 2011

Accelerating image recognition on mobile devices using GPGPU

Miguel Bordallo López; Henri Nykänen; Jari Hannuksela; Olli Silvén; Markku Vehviläinen

The future multi-modal user interfaces of battery-powered mobile devices are expected to require computationally costly image analysis techniques. Graphics Processing Units (GPUs) are well suited to parallel processing, and the addition of programmable stages and high-precision arithmetic provides opportunities to implement complete, energy-efficient algorithms. The first mobile graphics accelerators with programmable pipelines are now available, enabling GPGPU implementations of several image processing algorithms. In this context, we consider a face tracking approach that uses efficient gray-scale invariant texture features and boosting. The solution is based on Local Binary Pattern (LBP) features and uses the GPU in the pre-processing and feature extraction phases. We have implemented a series of image processing techniques in the shader language of OpenGL ES 2.0, compiled them for a mobile graphics processing unit, and performed tests on a mobile application processor platform (OMAP3530). In our contribution, we describe the challenges of designing on a mobile platform, present the performance achieved, and provide measurement results for the actual power consumption in comparison to using the CPU (ARM) on the same platform.


Proceedings of SPIE | 2009

Graphics hardware accelerated panorama builder for mobile phones

Miguel Bordallo López; Jari Hannuksela; Olli Silvén; Markku Vehviläinen

Modern mobile communication devices frequently contain built-in cameras that allow users to capture high-resolution still images, but at the same time the imaging applications face both usability and throughput bottlenecks. The difficulties of taking ad hoc pictures of printed paper documents with a multi-megapixel cellular phone camera, a common business use case, illustrate these problems for anyone. The result can be examined only after several seconds and is often blurry, so a new picture is needed even though the viewfinder image had looked good. The process can be frustrating, with long waits and no way for the user to predict the quality beforehand. The problems can be traced to the mismatch between processor speed and camera resolution, and to the interactivity demands of the application. In this context we analyze building mosaic images of printed documents from frames selected from VGA resolution (640x480 pixel) video. High interactivity is achieved by providing real-time feedback on quality while simultaneously guiding the user's actions. The graphics processing unit of the mobile device can be used to speed up the reconstruction computations. To demonstrate the viability of the concept, we present an interactive document scanning application implemented on a Nokia N95 mobile phone.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2016

Comments on the “Kinship Face in the Wild” Data Sets

Miguel Bordallo López; Elhocine Boutellaa; Abdenour Hadid

The Kinship Face in the Wild data sets, recently published in TPAMI, are currently used as a benchmark for the evaluation of kinship verification algorithms. We recommend that these data sets no longer be used in kinship verification research unless there is a compelling reason that takes into account the nature of the images. We note that most of the kinship image pairs are cropped from the same photographs. Exploiting this cropping information, competitive but biased performance can be obtained with a simple scoring approach that considers only the nature of the image pairs rather than any features related to kin information. To illustrate our point, we provide classification results using a simple scoring method based on the similarity of the two images in a kinship pair. Using only the distance between the chrominance averages of the images in the Lab color space, without any training or kin-specific features, we achieve performance comparable to state-of-the-art methods. We provide the source code to support our claims and ensure the repeatability of our experiments.
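The chrominance-average scoring described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's published code: the function names are hypothetical, the threshold is arbitrary, and the inputs are assumed to be images already converted to the Lab color space as H x W x 3 arrays.

```python
import numpy as np

def chroma_distance(lab_a, lab_b):
    """Euclidean distance between the mean chrominance (a*, b* channels)
    of two H x W x 3 images assumed to be in the Lab color space."""
    mean_a = lab_a[..., 1:3].reshape(-1, 2).mean(axis=0)
    mean_b = lab_b[..., 1:3].reshape(-1, 2).mean(axis=0)
    return float(np.linalg.norm(mean_a - mean_b))

def same_photo_score(lab_a, lab_b, threshold=5.0):
    """Label a pair as 'kin' when the crops look like they came from the
    same photograph, i.e. their average chrominance is very close."""
    return chroma_distance(lab_a, lab_b) < threshold
```

The point of the sketch is the bias the comment exposes: the score uses no facial features at all, only the photographic similarity of the crops.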


Journal of Real-Time Image Processing | 2017

Evaluation of real-time LBP computing in multiple architectures

Miguel Bordallo López; Alejandro Nieto; Jani Boutellier; Jari Hannuksela; Olli Silvén

Local binary pattern (LBP) is a texture operator used in several computer vision applications that, in many cases, require real-time operation on multiple computing platforms. The emergence of new video standards has increased typical resolutions and frame rates, which demand considerable computational performance. Since LBP is essentially a per-pixel operator whose cost scales with image size, straightforward implementations are usually insufficient to meet these requirements. To identify the solutions that maximize the performance of real-time LBP extraction, we compare a series of implementations in terms of computational performance and energy efficiency, while analyzing the optimizations that can be made to reach real-time performance on multiple platforms with their different available computing resources. Our contribution complements the extensive survey of LBP implementations on different platforms that can be found in the literature. To provide a more complete evaluation, we have implemented the LBP algorithms on several platforms, such as graphics processing units, mobile processors and a hybrid programming model image coprocessor. We have also extended the evaluation of some of the solutions found in previous work. In addition, we publish the source code of our implementations.
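For readers unfamiliar with the operator being benchmarked, a minimal sketch of the basic 8-neighbour LBP is given below. This is the textbook 3x3 variant, not the paper's optimized implementations; the function name and vectorization strategy are illustrative only.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour local binary pattern for a 2-D grayscale array.

    Each interior pixel is compared with its 8 neighbours; a neighbour
    greater than or equal to the centre contributes one bit to an
    8-bit code, so the output has shape (H - 2, W - 2).
    """
    img = np.asarray(img, dtype=np.int32)
    centre = img[1:-1, 1:-1]
    # Neighbour offsets in clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        code |= (neighbour >= centre).astype(np.int32) << bit
    return code
```

Because every output pixel is independent, the operator parallelizes naturally, which is why GPU and coprocessor implementations such as those evaluated in the paper can be effective.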


Multimedia Tools and Applications | 2014

Interactive multi-frame reconstruction for mobile devices

Miguel Bordallo López; Jari Hannuksela; Olli Silvén; Markku Vehviläinen

The small size of handheld devices, their video capabilities and their multiple cameras are under-exploited assets. Properly combined, these features can be used to create novel applications that are ideal for pocket-sized devices but may not be useful on laptop computers, such as interactively capturing and analyzing images on the fly. In this paper we consider building mosaic images of printed documents and natural scenes from low-resolution video frames. High interactivity is provided by giving real-time feedback on the video quality while simultaneously guiding the user's actions. In our contribution, we analyze and compare means of reaching interactivity and performance with sensor signal processing and GPU assistance. The viability of the concept is demonstrated on a mobile phone. The achieved usability benefits suggest that combining interactive imaging and energy-efficient high-performance computing could enable new mobile applications and user interactions.


Proceedings of SPIE | 2011

Multimodal sensing-based camera applications

Miguel Bordallo López; Jari Hannuksela; Olli Silvén; Markku Vehviläinen

The increased sensing and computing capabilities of mobile devices can provide an enhanced mobile user experience. Integrating the data from different sensors offers a way to improve performance in camera-based applications. A key advantage of using cameras as an input modality is that they enable recognizing the context; therefore, computer vision has traditionally been utilized in user interfaces to observe and automatically detect user actions. Imaging applications can also make use of various sensors to improve the interactivity and robustness of the system. In this context, two applications fusing sensor data with the results obtained from video analysis have been implemented on a Nokia Nseries mobile device. The first is a real-time user interface for browsing large images, which enables the display to be controlled by the motion of the user's hand, using the built-in sensors as complementary information. The second is a real-time panorama builder that uses the device's accelerometers to improve the overall quality, while also providing instructions during capture. The experiments show that fusing sensor data improves camera-based applications, especially when conditions are not optimal for approaches using camera data alone.


International Conference on Image Processing | 2014

Face and texture analysis using local descriptors: A comparative analysis

Abdenour Hadid; Juha Ylioinas; Miguel Bordallo López

In contrast to global image descriptors, which compute features directly from the entire image, local descriptors representing the features of small local image patches have proved to be more effective in real-world conditions. This paper considers three recent yet popular local descriptors, namely Local Binary Patterns (LBP), Local Phase Quantization (LPQ) and Binarized Statistical Image Features (BSIF), and provides an extensive comparative analysis on two different research problems (gender and texture classification) using benchmark datasets. The three descriptors are analyzed in terms of both classification accuracy and computational cost. Furthermore, experiments on combining these descriptors are provided, offering useful insight into their complementarity.


International Workshop on Information Forensics and Security | 2016

On the usefulness of color for kinship verification from face images

Xiaoting Wu; Elhocine Boutellaa; Miguel Bordallo López; Xiaoyi Feng; Abdenour Hadid

Automatic kinship verification from faces aims to determine whether two persons have a biological kin relation by comparing their facial attributes. This is a challenging research problem that has recently received considerable attention from the research community. However, most of the proposed methods have focused on analyzing only the luminance (i.e. gray-scale) of the face images, discarding the chrominance (i.e. color) information, which can be a useful additional cue for verifying kin relationships. This paper investigates, for the first time, the usefulness of color information in the verification of kinship relationships from facial images. For this purpose, we extract joint color-texture features that encode both the luminance and the chrominance information in the color images. The kinship verification performance using joint color-texture analysis is then compared against counterpart approaches using only gray-scale information. Extensive experiments using different color spaces and texture features are conducted on two benchmark databases. Our results indicate that classifying color images consistently shows superior performance in three different color spaces.


International Conference on Biometrics | 2016

Kinship verification from videos using spatio-temporal texture features and deep learning

Elhocine Boutellaa; Miguel Bordallo López; Samy Ait-Aoudia; Xiaoyi Feng; Abdenour Hadid

Automatic kinship verification using facial images is a relatively new and challenging research problem in computer vision. It consists of automatically predicting whether two persons have a biological kin relation by examining their facial attributes. While most existing works extract shallow handcrafted features from still face images, we approach this problem from a spatio-temporal point of view and explore the use of both shallow texture features and deep features for characterizing faces. Promising results, especially for the deep features, are obtained on the benchmark UvA-NEMO Smile database. Our extensive experiments also show the superiority of using videos over still images, pointing out the important role of facial dynamics in kinship verification. Furthermore, fusing the two types of features (i.e. shallow spatio-temporal texture features and deep features) shows significant performance improvements compared to state-of-the-art methods.


Scandinavian Conference on Image Analysis | 2017

Automatic Segmentation of Bone Tissue from Computed Tomography Using a Volumetric Local Binary Patterns Based Method

Jukka Kaipala; Miguel Bordallo López; Simo Saarakkala; Jérôme Thevenot

Segmentation of scanned tissue volumes in three-dimensional (3D) images often involves an at least partially manual process, as there is no standardized automatic method. A well-performing automatic segmentation would be preferable, not only because it would improve segmentation speed, but also because it would be user-independent and bring more objectivity to the task. Here we extend a 3D local binary patterns (LBP) based trabecular bone segmentation method with adaptive local thresholding and additional segmentation parameters to make it more robust while still performing adequately compared to traditional user-assisted segmentation. We estimate the parameters of the new segmentation method (AMLM) in our experimental setting, and have two micro-computed tomography (µCT) scanned bovine trabecular bone tissue volumes segmented both by the AMLM and by two experienced users. Comparison of the results shows the superior performance of the AMLM.
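Adaptive local thresholding, one ingredient the abstract adds to the LBP-based pipeline, can be illustrated with a naive 2-D sketch. This is not the AMLM method itself: the function name, window size and offset parameter are illustrative, and a real implementation would use an integral image rather than an explicit loop.

```python
import numpy as np

def adaptive_threshold(img, block=3, offset=0.0):
    """Naive adaptive local thresholding for a 2-D array: a pixel is
    foreground when it exceeds the mean of its (block x block)
    neighbourhood minus an offset. Borders are handled by edge padding."""
    img = np.asarray(img, dtype=np.float64)
    pad = block // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=bool)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            window = padded[y:y + block, x:x + block]
            out[y, x] = img[y, x] > window.mean() - offset
    return out
```

Because the threshold follows the local intensity level, bright trabecular structure can be separated from background even when illumination or density varies across the volume, which a single global threshold cannot handle.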

Collaboration


Dive into Miguel Bordallo López's collaborations.

Top Co-Authors


Xiaoyi Feng

Northwestern Polytechnical University


Alejandro Nieto

University of Santiago de Compostela


Samy Ait-Aoudia

École Normale Supérieure
