Raquel Lazcano
Technical University of Madrid
Publications
Featured research published by Raquel Lazcano.
Journal of Systems Architecture | 2017
Raquel Lazcano; Daniel Madroñal; Rubén Salvador; Karol Desnos; Maxime Pelcat; Raúl Guerra; Himar Fabelo; Samuel Ortega; Sebastián López; Gustavo Marrero Callicó; Eduardo Juárez; César Sanz
This paper presents a study of the parallelism of a Principal Component Analysis (PCA) algorithm and its adaptation to a manycore MPPA (Massively Parallel Processor Array) architecture, which gathers 256 cores distributed among 16 clusters. The study focuses on porting hyperspectral image processing onto manycore platforms and optimizing it to fulfill real-time constraints, which are fixed by the image capture rate of the hyperspectral sensor. Real-time processing is a challenging objective for hyperspectral imaging, as hyperspectral images consist of extremely large volumes of data; this problem is often addressed by reducing the image size before the processing itself starts. To tackle the challenge, this paper analyzes the intrinsic parallelism of the different stages of the PCA algorithm with the objective of exploiting the parallelization possibilities offered by an MPPA manycore architecture. The impact of increasing the level of parallelism on internal communication is also analyzed. Experimenting with medical images obtained from two different surgical use cases, an average speedup of 20 is achieved. Internal communications are shown to rapidly become the bottleneck that limits the speedup achievable through PCA parallelization. As a result of this study, PCA processing time is reduced to less than 6 s, a time compatible with the targeted brain surgery application, which requires one frame per minute.
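As a rough illustration of the computation being parallelized (platform details aside), the PCA reduction of a hyperspectral cube can be sketched in NumPy; the function name and the synthetic cube below are illustrative, not taken from the paper:

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Project a hyperspectral cube (rows, cols, bands) onto its
    first n_components principal components."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)    # pixels x bands
    X -= X.mean(axis=0)                               # band-wise centering
    cov = (X.T @ X) / (X.shape[0] - 1)                # bands x bands covariance
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]  # keep top components
    return (X @ eigvecs[:, order]).reshape(rows, cols, n_components)

# Tiny synthetic cube: 4x4 pixels, 8 spectral bands
cube = np.random.rand(4, 4, 8)
reduced = pca_reduce(cube, 2)
print(reduced.shape)  # (4, 4, 2)
```

The per-pixel projection step is what maps naturally onto independent cores; the covariance accumulation is where the inter-cluster communication analyzed in the paper comes in.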
Sensors | 2018
Himar Fabelo; Samuel Ortega; Raquel Lazcano; Daniel Madroñal; Gustavo Marrero Callicó; Eduardo Juárez; Rubén Salvador; Diederik Bulters; Harry Bulstrode; Adam Szolna; Juan F. Piñeiro; Coralia Sosa; Aruma J. O’Shanahan; Sara Bisshopp; María Jose Hernández; Jesús Morera; Daniele Ravi; Bangalore Ravi Kiran; A. Vega; Abelardo Báez-Quevedo; Guang-Zhong Yang; Bogdan Stanciulescu; Roberto Sarmiento
Hyperspectral imaging (HSI) allows for the acquisition of a large number of spectral bands throughout the electromagnetic spectrum (within and beyond the visual range) from the surface of the scene captured by the sensor. Using this information and a set of complex classification algorithms, it is possible to determine which material or substance is located in each pixel. The work presented in this paper aims to exploit the characteristics of HSI to develop a demonstrator capable of delineating tumor tissue from brain tissue during neurosurgical operations. Improved delineation of tumor boundaries is expected to improve the results of surgery. The developed demonstrator is composed of two hyperspectral cameras covering a spectral range of 400–1700 nm. Furthermore, a hardware accelerator connected to a control unit is used to speed up the hyperspectral brain cancer detection algorithm so that processing can be completed during surgery. A labeled dataset comprising more than 300,000 spectral signatures is used to train the supervised stage of the classification algorithm. In this preliminary study, thematic maps obtained from a validation database of seven hyperspectral images of in vivo brain tissue, captured and processed during neurosurgical operations, demonstrate that the system is able to discriminate between normal and tumor tissue in the brain. The results can be provided during the surgical procedure (~1 min), making it a practical system for neurosurgeons to use in the near future to improve excision and potentially improve patient outcomes.
PLOS ONE | 2018
Himar Fabelo; Samuel Ortega; Daniele Ravi; B. Ravi Kiran; Coralia Sosa; Diederik Bulters; Gustavo Marrero Callicó; Harry Bulstrode; Adam Szolna; Juan F. Piñeiro; Silvester Kabwama; Daniel Madroñal; Raquel Lazcano; Aruma J. O’Shanahan; Sara Bisshopp; Maria del C. Valdés Hernández; Abelardo Báez; Guang-Zhong Yang; Bogdan Stanciulescu; Rubén Salvador; Eduardo Juárez; Roberto Sarmiento
Surgery for brain cancer is a major challenge in neurosurgery. The diffuse infiltration of these tumors into the surrounding normal brain makes their accurate identification by the naked eye difficult. Since surgery is the common treatment for brain cancer, an accurate radical resection of the tumor leads to improved survival rates for patients. However, identifying the tumor boundaries during surgery is challenging. Hyperspectral imaging is a non-contact, non-ionizing and non-invasive technique suitable for medical diagnosis. This study presents a novel classification method that takes into account the spatial and spectral characteristics of hyperspectral images to help neurosurgeons accurately determine tumor boundaries during the resection, avoiding excessive excision of normal tissue or unintentionally leaving residual tumor. The proposed algorithm is a hybrid framework that combines supervised and unsupervised machine learning methods. First, a supervised pixel-wise classification using a Support Vector Machine classifier is performed. The generated classification map is spatially homogenized using a one-band representation of the HS cube, obtained with the Fixed Reference t-Stochastic Neighbors Embedding dimensionality reduction algorithm, followed by K-Nearest Neighbors filtering. The information generated by the supervised stage is then combined with a segmentation map obtained via unsupervised clustering with a Hierarchical K-Means algorithm. The fusion is performed using a majority voting approach that associates each cluster with a certain class. To evaluate the proposed approach, five in vivo hyperspectral images of the surface of brains affected by glioblastoma, from five different patients, have been used. The final classification maps have been analyzed and validated by specialists. These preliminary results are promising, yielding an accurate delineation of the tumor area.
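The supervised/unsupervised fusion step can be sketched with scikit-learn stand-ins. This is a simplified sketch: the FR-t-SNE and KNN spatial homogenization stages are omitted, plain K-Means replaces the paper's Hierarchical K-Means, and all names and data are illustrative:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans

def fuse_by_majority_vote(X_train, y_train, X_pixels, n_clusters=4):
    """Fuse supervised SVM labels with unsupervised clusters:
    each cluster adopts its most frequent SVM label."""
    svm_labels = SVC(kernel="linear").fit(X_train, y_train).predict(X_pixels)
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=0).fit_predict(X_pixels)
    fused = np.empty_like(svm_labels)
    for c in range(n_clusters):
        mask = clusters == c
        values, counts = np.unique(svm_labels[mask], return_counts=True)
        fused[mask] = values[np.argmax(counts)]  # majority vote per cluster
    return fused

# Two synthetic "tissue" classes in a 10-band spectral space
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(3, 1, (50, 10))])
y_train = np.array([0] * 50 + [1] * 50)
X_pixels = np.vstack([rng.normal(0, 1, (100, 10)), rng.normal(3, 1, (100, 10))])
fused = fuse_by_majority_vote(X_train, y_train, X_pixels)
print(fused.shape)  # (200,)
```

The voting step is what makes the pixel-wise SVM output spatially coherent: isolated misclassified pixels inside a homogeneous cluster are overruled by the cluster's dominant label.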
Conference on Design and Architectures for Signal and Image Processing | 2016
Rubén Salvador; Himar Fabelo; Raquel Lazcano; Samuel Ortega; Daniel Madroñal; Gustavo Marrero Callicó; Eduardo Juárez; César Sanz
In this paper, a demonstrator of three different elements of the EU FET HELICoiD project is introduced. The goal of this demonstration is to show how the combination of hyperspectral imaging and machine learning can be a potential solution for precise real-time detection of tumor tissue during surgical operations. The HELICoiD setup consists of two hyperspectral cameras, a scanning unit, an illumination system, a data processing system and an EMB01 accelerator platform, which hosts an MPPA-256 manycore chip. All the components are mounted in compliance with the restrictions of surgical environments, as shown in the accompanying video recorded in the operating room. An in-vivo human brain hyperspectral image database, obtained at the University Hospital Doctor Negrin in Las Palmas de Gran Canaria, has been employed as input to different supervised classification algorithms (SVM, RF, NN) and to a spatial-spectral filtering stage (SVM-KNN). The resulting classification maps are shown in this demo. In addition, the implementation of the SVM-KNN classification algorithm on the MPPA EMB01 platform is demonstrated live.
Conference on Design and Architectures for Signal and Image Processing | 2017
Raquel Lazcano; Daniel Madroñal; Himar Fabelo; Samuel Ortega; Rubén Salvador; Gustavo Marrero Callicó; Eduardo Juárez; César Sanz
This paper presents a study of the parallelization possibilities of a Non-Linear Iterative Partial Least Squares (NIPALS) algorithm and its adaptation to a Massively Parallel Processor Array manycore architecture, which assembles 256 cores distributed over 16 clusters. The aim of this work is twofold: first, to test the behavior of iterative, complex algorithms on a manycore architecture; and, second, to achieve real-time processing of hyperspectral images, where the real-time constraint is fixed by the image capture rate of the hyperspectral sensor. Real-time is a challenging objective, as hyperspectral images are composed of extensive volumes of spectral information; this issue is usually addressed by reducing the image size prior to the processing phase itself. Consequently, this paper proposes an analysis of the intrinsic parallelism of the algorithm and its subsequent implementation on a manycore architecture. As a result, an average speedup of 13 has been achieved compared to the sequential version. Additionally, this implementation has been compared with other state-of-the-art implementations, outperforming them in terms of performance.
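For reference, the sequential core of NIPALS, the algorithm whose parallelization the paper studies, can be sketched as follows. This is a textbook version in NumPy, not the authors' MPPA implementation:

```python
import numpy as np

def nipals(X, n_components, tol=1e-8, max_iter=500):
    """NIPALS: extract principal components one at a time by
    alternating score/loading updates, then deflating X."""
    X = X - X.mean(axis=0)            # center columns
    scores, loadings = [], []
    for _ in range(n_components):
        t = X[:, 0].copy()            # initial score vector
        for _ in range(max_iter):
            p = X.T @ t / (t @ t)     # loading estimate
            p /= np.linalg.norm(p)
            t_new = X @ p             # updated score estimate
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        X = X - np.outer(t, p)        # deflate: remove this component
        scores.append(t)
        loadings.append(p)
    return np.array(scores).T, np.array(loadings).T

# Tiny example: 50 samples with one dominant spectral direction
rng = np.random.default_rng(0)
direction = np.linspace(1.0, 2.0, 6)
X = np.outer(rng.standard_normal(50), direction)
X += 0.05 * rng.standard_normal((50, 6))
T, P = nipals(X, 2)
print(T.shape, P.shape)  # (50, 2) (6, 2)
```

The inner loop is a power-iteration-like fixed point, which is why the paper can extract only the first few components cheaply: each component costs a handful of matrix-vector products rather than a full eigendecomposition.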
Conference on Design and Architectures for Signal and Image Processing | 2017
Daniel Madroñal; Raquel Lazcano; Himar Fabelo; Samuel Ortega; Rubén Salvador; Gustavo Marrero Callicó; Eduardo Juárez; César Sanz
In this paper, a Massively Parallel Processor Array platform is characterized in terms of energy consumption using a Support Vector Machine for hyperspectral image classification. This platform gathers 16 clusters of 16 cores each, i.e., 256 processors working in parallel. The objective of the work is to relate the power dissipation and energy consumed by the platform to the different resources of the architecture. Experimenting with a hyperspectral SVM classifier, the study has been conducted using three strategies: i) modifying the number of processing elements, i.e., clusters and cores; ii) increasing the system frequency; and iii) varying the number of active communication links, i.e., I/Os and DMAs. As a result, a relationship between energy consumption and active platform resources has been exposed using two different parallelization strategies. Finally, the implementation that fully exploits the parallelization possibilities at 500 MHz has also been proven to be the most efficient one, as it reduces energy consumption by 98% compared to the sequential version running at 400 MHz.
Computing Frontiers | 2017
Rubén Salvador; Samuel Ortega; Daniel Madroñal; Himar Fabelo; Raquel Lazcano; Gustavo Marrero; Eduardo Juárez; Roberto Sarmiento; César Sanz
The HELICoiD project is a European FP7 FET Open funded project. It is an interdisciplinary effort at the edge of the biomedical domain, bringing together neurosurgeons, computer scientists and electronic engineers. The main target of the project was to provide a working demonstrator of an intraoperative image-guided surgery system for real-time brain cancer detection, in order to assist neurosurgeons during tumour resection procedures. One of the main problems associated with brain tumours is their infiltrative nature, which makes complete tumour resection a highly difficult task. By combining Hyperspectral Imaging and Machine Learning techniques, the project aimed to demonstrate that a precise determination of tumour boundaries is possible, thereby helping neurosurgeons to minimize the amount of healthy tissue removed. The project partners included, besides different universities and companies, two hospitals where the demonstrator was tested during surgical procedures. This paper introduces the difficulties around brain tumour resection, states the main objectives of the project and presents the materials, methodologies and platforms used to propose a solution. A brief summary of the main results obtained is also included.
Journal of Systems Architecture | 2017
Daniel Madroñal; Raquel Lazcano; Rubén Salvador; Himar Fabelo; Samuel Ortega; Gustavo Marrero Callicó; Eduardo Juárez; César Sanz
This paper presents a study of the design space of a Support Vector Machine (SVM) classifier with a linear kernel running on a manycore MPPA (Massively Parallel Processor Array) platform. This architecture gathers 256 cores distributed in 16 clusters working in parallel. The study aims at implementing a real-time hyperspectral SVM classifier, where real-time is defined as the time required to capture a hyperspectral image. To do so, two aspects of the SVM classifier have been analyzed: the classification algorithm and the system parallelization. Concerning the classification algorithm, the classification model has first been optimized to fit the MPPA structure and, second, a probability estimation stage has been included to refine the classification results. Concerning the system parallelization, two levels have been addressed: first, the parallelism of the classification has been exploited, taking advantage of the pixel-wise classification methodology supported by the SVM algorithm; second, a double-buffer communication procedure has been implemented to overlap the image transmission and cluster classification stages. Experimenting with medical images, an average speedup of 9 has been obtained using a single-cluster, double-buffer implementation with 16 cores working in parallel. As a result, a system whose processing time grows linearly with the number of pixels composing the scene has been implemented. Specifically, only 3 µs are required to process each pixel within the captured scene, independently of the spatial resolution of the image.
ieee international conference on high performance computing data and analytics | 2016
Raquel Lazcano; I. Sidrach-Cardona; Daniel Madroñal; K. Desnos; M. Pelcat; Eduardo Juárez; César Sanz
Hyperspectral imaging (HI) collects information from across the electromagnetic spectrum, covering a wide range of wavelengths. The tremendous development of this technology within the field of remote sensing has led to new research fields, such as cancer automatic detection or precision agriculture, but has also increased the performance requirements of the applications. For instance, strong time constraints need to be respected, since many applications imply real-time responses. Achieving real-time is a challenge, as hyperspectral sensors generate high volumes of data to process. Thus, so as to achieve this requisite, first the initial image data needs to be reduced by discarding redundancies and keeping only useful information. Then, the intrinsic parallelism in a system specification must be explicitly highlighted. In this paper, the PCA (Principal Component Analysis) algorithm is implemented using the RVC-CAL dataflow language, which specifies a system as a set of blocks or actors and allows its parallelization by scheduling the blocks over different processing units. Two implementations of PCA for hyperspectral images have been compared when aiming at obtaining the first few principal components: first, the algorithm has been implemented using the Jacobi approach for obtaining the eigenvectors; thereafter, the NIPALS-PCA algorithm, which approximates the principal components iteratively, has also been studied. Both implementations have been compared in terms of accuracy and computation time; then, the parallelization of both models has also been analyzed. These comparisons show promising results in terms of computation time and parallelization: the performance of the NIPALS-PCA algorithm is clearly better when only the first principal component is achieved, while the partitioning of the algorithm execution over several cores shows an important speedup for the PCA-Jacobi. 
Thus, experimental results show the potential of RVC–CAL to automatically generate implementations which process in real-time the large volumes of information of hyperspectral sensors, as it provides advanced semantics for exploiting system parallelization.
ieee international conference on high performance computing data and analytics | 2016
Daniel Madroñal; Himar Fabelo; Raquel Lazcano; Gustavo Marrero Callicó; Eduardo Juárez; César Sanz
Hyperspectral Imaging (HI) collects high resolution spectral information consisting of hundreds of bands across the electromagnetic spectrum –from the ultraviolet to the infrared range–. Thanks to this huge amount of information, an identification of the different elements that compound the hyperspectral image is feasible. Initially, HI was developed for remote sensing applications and, nowadays, its use has been spread to research fields such as security and medicine. In all of them, new applications that demand the specific requirement of real-time processing have appear. In order to fulfill this requirement, the intrinsic parallelism of the algorithms needs to be explicitly exploited. In this paper, a Support Vector Machine (SVM) classifier with a linear kernel has been implemented using a dataflow language called RVC-CAL. Specifically, RVC-CAL allows the scheduling of functional actors onto the target platform cores. Once the parallelism of the classifier has been extracted, a comparison of the SVM classifier implementation using LibSVM –a specific library for SVM applications– and RVC-CAL has been performed. The speedup results obtained for the image classifier depends on the number of blocks in which the image is divided; concretely, when 3 image blocks are processed in parallel, an average speed up above 2.50, with regard to the RVC-CAL sequential version, is achieved.