Publication


Featured research published by Imanol Luengo.


Journal of Structural Biology | 2017

SuRVoS: Super-Region Volume Segmentation workbench

Imanol Luengo; Michele C. Darrow; Matthew C. Spink; Ying Sun; Wei Dai; Cynthia Y. He; Wah Chiu; Tony P. Pridmore; Alun Ashton; Elizabeth Duke; Mark Basham; Andrew P. French

Segmentation of biological volumes is a crucial step needed to fully analyse their scientific content. Not having access to convenient tools with which to segment or annotate the data means many biological volumes remain under-utilised. Automatic segmentation of biological volumes is still a very challenging research field, and current methods usually require a large amount of manually-produced training data to deliver a high-quality segmentation. However, the complex appearance of cellular features and the high variance from one sample to another, along with the time-consuming work of manually labelling complete volumes, makes the required training data very scarce or non-existent. Thus, fully automatic approaches are often infeasible for many practical applications. With the aim of unifying the segmentation power of automatic approaches with the user's expertise and ability to manually annotate biological samples, we present a new workbench named SuRVoS (Super-Region Volume Segmentation). Within this software, a volume to be segmented is first partitioned into hierarchical segmentation layers (named Super-Regions) and is then interactively segmented with the user's knowledge input in the form of training annotations. SuRVoS first learns from and then extends user inputs to the rest of the volume, while using Super-Regions for quicker and easier segmentation than when using a voxel grid. These benefits are especially noticeable on noisy, low-dose, biological datasets.
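The core idea described above, learning from sparse user annotations over super-regions rather than individual voxels, can be sketched in a few lines. Everything below (the per-region feature summary, the nearest-prototype classifier, and all names) is an illustrative assumption, not the actual SuRVoS implementation:

```python
# Toy sketch: extend a few user-labeled super-regions to a whole volume.
# Each super-region is summarized by a single feature (mean intensity);
# unlabeled regions are assigned to the nearest class prototype.

def segment_with_super_regions(region_features, user_labels):
    """region_features: {region_id: feature value}
    user_labels: {region_id: class label} for a few annotated regions.
    Returns a complete {region_id: class label} map."""
    # Learn one prototype (mean feature) per class from the annotations.
    sums, counts = {}, {}
    for rid, label in user_labels.items():
        sums[label] = sums.get(label, 0.0) + region_features[rid]
        counts[label] = counts.get(label, 0) + 1
    prototypes = {c: sums[c] / counts[c] for c in sums}

    # Extend the labels: nearest prototype wins for unlabeled regions.
    result = dict(user_labels)
    for rid, feat in region_features.items():
        if rid not in result:
            result[rid] = min(prototypes, key=lambda c: abs(prototypes[c] - feat))
    return result

# Six super-regions; the user annotates only two of them.
features = {0: 0.1, 1: 0.15, 2: 0.9, 3: 0.85, 4: 0.2, 5: 0.8}
labels = segment_with_super_regions(features, {0: "background", 2: "organelle"})
print(labels)  # regions 1 and 4 -> background; 3 and 5 -> organelle
```

Working over a handful of super-regions instead of millions of voxels is what makes this kind of interactive propagation fast enough for a live annotation tool.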


Journal of Visualized Experiments | 2017

Volume segmentation and analysis of biological materials using SuRVoS (Super-region Volume Segmentation) workbench

Michele C. Darrow; Imanol Luengo; Mark Basham; Matthew C. Spink; Sarah Irvine; Andrew P. French; Alun Ashton; Elizabeth Duke

Segmentation is the process of isolating specific regions or objects within an imaged volume, so that further study can be undertaken on these areas of interest. When considering the analysis of complex biological systems, the segmentation of three-dimensional image data is a time-consuming and labor-intensive step. With the increased availability of many imaging modalities and with automated data collection schemes, this poses an increased challenge for the modern experimental biologist to move from data to knowledge. This publication describes the use of SuRVoS Workbench, a program designed to address these issues by providing methods to semi-automatically segment complex biological volumetric data. Three datasets of differing magnification and imaging modalities are presented here, each highlighting different strategies of segmenting with SuRVoS. Phase contrast X-ray tomography (microCT) of the fruiting body of a plant is used to demonstrate segmentation using model training, cryo electron tomography (cryoET) of human platelets is used to demonstrate segmentation using super- and megavoxels, and cryo soft X-ray tomography (cryoSXT) of a mammalian cell line is used to demonstrate the label splitting tools. Strategies and parameters for each datatype are also presented. By blending a selection of semi-automatic processes into a single interactive tool, SuRVoS provides several benefits. Overall time to segment volumetric data is reduced by a factor of five when compared to manual segmentation, a mainstay in many image processing fields. This is a significant saving when full manual segmentation can take weeks of effort. Additionally, subjectivity is addressed through the use of computationally identified boundaries, and by splitting complex collections of objects by their calculated properties rather than on a case-by-case basis.


British Machine Vision Conference | 2016

SMURFS: superpixels from multi-scale refinement of super-regions

Imanol Luengo; Mark Basham; Andrew P. French

Recent applications in computer vision have come to rely on superpixel segmentation as a pre-processing step for higher level vision tasks, such as object recognition, scene labelling or image segmentation. Here, we present a new algorithm, Superpixels from MUlti-scale ReFinement of Super-regions (SMURFS), which not only obtains state-of-the-art superpixels, but can also be applied hierarchically to form what we call n-th order super-regions. In essence, starting from a uniformly distributed set of super-regions, the algorithm iteratively alternates graph-based split and merge optimization schemes which yield superpixels that better represent the image. The split step is performed over the pixel grid to separate large super-regions into different smaller superpixels. The merging process, conversely, is performed over the superpixel graph to create 2nd-order super-regions (super-segments). Iterative refinement over two scales of regions allows the algorithm to achieve better over-segmentation results than current state-of-the-art methods, as experimental results show on the public Berkeley Segmentation Dataset (BSD500).
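The alternating split/merge refinement can be illustrated on a toy 1-D signal. The thresholds, the helper names, and the simple bisection split below are assumptions chosen for the example; the actual SMURFS optimization is graph-based and operates on 2-D images:

```python
# Illustrative split/merge refinement on a 1-D "image": split regions
# whose internal variance is high, then merge adjacent regions whose
# means are similar. Regions are half-open index intervals (lo, hi).

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def split_step(signal, regions, var_thresh=0.01):
    """Bisect any region whose internal variance exceeds var_thresh."""
    out = []
    for lo, hi in regions:
        if hi - lo > 1 and variance(signal[lo:hi]) > var_thresh:
            mid = (lo + hi) // 2
            out += [(lo, mid), (mid, hi)]
        else:
            out.append((lo, hi))
    return out

def merge_step(signal, regions, mean_thresh=0.2):
    """Merge neighbouring regions whose means are close."""
    merged = [regions[0]]
    for lo, hi in regions[1:]:
        plo, phi = merged[-1]
        if abs(mean(signal[plo:phi]) - mean(signal[lo:hi])) < mean_thresh:
            merged[-1] = (plo, hi)
        else:
            merged.append((lo, hi))
    return merged

signal = [0.0, 0.0, 0.1, 0.9, 1.0, 1.0, 0.9, 1.0]
regions = [(0, 4), (4, 8)]  # uniform initial partition
for _ in range(2):          # alternate split and merge
    regions = merge_step(signal, split_step(signal, regions))
print(regions)  # converges to the true step edge: [(0, 3), (3, 8)]
```

The uniform initial partition misses the step at index 3; splitting isolates the high-variance interval and merging reattaches the pieces to the side they actually belong to, which is the essence of refining regions at two scales.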


Proceedings of SPIE | 2016

Fast global interactive volume segmentation with regional supervoxel descriptors

Imanol Luengo; Mark Basham; Andrew P. French

In this paper we propose a novel approach towards fast multi-class volume segmentation that exploits supervoxels in order to reduce complexity, time and memory requirements. Current methods for biomedical image segmentation typically require either complex mathematical models with slow convergence, or expensive-to-calculate image features, which makes them infeasible for large volumes with many objects (tens to hundreds) of different classes, as is typical in modern medical and biological datasets. Recently, graphical models such as Markov Random Fields (MRF) or Conditional Random Fields (CRF) have had a huge impact in different computer vision areas (e.g. image parsing, object detection, object recognition) as they provide global regularization for multi-class problems over an energy minimization framework. These models have yet to make an impact in biomedical imaging due to the complexity of training and the slow inference in 3D images caused by the very large number of voxels. Here, we define an interactive segmentation approach over a supervoxel space by first defining novel, robust and fast regional descriptors for supervoxels. Then, a hierarchical segmentation approach is adopted by training Contextual Extremely Random Forests in a user-defined label hierarchy, where the classification output of the previous layer is used as additional features to train a new classifier that refines more detailed label information. This hierarchical model yields final class likelihoods for supervoxels, which are then refined by an MRF model for 3D segmentation. Results demonstrate the effectiveness of the approach on a challenging cryo-soft X-ray tomography dataset by segmenting cell areas with only a few user scribbles as the input for our algorithm. Further results demonstrate the effectiveness of our method in fully extracting different organelles from the cell volume with only a few more seconds of user interaction.
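The final MRF refinement step, smoothing per-supervoxel class scores against their neighbours, can be sketched with iterated conditional modes (ICM) and a Potts pairwise term. The unary costs, the graph, the weight `lam`, and the use of ICM itself are illustrative assumptions; the paper's actual model and inference are more elaborate:

```python
# Minimal MRF-style smoothing over a supervoxel adjacency graph using
# iterated conditional modes: each node greedily picks the label that
# minimizes (unary cost + lam * number of disagreeing neighbours).

def icm(unary, edges, lam=0.5, iters=10):
    """unary[node][label] = cost of assigning label to node.
    edges: list of (u, v) neighbour pairs. Returns {node: label}."""
    labels = {n: min(costs, key=costs.get) for n, costs in unary.items()}
    neighbours = {n: [] for n in unary}
    for u, v in edges:
        neighbours[u].append(v)
        neighbours[v].append(u)
    for _ in range(iters):
        changed = False
        for n, costs in unary.items():
            def energy(lbl):
                disagree = sum(1 for m in neighbours[n] if labels[m] != lbl)
                return costs[lbl] + lam * disagree
            best = min(costs, key=energy)
            if best != labels[n]:
                labels[n] = best
                changed = True
        if not changed:  # local minimum reached
            break
    return labels

# Three supervoxels in a chain; the middle one has a noisy classifier
# vote for "organelle", but both neighbours clearly prefer "cytoplasm".
unary = {
    0: {"cytoplasm": 0.1, "organelle": 0.9},
    1: {"cytoplasm": 0.6, "organelle": 0.4},  # noisy vote
    2: {"cytoplasm": 0.2, "organelle": 0.8},
}
mrf_labels = icm(unary, edges=[(0, 1), (1, 2)])
print(mrf_labels)  # smoothing flips node 1 to "cytoplasm"
```

Because the graph has one node per supervoxel rather than one per voxel, even simple inference like this stays cheap on large 3D volumes, which is the complexity argument the abstract makes.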


Medical Image Computing and Computer-Assisted Intervention | 2018

DeepPhase: Surgical Phase Recognition in CATARACTS Videos

Odysseas Zisimopoulos; Evangello Flouty; Imanol Luengo; Petros Giataganas; Jean Nehme; Andre Chow; Danail Stoyanov

Automated surgical workflow analysis and understanding can assist surgeons to standardize procedures and enhance post-surgical assessment and indexing, as well as interventional monitoring. Computer-assisted interventional (CAI) systems based on video can perform workflow estimation through surgical instrument recognition while linking it to an ontology of procedural phases. In this work, we adopt a deep learning paradigm to detect surgical instruments in cataract surgery videos; these detections in turn feed a surgical phase inference recurrent network that encodes the temporal aspects of phase steps within the phase classification. Our models achieve results comparable to the state of the art for surgical tool detection and phase recognition, with accuracies of 99% and 78%, respectively.


Microscopy and Microanalysis | 2017

Cryo Soft X-ray Tomography and Other Techniques at Diamond Light Source

Michele C. Darrow; Maria Harkiolaki; Matthew C. Spink; Imanol Luengo; M. Basham; Elizabeth Duke

Diamond Light Source is the UK's national synchrotron facility, providing an intense source of X-ray, ultraviolet and infrared radiation to both the academic and industrial scientific communities. Recently, a microscope dedicated to cryo soft X-ray tomography (cryoSXT) of biological samples has been commissioned. This microscope will allow data to be collected from cells and other similar samples vitrified on standard 3 mm diameter electron microscopy grids. The X-ray microscope was purchased as part of a collaboration between Xradia (now Zeiss) and Diamond Light Source and delivered, along with an off-line source of X-rays (a nitrogen plasma source), in 2012. Since then the system has undergone extensive commissioning, including upgrading the light microscope used for sample selection both to increase the magnification and to incorporate the ability to measure fluorescence from suitably labelled samples. The microscope is now located on the beamline and uses X-rays from one of the storage ring bending magnets for illumination rather than the plasma source. Currently, the beamline operates at a fixed energy (500 eV) within the so-called "water window", so image formation is via the naturally occurring absorption differential between carbon and oxygen [1, 2]. This allows for the imaging of whole, fully hydrated cells, without sectioning or staining techniques, at approximately 40 nm resolution.


International Workshop on Patch-based Techniques in Medical Imaging | 2016

Selective Labeling: Identifying Representative Sub-volumes for Interactive Segmentation

Imanol Luengo; Mark Basham; Andrew P. French

Automatic segmentation of challenging biomedical volumes with multiple objects is still an open research field. Automatic approaches usually require a large amount of training data to be able to model the complex and often noisy appearance and structure of biological organelles and their boundaries. However, due to the variety of different biological specimens and the large volume sizes of the datasets, training data is costly to produce, error-prone and sparsely available. Here, we propose a novel Selective Labeling algorithm to overcome these challenges: an unsupervised sub-volume proposal method that identifies the most representative regions of a volume. This massively reduced subset of regions is then manually labeled and combined with an active learning procedure to fully segment the volume. Results on a publicly available EM dataset demonstrate the quality of our approach, achieving equivalent segmentation accuracy with only 5% of the training data.
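The idea of proposing a small, representative set of sub-volumes to label can be sketched with a simple 1-D k-medoids clustering, where each candidate region is summarized by one feature and only the cluster medoids are sent for manual labeling. The feature values, the clustering method, and all names are assumptions for illustration, not the paper's actual proposal algorithm:

```python
# Toy sketch: pick k representative sub-volumes by clustering a 1-D
# feature and returning the cluster medoids as the items to label.

def k_medoids_1d(values, k, iters=20):
    """Return indices of k representative items from a 1-D feature list."""
    medoids = list(range(k))  # naive initialization: first k items
    for _ in range(iters):
        # Assign every item to its nearest medoid.
        clusters = {m: [] for m in medoids}
        for i, v in enumerate(values):
            nearest = min(medoids, key=lambda m: abs(values[m] - v))
            clusters[nearest].append(i)
        # Re-pick each medoid as the member minimizing total distance.
        new_medoids = []
        for members in clusters.values():
            best = min(members, key=lambda i: sum(abs(values[i] - values[j])
                                                  for j in members))
            new_medoids.append(best)
        if sorted(new_medoids) == sorted(medoids):  # converged
            break
        medoids = new_medoids
    return sorted(medoids)

# Ten candidate sub-volumes summarized by one texture statistic;
# the volume contains two distinct appearance clusters.
features = [0.1, 0.12, 0.09, 0.11, 0.13, 0.9, 0.88, 0.91, 0.1, 0.89]
to_label = k_medoids_1d(features, k=2)
print(to_label)  # two indices, one representative per appearance cluster
```

Labeling only the medoids covers both appearance modes of the data, which is the intuition behind reaching full-volume accuracy from a small fraction of annotations.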


Machine Vision and Applications | 2016

Leaf segmentation in plant phenotyping: a collation study

Hanno Scharr; Massimo Minervini; Andrew P. French; Christian Klukas; David M. Kramer; Xiaoming Liu; Imanol Luengo; Jean Michel Pape; Gerrit Polder; Danijela Vukadinovic; Xi Yin; Sotirios A. Tsaftaris


arXiv: Computer Vision and Pattern Recognition | 2016

Hierarchical Piecewise-Constant Super-regions

Imanol Luengo; Mark Basham; Andrew P. French


British Machine Vision Conference | 2018

SurReal: enhancing Surgical simulation Realism using style transfer

Imanol Luengo; Evangello Flouty; Petros Giataganas; Piyamate Wisanuvej; Jean Nehme; Danail Stoyanov

Collaboration


Imanol Luengo's top co-authors and their affiliations.

Top Co-Authors

Michele C. Darrow
Baylor College of Medicine

Danail Stoyanov
University College London

Jean Nehme
Imperial College London

Andre Chow
Imperial College London