Erik Smistad
Norwegian University of Science and Technology
Publications
Featured research published by Erik Smistad.
Medical Image Analysis | 2014
Rina Dewi Rudyanto; Sjoerd Kerkstra; Eva M. van Rikxoort; Catalin I. Fetita; Pierre-Yves Brillet; Christophe Lefevre; Wenzhe Xue; Xiangjun Zhu; Jianming Liang; Ilkay Oksuz; Devrim Unay; Kamuran Kadipaşaoğlu; Raúl San José Estépar; James C. Ross; George R. Washko; Juan-Carlos Prieto; Marcela Hernández Hoyos; Maciej Orkisz; Hans Meine; Markus Hüllebrand; Christina Stöcker; Fernando Lopez Mir; Valery Naranjo; Eliseo Villanueva; Marius Staring; Changyan Xiao; Berend C. Stoel; Anna Fabijańska; Erik Smistad; Anne C. Elster
The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer-aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time-consuming, any real-world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at the International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.
Journal of Real-time Image Processing | 2015
Erik Smistad; Anne C. Elster; Frank Lindseth
The Gradient Vector Flow (GVF) is a feature-preserving spatial diffusion of gradients. It is used extensively in several image segmentation and skeletonization algorithms. Calculating the GVF is slow, as many iterations are needed to reach convergence. However, each pixel or voxel can be processed in parallel within each iteration. This makes GVF well suited for execution on graphics processing units (GPUs). In this paper, we present a highly optimized parallel GPU implementation of GVF written in OpenCL. We have investigated memory access optimizations for GPUs, such as using texture memory, shared memory and a compressed storage format. Our results show that the algorithm benefits considerably from the texture memory and the compressed storage format on the GPU. Shared memory, on the other hand, makes the calculations slower, with or without the other optimizations, because of increased kernel complexity and synchronization. With these optimizations, our implementation can process large 2D images (512²) in real-time and 3D images (256³) in only a few seconds on modern GPUs.
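The iterative scheme underlying this computation can be sketched serially. Below is a minimal NumPy version of the classic Euler-style GVF update (a generic reference implementation, not the paper's OpenCL code); `mu` is the regularization weight, and each iteration updates every pixel independently of the others, which is exactly what makes the computation parallelizable on a GPU.

```python
import numpy as np

def gradient_vector_flow(f, mu=0.2, iterations=100):
    """Serial reference sketch of the Euler-style GVF diffusion.
    f: 2D edge map. Returns the diffused vector field (u, v).
    On a GPU, each pixel update within one iteration runs in parallel."""
    fy, fx = np.gradient(f.astype(np.float64))
    mag2 = fx**2 + fy**2          # squared gradient magnitude
    u, v = fx.copy(), fy.copy()   # initialize the field with the gradient
    for _ in range(iterations):
        # 4-neighbour Laplacian (periodic boundaries for brevity)
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        # Diffuse, but stay close to the original gradient where it is strong
        u += mu * lap_u - (u - fx) * mag2
        v += mu * lap_v - (v - fy) * mag2
    return u, v
```

The data-dependent term `(u - fx) * mag2` is what makes the diffusion feature-preserving: where the gradient magnitude is large, the field is anchored to the original gradient.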
International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis | 2016
Erik Smistad; Lasse Lovstakken
Deep convolutional neural networks have achieved great results on image classification problems. In this paper, a new method using a deep convolutional neural network for detecting blood vessels in B-mode ultrasound images is presented. Automatic blood vessel detection may be useful in medical applications such as deep venous thrombosis detection, anesthesia guidance and catheter placement. The proposed method is able to determine the position and size of the vessels in images in real-time. 12,804 subimages of the femoral region from 15 subjects were manually labeled. Leave-one-subject-out cross-validation was used, giving an average accuracy of 94.5%, a major improvement over previous methods, which had an accuracy of 84% on the same dataset. The method was also validated on a dataset of the carotid artery to show that it can generalize to blood vessels in other regions of the body. The accuracy on this dataset was 96%.
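The evaluation protocol described above can be illustrated with a short sketch. The helper below is a generic leave-one-subject-out loop (hypothetical function names, not the paper's code): each fold holds out all subimages from one subject, so training and test data never share a subject, which avoids optimistic accuracy estimates from near-duplicate images.

```python
import numpy as np

def leave_one_subject_out_accuracy(features, labels, subjects, train_fn, predict_fn):
    """Average accuracy over folds where each fold holds out one subject.
    train_fn(X, y) -> model; predict_fn(model, X) -> predicted labels."""
    accuracies = []
    for s in np.unique(subjects):
        test = subjects == s                     # all samples from subject s
        model = train_fn(features[~test], labels[~test])
        pred = predict_fn(model, features[test])
        accuracies.append(np.mean(pred == labels[test]))
    return float(np.mean(accuracies))
```

Averaging per-fold (per-subject) accuracies, as the reported 94.5% suggests, weights each subject equally regardless of how many subimages it contributed.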
IEEE Transactions on Medical Imaging | 2016
Erik Smistad; Frank Lindseth
The goal is to create an assistant for ultrasound-guided femoral nerve block. By segmenting and visualizing the important structures, such as the femoral artery, we hope to improve the success of these procedures. This article is the first step towards this goal and presents novel real-time methods for identifying and reconstructing the femoral artery, and for registering a model of the surrounding anatomy to the ultrasound images. The femoral artery is modelled as an ellipse. The artery is first detected by a novel algorithm which initializes the artery tracking. This algorithm is completely automatic and requires no user interaction. Artery tracking is achieved with a Kalman filter. The 3D artery is reconstructed in real-time with a novel algorithm and a tracked ultrasound probe. A mesh model of the surrounding anatomy was created from a CT dataset. Registration of this model is achieved by landmark registration using the center points from the artery tracking and the femoral artery centerline of the model. The artery detection method was able to automatically detect the femoral artery and initialize the tracking in all 48 ultrasound sequences. The tracking algorithm achieved an average Dice similarity coefficient of 0.91, an absolute distance of 0.33 mm, and a Hausdorff distance of 1.05 mm. The mean registration error was 2.7 mm, while the average maximum error was 12.4 mm. The average runtimes were measured to be 38, 8, 46 and 0.2 milliseconds for the artery detection, tracking, reconstruction and registration methods, respectively.
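As a rough illustration of the tracking step, the sketch below is a minimal linear Kalman filter over an ellipse state [cx, cy, a, b] (center and semi-axes) with an identity motion model and assumed noise parameters. This is a generic textbook filter, not the paper's exact state model; each frame's fitted ellipse is treated as a noisy measurement that is blended with the prediction.

```python
import numpy as np

class EllipseKalman:
    """Minimal linear Kalman filter over an ellipse state [cx, cy, a, b].
    Hypothetical noise values; the state is assumed locally constant."""
    def __init__(self, initial_state, process_var=1.0, meas_var=4.0):
        self.x = np.asarray(initial_state, dtype=float)  # state estimate
        self.P = np.eye(4) * 10.0                        # state covariance
        self.Q = np.eye(4) * process_var                 # process noise
        self.R = np.eye(4) * meas_var                    # measurement noise

    def update(self, measurement):
        # Predict: identity motion model, uncertainty grows by Q
        P_pred = self.P + self.Q
        # Correct with the ellipse fitted to the current frame
        K = P_pred @ np.linalg.inv(P_pred + self.R)      # Kalman gain
        self.x = self.x + K @ (np.asarray(measurement, dtype=float) - self.x)
        self.P = (np.eye(4) - K) @ P_pred
        return self.x
```

A larger `meas_var` relative to `process_var` yields smoother but more sluggish tracking, which is the usual trade-off when filtering per-frame segmentations.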
PLOS ONE | 2015
Pall Jens Reynisson; Marta Scali; Erik Smistad; Erlend Fagertun Hofstad; Håkon Olav Leira; Frank Lindseth; Toril A. Nagelhus Hernes; Tore Amundsen; Hanne Sorger; Thomas Langø
Introduction: Our motivation is increased bronchoscopic diagnostic yield and optimized preparation for navigated bronchoscopy. In navigated bronchoscopy, virtual 3D airway visualization is often used to guide a bronchoscopic tool to peripheral lesions, synchronized with the real-time video bronchoscopy. The visualization used during navigated bronchoscopy, as well as the segmentation time and methods, differ between systems. Time consumption and logistics are two essential aspects that need to be optimized when integrating such technologies in the interventional room. We compared three different approaches to obtaining airway centerlines and surfaces.
Method: CT lung datasets of 17 patients were processed in Mimics (Materialise, Leuven, Belgium), which provides a Basic module (MBM) and a Pulmonology module (beta version) (MPM), in OsiriX (Pixmeo, Geneva, Switzerland), and with our Tube Segmentation Framework (TSF) method. Both MPM and TSF were evaluated against a reference segmentation. Automatic and manual settings allowed us to segment the airways and obtain 3D models as well as the centerlines in all datasets. We compared the different procedures by user interaction, such as the number of clicks needed to process the data, and by quantitative measures of segmentation and centerline quality, such as total branch length, number of branches, number of generations, and volume of the 3D model.
Results: The TSF method was the most automatic, while the Mimics Pulmonology Module (MPM) and the Mimics Basic Module (MBM) resulted in the highest number of branches. MPM demanded the fewest clicks to process the data. We found that the freely available OsiriX was less accurate than the other methods regarding segmentation results. However, the TSF method provided results fastest in terms of number of clicks. The MPM was able to find the highest number of branches and generations. The TSF, on the other hand, is fully automatic and provides the user with both a segmentation of the airways and the centerlines. Averages and standard deviations of the reference segmentation comparison for MPM and TSF correspond to the literature.
Conclusion: The TSF is able to segment the airways and extract the centerlines in one single step. The number of branches found is lower for the TSF method than in Mimics. OsiriX demands the highest number of clicks to process the data, its segmentation is often sparse, and extracting the centerline requires another software system. Two of the software systems, the TSF method and the MPM, performed satisfactorily with respect to preprocessing CT images for navigated bronchoscopy. According to the reference segmentation, both TSF and MPM are comparable with other segmentation methods. The level of automation and the resulting high number of branches, plus the fact that both the centerline and the surface of the airways were extracted, are requirements we considered particularly important. The in-house method has the advantage of being an integrated part of a navigation platform for bronchoscopy, whilst the other methods can be considered preprocessing tools for a navigation system.
International MICCAI Workshop on Computational and Clinical Challenges in Abdominal Imaging | 2014
Erik Smistad; Reidar Brekken; Frank Lindseth
Tube detection filters (TDFs) are useful for segmentation and centerline extraction of tubular structures such as blood vessels and airways in medical images. Most TDFs assume that the cross-sectional profile of the tubular structure is circular. This assumption is not always correct, for instance in the case of abdominal aortic aneurysms (AAAs). Another problem with several TDFs is that they give a false response at strong edges. In this paper, a new TDF is proposed and compared to other TDFs on synthetic and clinical datasets. The results show that the proposed TDF is able to detect large non-circular tubular structures such as AAAs and avoid false positives.
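For context, the circular-cross-section assumption mentioned above can be made concrete with a classic eigenvalue-based TDF in the spirit of Frangi-style vesselness (this is the generic baseline, not the filter proposed in the paper): for a bright tube, the Hessian has two large negative eigenvalues across the cross-section and one near zero along the tube axis.

```python
import numpy as np
from numpy.linalg import eigvalsh

def circular_tdf_response(hessian):
    """Generic circular-cross-section tube likelihood for one 3D voxel.
    hessian: 3x3 symmetric matrix of local second derivatives.
    A bright circular tube gives |l1| ~ 0 (along the axis) and
    l2 ~ l3 << 0 (across the circular cross-section)."""
    l1, l2, l3 = sorted(eigvalsh(hessian), key=abs)  # |l1| <= |l2| <= |l3|
    if l2 >= 0 or l3 >= 0:
        return 0.0  # not a bright tubular voxel
    circularity = abs(l2) / abs(l3)   # ~1 for a circular cross-section
    tubeness = 1.0 - abs(l1) / abs(l3)  # ~1 when the axis eigenvalue is ~0
    return circularity * tubeness
```

The `circularity` factor is exactly what penalizes non-circular cross-sections such as AAAs, and strong edges can produce spurious eigenvalue patterns, which are the two weaknesses the paper's proposed TDF addresses.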
Journal of Real-time Image Processing | 2016
Erik Smistad; Frank Lindseth
Gradient vector flow (GVF) is a feature-preserving spatial diffusion of image gradients. It was introduced to overcome the limited capture range of traditional active contour segmentation. However, the original iterative solver for GVF, using Euler's method, converges very slowly; thus, many iterations are needed to achieve the desired capture range. Several groups have investigated the use of graphics processing units (GPUs) to accelerate the GVF computation. Still, this does not reduce the number of iterations needed. Multigrid methods, on the other hand, have been shown to provide a much better capture range using considerably fewer iterations. However, non-GPU implementations of the multigrid method are not as fast as the Euler method executed on the GPU. In this paper, a novel GPU implementation of a multigrid solver for GVF written in OpenCL is presented. The results show that this implementation converges and provides a better capture range about 2–5 times faster than the conventional iterative GVF solver on the GPU.
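The multigrid idea, smoothing on the fine grid, solving the residual equation on a coarser grid, and prolonging the correction back, can be sketched on a 1D model problem. The code below is a generic two-grid V-cycle for -u'' = f with weighted Jacobi smoothing (an illustrative model problem, not the paper's GVF solver), assuming an odd number of grid points and zero boundary values.

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted Jacobi smoothing for -u'' = f (in place, zero boundaries).
    The weight w = 2/3 damps high-frequency error efficiently."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid_vcycle(u, f, h):
    """One V-cycle: pre-smooth, solve the residual equation on a grid with
    twice the spacing, prolong the correction back, then post-smooth."""
    u = jacobi(u, f, h, sweeps=3)                  # pre-smoothing
    r = np.zeros_like(u)                           # residual r = f + u''
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    rc = r[::2].copy()                             # restrict by injection
    ec = jacobi(np.zeros_like(rc), rc, 2 * h, sweeps=50)  # coarse "solve"
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolong
    u += e                                         # apply coarse correction
    return jacobi(u, f, h, sweeps=3)               # post-smoothing
```

The payoff is that smooth error components, which plain Euler/Jacobi iteration removes very slowly, are cheap to eliminate on the coarse grid, which is why multigrid needs far fewer fine-grid iterations.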
Proceedings of SPIE | 2017
Frank Lindseth; Marte Nordrik Hallan; Martin Schiller Tønnessen; Erik Smistad; Cecilie Våpenstad
Introduction: Medical imaging technology has revolutionized health care over the past 30 years. This is especially true for ultrasound, a modality that an increasing number of medical personnel are starting to use. Purpose: The purpose of this study was to develop and evaluate a platform for improving medical image interpretation skills regardless of time and place, and without the need for expensive imaging equipment or a patient to scan. Methods, results and conclusions: A stable web application with the functionality needed for image interpretation training and evaluation has been implemented. The system has been extensively tested internally and used during an international course in ultrasound-guided neurosurgery. The web application was well received and achieved very good System Usability Scale (SUS) scores.
DLMIA/ML-CDS@MICCAI | 2018
Andreas Ostvik; Erik Smistad; Torvald Espeland; Erik Andreas Rye Berg; Lasse Lovstakken
Recent studies in the field of deep learning suggest that motion estimation can be treated as a learnable problem. In this paper we propose a pipeline for functional imaging in echocardiography consisting of four central components: (i) classification of cardiac view, (ii) semantic partitioning of the left ventricle (LV) myocardium, (iii) regional motion estimation and (iv) fusion of measurements. A U-Net type of convolutional neural network (CNN) was developed to classify muscle tissue, which was partitioned into a semantic measurement kernel based on LV length and ventricular orientation. Dense tissue motion was predicted using stacked U-Net architectures with image warping of intermediate flow, designed to tackle variable displacements. Training was performed on a mixture of real and synthetic data. The resulting segmentation and motion estimates were fused in a Kalman filter and used as the basis for measuring global longitudinal strain. For reference, 2D ultrasound images from 21 subjects were acquired using a GE Vivid system. The data were analyzed by two specialists using a semi-automatic tool for longitudinal function estimates in a commercial system, and further compared to the output of the proposed method. Qualitative assessment showed deformation trends comparable to those of the clinical analysis software. The average deviation of the global longitudinal strain was (\(-0.6\pm 1.6\))% for the apical four-chamber view. The system was implemented in TensorFlow and works in an end-to-end fashion without any ad hoc tuning. Using a modern graphics processing unit, the average inference time is estimated at (\(115\pm 3\)) ms per frame.
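The strain measurement at the end of the pipeline reduces to a simple formula: longitudinal strain is the relative change of the myocardial contour length with respect to its end-diastolic length, and global longitudinal strain (GLS) is conventionally reported as the peak (most negative) value over the cycle. A minimal sketch (a hypothetical helper, not the paper's code):

```python
def global_longitudinal_strain(lengths, end_diastolic_index=0):
    """Strain curve in percent from LV contour lengths, one per frame.
    lengths[end_diastolic_index] is the reference length L0;
    strain(t) = 100 * (L(t) - L0) / L0. Returns (curve, peak value)."""
    l0 = lengths[end_diastolic_index]
    curve = [100.0 * (l - l0) / l0 for l in lengths]
    return curve, min(curve)  # min = most negative = peak shortening
```

For example, a contour that shortens from 100 mm at end-diastole to 90 mm at end-systole yields a peak strain of -10%, which is in the typical range for healthy LV function.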
International Ultrasonics Symposium | 2017
Andreas Ostvik; Erik Smistad; Svein Arne Aase; Bjørn Olav Haugen; Lasse Lovstakken
Echocardiograms are acquired from standard views to ensure correct assessment of cardiac function. There is an increasing use of quantitative tools for which specific views are required. Further, the number of non-expert users of echocardiography is increasing, creating a need for quality assurance during imaging. The aim of this project is to develop automatic and robust real-time classification of cardiac views based on 2D B-mode images.