Arash Pourtaherian
Eindhoven University of Technology
Publications
Featured research published by Arash Pourtaherian.
International Conference on Image Processing | 2014
Arash Pourtaherian; Sveta Zinger; H.H.M. Korsten; Nenad Mihajlovic
During needle interventions such as regional anaesthesia or biopsy, it is essential to visualize the needle and its tip with respect to critical structures in the body. In this work, we propose a novel image-based needle detection technique for 3D ultrasound volumes that can improve such interventions. We present a novel application of the 3D Gabor transformation which, with an appropriate filter design, reveals needle-like structures. Furthermore, we introduce a needle tracking algorithm based on gradient descent and show that it limits both the computational complexity and the detection error. Finally, we visualize the needle on 2D cross-sections of the volume for presentation to the physician. Evaluation of our system in challenging cases shows a high detection score (up to 100%, although larger datasets are needed for confirmation) and accurate visualization.
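The 3D Gabor transformation mentioned above can be illustrated, under assumptions, as a bank of oriented Gaussian-windowed cosine kernels whose maximum response highlights elongated, needle-like structures. The sketch below is not the paper's implementation; kernel size, frequency and orientations are placeholder values.

```python
# Illustrative sketch only: a real-valued 3D Gabor kernel oriented along a chosen
# direction, applied to an ultrasound volume. All parameter values are assumptions.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel_3d(size=15, sigma=3.0, frequency=0.15, direction=(1.0, 0.0, 0.0)):
    """Gaussian envelope times a cosine carrier along the given unit direction."""
    u = np.asarray(direction, dtype=float)
    u /= np.linalg.norm(u)
    r = np.arange(size) - size // 2
    z, y, x = np.meshgrid(r, r, r, indexing="ij")
    envelope = np.exp(-(x**2 + y**2 + z**2) / (2.0 * sigma**2))
    phase = 2.0 * np.pi * frequency * (x * u[0] + y * u[1] + z * u[2])
    kernel = envelope * np.cos(phase)
    return kernel - kernel.mean()  # zero mean, so flat regions give no response

def gabor_response(volume, directions):
    """Maximum absolute filter response over a bank of orientations."""
    responses = [np.abs(fftconvolve(volume, gabor_kernel_3d(direction=d), mode="same"))
                 for d in directions]
    return np.max(np.stack(responses), axis=0)
```

A high-response map of this kind could then feed the pre-selection of candidate needle voxels, after which the needle axis and tip are localized.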
IEEE Transactions on Medical Imaging | 2017
Arash Pourtaherian; Harm J. Scholten; Lieneke Kusters; Sveta Zinger; Nenad Mihajlovic; Alexander Franciscus Kolen; Fei Zuo; Gary C. Ng; H.H.M. Korsten
Ultrasound-guided medical interventions are broadly applied in diagnostics and therapy, e.g., regional anesthesia or ablation. A guided intervention using 2-D ultrasound is challenging due to the poor instrument visibility, limited field of view, and the multi-fold coordination of the medical instrument and ultrasound plane. Recent 3-D ultrasound transducers can improve the quality of the image-guided intervention if an automated detection of the needle is used. In this paper, we present a novel method for detecting medical instruments in 3-D ultrasound data that is solely based on image processing techniques and validated on various ex vivo and in vivo data sets. In the proposed procedure, the physician places the 3-D transducer at the desired position and the image processing automatically detects the best instrument view, so that the physician can focus entirely on the intervention. Our method is based on the classification of instrument voxels using volumetric structure directions and robust approximation of the primary tool axis. A novel normalization method is proposed for the shape and intensity consistency of instruments to improve the detection. Moreover, a novel 3-D Gabor wavelet transformation is introduced and optimally designed to reveal the instrument voxels in the volume, while remaining generic to several medical instruments and transducer types. Experiments on diverse data sets, including in vivo data from patients, show that for a given transducer and instrument type, high detection accuracies are achieved, with position errors smaller than the instrument diameter and in the 0.5–1.5-mm range on average.
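For illustration, the robust approximation of the primary tool axis could be realized as a RANSAC-style line fit over the voxels classified as instrument, followed by a PCA refinement on the inliers. This is a minimal sketch with placeholder thresholds, not the authors' implementation.

```python
# Minimal sketch: robustly estimate the instrument axis from classified voxel
# coordinates. Iteration count and inlier distance are illustrative assumptions.
import numpy as np

def fit_instrument_axis(points, n_iter=500, inlier_dist=1.5, seed=0):
    """points: (N, 3) coordinates of voxels labelled as 'instrument'."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1 = points[rng.choice(len(points), size=2, replace=False)]
        direction = p1 - p0
        norm = np.linalg.norm(direction)
        if norm < 1e-6:
            continue
        direction /= norm
        # Distance of every point to the candidate line through p0.
        diff = points - p0
        dist = np.linalg.norm(diff - np.outer(diff @ direction, direction), axis=1)
        inliers = dist < inlier_dist
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refinement: principal direction of the inlier set.
    inlier_pts = points[best_inliers]
    centre = inlier_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(inlier_pts - centre, full_matrices=False)
    return centre, vt[0]  # a point on the axis and its unit direction
```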
Anaesthesia | 2017
Harm J. Scholten; Arash Pourtaherian; Nenad Mihajlovic; H.H.M. Korsten; R.A. Bouwman
Ultrasound guidance is becoming standard practice for needle‐based interventions in anaesthetic practice, such as vascular access and peripheral nerve blocks. However, difficulties in aligning the needle and the transducer can lead to incorrect identification of the needle tip, possibly damaging structures not visible on the ultrasound screen. Additional techniques specifically developed to aid alignment of needle and probe or identification of the needle tip are now available. In this scoping review, advantages and limitations of the following categories of those solutions are presented: needle guides; alterations to needle or needle tip; three‐ and four‐dimensional ultrasound; magnetism, electromagnetic or GPS systems; optical tracking; augmented (virtual) reality; robotic assistance; and automated (computerised) needle detection. Most evidence originates from phantom studies, case reports and series, with few randomised clinical trials. Improved first‐pass success and reduced performance time are the most frequently cited benefits, whereas the need for additional and often expensive hardware is the greatest limitation to widespread adoption. Novice ultrasound users seem to benefit most and great potential lies in education. Future research should focus on reporting relevant clinical parameters to learn which technique will benefit patients most in terms of success and safety.
International Ultrasonics Symposium | 2016
Arash Pourtaherian; Nenad Mihajlovic; Sveta Zinger; H.H.M. Korsten; Jinfeng Huang; Gary C. Ng
During ultrasound-guided needle interventions, low signal-to-noise ratio and poor needle visibility limit the performance of automated detection systems. This becomes even more challenging when the needle is inserted at higher angles with respect to the ultrasound probe. For very large insertion angles, the needle becomes virtually invisible in the ultrasound data and medical specialists need to find the needle indirectly from either out-of-plane or in-plane views. In this paper, we propose a novel method to automatically detect steeply inserted needles in 3D ultrasound data and visualize their 2D in-plane view to the medical specialist. Our method exploits indirect information on the presence of a needle in the volume by examining the shadow traces of structures. The proposed algorithm successfully detects the needle plane with high accuracy for all ten measured datasets. Furthermore, the full-length needle and its tip are always visible in the extracted scan planes. The proposed method is efficient and robust to noise and artifacts, thereby strongly supporting the clinical intervention and eliminating the need for external tracking devices.
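The shadow-trace idea can be illustrated in a very reduced form: a steep needle may be nearly invisible, but it attenuates the beams passing through it, so the columns beneath it appear darker. The sketch below only flags dark beam columns; the axis convention, tail fraction and percentile are assumptions, and the paper's actual algorithm differs.

```python
# Illustrative sketch, not the paper's algorithm: find candidate shadow traces by
# measuring how dark each beam column becomes in its deeper part.
import numpy as np

def shadow_trace_map(volume, depth_axis=0, tail_fraction=0.4):
    """Mean intensity of the deepest part of every beam column."""
    vol = np.moveaxis(volume.astype(float), depth_axis, 0)
    tail = vol[int((1.0 - tail_fraction) * vol.shape[0]):]
    return tail.mean(axis=0)  # 2D map over the lateral/elevational directions

def candidate_shadow_mask(volume, depth_axis=0, percentile=10):
    """Columns darker than the given percentile are candidate needle shadows."""
    trace = shadow_trace_map(volume, depth_axis)
    return trace < np.percentile(trace, percentile)
```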
Proceedings of SPIE | 2015
Arash Pourtaherian; Sveta Zinger; H.H.M. Korsten; Nenad Mihajlovic
Ultrasound-guided needle interventions are widely practiced in medical diagnostics and therapy, e.g. for biopsy guidance, regional anesthesia or brachytherapy. Needle guidance using 2D ultrasound can be very challenging due to the poor needle visibility and the limited field of view. Since 3D ultrasound transducers are becoming more widely used, needle guidance can be improved and simplified with appropriate computer-aided analyses. In this paper, we compare two state-of-the-art 3D needle detection techniques: a technique based on line filtering from the literature and a system employing the Gabor transformation. Both algorithms use supervised classification to pre-select candidate needle voxels in the volume and then fit a model of the needle to the selected voxels. The major differences between the two approaches lie in how the feature vectors for classification are extracted and in the criterion used for fitting. We evaluate the performance of the two techniques using manually annotated ground truth in several ex-vivo situations of varying complexity, containing three different needle types at various insertion angles. This extensive evaluation provides a better understanding of the limitations and advantages of each technique under different acquisition conditions, which guides the development of improved techniques for more reliable and accurate localization. Benchmarking results show that the Gabor features are better at distinguishing the needle voxels in all datasets. Moreover, the complete processing chain of the Gabor-based method outperforms line filtering in accuracy and stability of the detection results.
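The shared structure of the compared pipelines, per-voxel features fed to a supervised classifier and scored against annotated ground truth, can be sketched as below. The feature extraction itself (line filtering or Gabor) is outside the snippet, and the classifier and scoring choices are assumptions rather than the paper's exact protocol.

```python
# Hypothetical evaluation sketch: benchmark two per-voxel feature sets with the
# same supervised classifier against manually annotated labels.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def compare_feature_sets(features_line, features_gabor, labels, cv=5):
    """features_*: (N, D) per-voxel descriptors; labels: (N,) needle vs. background."""
    clf = LinearSVC()
    f1_line = cross_val_score(clf, features_line, labels, cv=cv, scoring="f1").mean()
    f1_gabor = cross_val_score(clf, features_gabor, labels, cv=cv, scoring="f1").mean()
    return {"line_filtering": f1_line, "gabor": f1_gabor}
```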
Medical Image Computing and Computer-Assisted Intervention | 2017
Arash Pourtaherian; Farhad Ghazvinian Zanjani; Sveta Zinger; Nenad Mihajlovic; Gary C. Ng; H.H.M. Korsten
Successful automated detection of short needles during an intervention is necessary to allow the physician to identify and correct any misalignment of the needle and the target at an early stage, which reduces needle passes and improves health outcomes. In this paper, we present a novel approach to detect needle voxels in a 3D ultrasound volume with high precision using convolutional neural networks. Each voxel is classified from locally extracted raw data of three orthogonal planes centered on it. We propose a bootstrap re-sampling approach to enhance training on our highly imbalanced data. The proposed method successfully detects 17G and 22G needles with a single trained network, showing a robust generalized approach. Extensive ex-vivo evaluations on 3D ultrasound datasets of chicken breast show a 25% increase in F1-score over the state-of-the-art feature-based method. Furthermore, very short needles inserted only 5 mm into the volume are detected with tip localization errors of less than 0.5 mm, indicating that the tip is always visible in the detected plane.
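A minimal sketch of the tri-planar idea, assuming PyTorch: each voxel is represented by three orthogonal patches centred on it, stacked as channels and classified by a small CNN. The patch size and layer configuration are illustrative, not the architecture reported in the paper; the described bootstrap re-sampling would be applied when composing the training batches.

```python
# Sketch of orthogonal-plane voxel classification (illustrative network only).
import torch
import torch.nn as nn

def triplanar_patches(volume, z, y, x, half=16):
    """volume: 3D tensor; returns a (3, 2*half, 2*half) stack of orthogonal slices."""
    axial    = volume[z, y - half:y + half, x - half:x + half]
    coronal  = volume[z - half:z + half, y, x - half:x + half]
    sagittal = volume[z - half:z + half, y - half:y + half, x]
    return torch.stack([axial, coronal, sagittal])

class TriPlanarNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, 2),
        )

    def forward(self, x):  # x: (batch, 3, 32, 32)
        return self.classifier(self.features(x))
```

Classifying from three 2D planes keeps much of the local 3D context while avoiding the cost of full 3D convolutions, which is the usual motivation for tri-planar approaches.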
International Conference on Image Processing | 2013
Arash Pourtaherian; Rob Wijnhoven
We present a tracking framework in which we learn a HOG-based object detector in the first video frame and use this detector to localize the object in subsequent frames. We improve the tracking on the following three points. First, an occlusion-handling algorithm exploits discriminative information from the detector by dividing the object bounding box into patches and comparing each patch to the object model. Second, a drift-correction technique uses descriptive information of the object by calculating the similarity between the object in the previous frame and its shifted versions in the current frame. Third, a stochastic learning algorithm updates the object detector using a single object and a single background sample for selected frames only. Experiments with benchmark sequences show that the proposed tracker outperforms state-of-the-art methods on several sequences and has the smallest average location error.
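For illustration only (not the paper's tracker), the first-frame learning step could take HOG descriptors from the annotated object box and from shifted background boxes and feed them to a linear SVM; the box handling and sampling strategy below are assumptions.

```python
# Sketch: learn a HOG + linear-SVM detector from the first frame. All negative
# boxes are assumed to have the same size as the object box, so every HOG
# descriptor has the same length.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def crop(frame, box):
    x, y, w, h = box
    return frame[y:y + h, x:x + w]

def hog_descriptor(patch):
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_first_frame_detector(frame, object_box, negative_boxes):
    """frame: grayscale image; boxes: (x, y, w, h) tuples."""
    X = [hog_descriptor(crop(frame, object_box))]
    X += [hog_descriptor(crop(frame, b)) for b in negative_boxes]
    y = [1] + [0] * len(negative_boxes)
    return LinearSVC().fit(np.array(X), np.array(y))
```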
Machine Vision and Applications | 2018
Yue Sun; Caifeng Shan; Tao Tan; Xi Long; Arash Pourtaherian; Sveta Zinger
Infants are particularly vulnerable to the effects of pain and discomfort, which can lead to abnormal brain development, yielding long-term adverse neurodevelopmental outcomes. In this study, we propose a video-based method for automated detection of their discomfort. The infant face is first detected and normalized. A two-phase classification workflow is then employed, where Phase 1 is subject-independent, and Phase 2 is subject-dependent. Phase 1 derives geometric and appearance features, while Phase 2 incorporates facial landmark-based template matching. An SVM classifier is finally applied to video frames to recognize facial expressions of comfort or discomfort. The method is evaluated using videos from 22 infants. Experimental results show an AUC of 0.87 for the subject-independent phase and 0.97 for the subject-dependent phase, which is promising for clinical use.
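A hypothetical sketch of the final classification stage, an SVM on per-frame facial feature vectors scored by AUC, is given below; feature extraction and face normalization are outside its scope, and a subject-independent evaluation would split by infant rather than by frame.

```python
# Illustrative sketch of per-frame comfort/discomfort classification with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def train_discomfort_classifier(features, labels, seed=0):
    """features: (N, D) per-frame descriptors; labels: 1 = discomfort, 0 = comfort."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.3, stratify=labels, random_state=seed)
    clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    return clf, auc
```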
Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling | 2018
Hongxu Yang; Arash Pourtaherian; Caifeng Shan; Alexander Franciscus Kolen
The use of three-dimensional ultrasound (3D US) during image-guided interventions, e.g. cardiac catheterization, has increased in recent years. To accurately and consistently detect and track catheters or guidewires in the US image during the intervention, additional training of the sonographer or physician is needed. As a result, image-based catheter detection can help the sonographer interpret the position and orientation of a catheter in the 3D US volume. However, due to the limited spatial resolution of 3D cardiac US and the complex anatomical structures inside the heart, image-based catheter detection is challenging. In this paper, we study 3D image features for image-based catheter detection using supervised learning methods. To better describe the catheter in 3D US, we extend the Frangi vesselness feature into a multi-scale Objectness feature and a Hessian element feature, which extract more discriminative information about catheter voxels in a 3D US volume. In addition, we introduce a multi-scale statistical 3D feature to enrich and enhance the information for voxel-based classification. Extensive experiments on several in-vitro and ex-vivo datasets show that our proposed features improve the precision to at least 69% when compared to the traditional multi-scale Frangi features (from 45% to 76%, at a high recall rate of 75%). As for clinical application, the high accuracy of voxel-based classification enables more robust catheter detection in complex anatomical structures.
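As an illustration of multi-scale Hessian-based voxel features (in the spirit of, but not identical to, the Objectness and Hessian-element features described above), the sketch below computes magnitude-sorted Hessian eigenvalues at several Gaussian scales; the scale values are assumptions.

```python
# Illustrative multi-scale Hessian-eigenvalue features for 3D US voxels.
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues_3d(volume, sigma):
    """Eigenvalues of the Gaussian-scale Hessian at every voxel, sorted by magnitude."""
    v = volume.astype(float)
    d = {}
    for i, j in [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]:
        order = [0, 0, 0]
        order[i] += 1
        order[j] += 1
        d[(i, j)] = gaussian_filter(v, sigma, order=order)
    H = np.stack([
        np.stack([d[(0, 0)], d[(0, 1)], d[(0, 2)]], axis=-1),
        np.stack([d[(0, 1)], d[(1, 1)], d[(1, 2)]], axis=-1),
        np.stack([d[(0, 2)], d[(1, 2)], d[(2, 2)]], axis=-1),
    ], axis=-2)                              # (..., 3, 3) Hessian per voxel
    eig = np.linalg.eigvalsh(H)              # ascending eigenvalues
    idx = np.argsort(np.abs(eig), axis=-1)
    return np.take_along_axis(eig, idx, axis=-1)

def multiscale_hessian_features(volume, sigmas=(1.0, 2.0, 3.0)):
    """Concatenate eigenvalues over scales into a per-voxel descriptor."""
    return np.concatenate([hessian_eigenvalues_3d(volume, s) for s in sigmas], axis=-1)
```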
Medical Imaging 2018: Image-Guided Procedures, Robotic Interventions, and Modeling | 2018
Farhad Ghazvinian Zanjani; Arash Pourtaherian; Xikai Tang; Svitlana Zinger; Nenad Mihajlovic; Gary C. Ng; H.H.M. Korsten
3D ultrasound (US) transducers will improve the quality of image-guided medical interventions if automated detection of the needle becomes possible. Image-based detection of the needle is challenging due to the presence of other echogenic structures in the acquired data, the inconsistent visibility of needle parts, and the low quality of US imaging. Since the currently applied approaches for needle detection classify each voxel individually, they do not consider the global relations between voxels. In this work, we introduce coherent needle labeling by using dense conditional random fields over a volume, along with 3D space-frequency features. Our approach includes long-distance dependencies between voxel pairs according to their similarity in the feature space and their spatial distance. This post-processing stage leads to better label assignment of the volume voxels and a more compact and coherent segmented region. Our ex-vivo experiments, based on measuring the F1, F2 and IoU scores, show that performance improves by a significant 10-20% compared with using only a linear SVM as the baseline for voxel classification.
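A sketch of dense-CRF post-processing over a volume is shown below, assuming the pydensecrf package; the pairwise feature design and compatibility values are placeholders and differ from the 3D space-frequency features used in the paper.

```python
# Sketch: refine per-voxel needle probabilities with a dense CRF that couples
# nearby voxels (spatial term) and voxels with similar intensity (appearance term).
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import (unary_from_softmax, create_pairwise_gaussian,
                              create_pairwise_bilateral)

def crf_refine(volume, needle_prob, n_iters=5):
    """volume: 3D US intensities; needle_prob: per-voxel probability of 'needle'."""
    p = np.clip(needle_prob, 1e-6, 1.0 - 1e-6)
    probs = np.stack([1.0 - p, p])                              # (2, Z, Y, X)
    crf = dcrf.DenseCRF(int(np.prod(volume.shape)), 2)
    crf.setUnaryEnergy(unary_from_softmax(probs))
    # Spatial smoothness: nearby voxels prefer the same label.
    crf.addPairwiseEnergy(
        create_pairwise_gaussian(sdims=(3, 3, 3), shape=volume.shape), compat=3)
    # Appearance term: voxels with similar intensity prefer the same label.
    crf.addPairwiseEnergy(
        create_pairwise_bilateral(sdims=(10, 10, 10), schan=(20,),
                                  img=volume.astype(np.float32), chdim=-1), compat=5)
    q = crf.inference(n_iters)
    return np.argmax(q, axis=0).reshape(volume.shape)           # refined labels
```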