Erik Steen
Norwegian Institute of Technology
Publications
Featured research published by Erik Steen.
IEEE Transactions on Medical Imaging | 1994
Erik Steen; Bjorn Olstad
The authors explore the application of volume rendering in medical ultrasonic imaging. Several volume rendering methods have been developed for X-ray computed tomography (X-CT), magnetic resonance imaging (MRI) and positron emission tomography (PET). Limited research has been done on applications of volume rendering techniques in medical ultrasound imaging because of a general lack of adequate equipment for 3D acquisitions. Severe noise sources and other limitations in the imaging system make volume rendering of ultrasonic data a challenge compared to rendering of MRI and X-CT data. Rendering algorithms that rely on an initial classification of the data into different tissue categories have been developed for high quality X-CT and MR-data. So far, there is a lack of general and reliable methods for tissue classification in ultrasonic imaging. The authors focus on volume rendering methods which are not dependent on any classification into different tissue categories. Instead, features are extracted from the original 3D data-set, and projected onto the view plane. The authors found that some of these methods may give clinically useful information which is very difficult to get from ordinary 2D ultrasonic images, and in some cases renderings with very fine structural details. The authors have applied the methods to 3D ultrasound images from fetal examinations. The methods are now in use as clinical tools at the National Center of Fetal Medicine in Trondheim, Norway.
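As a rough illustration of the classification-free feature projection described above, here is a minimal numpy sketch; the function name, projection modes, and parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_features(volume, axis=0, mode="max"):
    """Project a 3D ultrasound volume onto a view plane without any
    prior tissue classification (hypothetical illustration).

    volume : 3D numpy array of scan-converted intensities
    axis   : viewing direction (rays run along this axis)
    mode   : 'max'  -> maximum intensity projection
             'mean' -> average intensity projection
             'grad' -> gradient magnitude accumulated along each ray
    """
    if mode == "max":
        return volume.max(axis=axis)
    if mode == "mean":
        return volume.mean(axis=axis)
    if mode == "grad":
        gx, gy, gz = np.gradient(volume.astype(float))
        grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
        return grad_mag.sum(axis=axis)
    raise ValueError(f"unknown mode: {mode}")

# Example: project a synthetic 64^3 volume along the depth axis.
vol = np.random.rand(64, 64, 64)
image = project_features(vol, axis=0, mode="max")
print(image.shape)  # (64, 64)
```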
Ultrasound in Medicine and Biology | 1999
Sevald Berg; Hans Torp; Ditlef Martens; Erik Steen; Stein Samstad; Inge Høivik; Bjorn Olstad
In this paper, we present a new method for simple acquisition of dynamic three-dimensional (3-D) ultrasound data. We used a magnetic position sensor device attached to the ultrasound probe for spatial location of the probe, which was slowly tilted in the transthoracic scanning position. The 3-D data were recorded in 10-20 s, and the analysis was performed on an external PC within 2 min after transferring the raw digital ultrasound data directly from the scanner. The spatial and temporal resolutions of the reconstruction were evaluated, and were superior to video-based 3-D systems. Examples of volume reconstructions with better than 7 ms temporal resolution are given. The raw data with Doppler measurements were used to reconstruct both blood and tissue velocity volumes. The velocity estimates were available for optimal visualization and for quantitative analysis. The freehand data reconstruction accuracy was tested by volume estimation of balloon phantoms, giving high correlation with true volumes. Results show in vivo 3-D reconstruction and visualization of mitral and aortic valve morphology and blood flow, and myocardial tissue velocity. We conclude that it was possible to construct multimodality 3-D data in a limited region of the human heart within one respiration cycle, with reconstruction errors smaller than the resolution of the original ultrasound beam, and with a temporal resolution of up to 150 frames per second.
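A minimal sketch of how a position-sensed 2D frame could be placed into a 3D voxel grid during freehand reconstruction; the coordinate conventions, array layout, and function names are assumptions, not the paper's implementation.

```python
import numpy as np

def insert_frame(volume, frame, rotation, translation, voxel_size):
    """Place one 2D ultrasound frame into a 3D voxel grid, a minimal
    freehand-reconstruction sketch (hypothetical names and layout).

    volume      : 3D numpy array accumulating reconstructed intensities
    frame       : 2D numpy array (rows = depth, cols = lateral)
    rotation    : 3x3 rotation matrix from the position sensor
    translation : 3-vector probe position in volume coordinates (mm)
    voxel_size  : edge length of a voxel (mm)
    """
    rows, cols = frame.shape
    for r in range(rows):
        for c in range(cols):
            # Pixel position in the scan plane (x lateral, y depth, z = 0).
            p_plane = np.array([c, r, 0.0])
            p_world = rotation @ p_plane + translation
            i, j, k = np.round(p_world / voxel_size).astype(int)
            if (0 <= i < volume.shape[0] and
                    0 <= j < volume.shape[1] and
                    0 <= k < volume.shape[2]):
                # Keep the strongest contribution per voxel.
                volume[i, j, k] = max(volume[i, j, k], frame[r, c])
```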
Computerized Medical Imaging and Graphics | 1995
Marit Holden; Erik Steen; Arvid Lundervold
In this study we focus on the problem of segmentation and visualization of soft tissue structures in three-dimensional (3D) magnetic resonance (MR) imaging. We introduce a classification method which combines a recently proposed contour detection algorithm with Haslett's contextual classification method extended to 3D. This classification method is used in the classification step of a rendering model suggested by Drebin et al. for visualizing normal and pathological tissue structures in the brain. We evaluate the combination of these two methodologies, and identify some problems which have to be solved in order to develop a clinically useful tool.
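Haslett's method itself is not reproduced here, but the following toy sketch illustrates the general idea behind 3D contextual classification: each voxel is relabelled to the class that dominates its 3×3×3 neighbourhood, so spatial context overrides isolated voxel-wise decisions.

```python
import numpy as np
from scipy import ndimage

def contextual_relabel(labels, n_classes):
    """Toy contextual classification: assign each voxel the class that is
    most frequent in its 3x3x3 neighbourhood (not Haslett's method)."""
    counts = np.stack([
        ndimage.uniform_filter((labels == c).astype(float), size=3)
        for c in range(n_classes)
    ])
    return counts.argmax(axis=0)
```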
Medical Imaging 1994: Image Processing | 1994
Erik Steen; Bjoern Olstad
In this paper we develop a strategy for scale-space filtering and boundary detection in medical ultrasonic imaging. The strategy integrates a signal model for displayed ultrasonic images with nonlinear anisotropic diffusion. The usefulness of the strategy is demonstrated for applications in volume rendering and automatic contour detection. The discrete implementation of anisotropic diffusion is based on a minimal nonlinear basis filter which is iterated on the input image. The filtering scheme involves selection of a threshold parameter which defines the overall noise level and the magnitude of gradients to be preserved. In displayed ultrasonic images the speckle noise is assumed to be signal dependent, and we have therefore developed a scheme which adaptively adjusts the threshold parameter as a function of the local signal level. The anisotropic diffusion process tends to produce artificially sharp edges and artificial boundary corners. A further modification has therefore been made to avoid edge enhancement by leaving significant monotone sections unaltered. The proposed filtering strategy is evaluated on both synthetic images and real ultrasonic images.
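A minimal sketch of Perona-Malik-style anisotropic diffusion with a gradient threshold that grows with the local signal level, which is one possible reading of the signal-adaptive scheme described above; the parameter names and the adaptation rule are illustrative assumptions, not the paper's.

```python
import numpy as np

def adaptive_anisotropic_diffusion(img, n_iter=20, dt=0.15, k0=10.0, alpha=0.5):
    """Perona-Malik style smoothing with a signal-adaptive threshold."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Nearest-neighbour differences in the four axial directions.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # Threshold adapted to the local signal level (assumed rule).
        k = k0 * (1.0 + alpha * u / (u.max() + 1e-9))
        # Edge-stopping conduction coefficients.
        cn = np.exp(-(dn / k) ** 2)
        cs = np.exp(-(ds / k) ** 2)
        ce = np.exp(-(de / k) ** 2)
        cw = np.exp(-(dw / k) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```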
International Ultrasonics Symposium | 2010
Gabriel Kiss; Erik Steen; Jon Petter Åsen; Hans Torp
Since real-time acquisition of 3D echocardiographic data is achievable in practice, many volume rendering algorithms have been proposed for visualization purposes. However, due to the large amounts of data and computation involved, a tradeoff between image quality and computational efficiency has to be made. The main goal of our study was to generate high-quality volume renderings in real time by implementing preprocessing and ray-casting algorithms directly on the GPU. Furthermore, the advantage of combining a priori anatomic and functional information with the volume-rendered image was also investigated. The proposed algorithms were implemented in both CUDA and OpenCL and validated on patient datasets acquired using a GE Vivid7 Dimensions system. Assuming an output resolution of 512×512 pixels, average running times of 4.2 ms/frame are achievable on high-end graphics systems. Furthermore, a good correspondence between wall thickening and segmental longitudinal strain values was visually observed. By implementing ray casting on the GPU, the overall processing time is significantly reduced, making real-time interactive 3D volume rendering feasible. Combining anatomical and functional information allows for a quick visual assessment of a given case.
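The paper's implementation runs on the GPU in CUDA and OpenCL; as a CPU-side illustration only, the following sketch shows front-to-back ray casting with an opacity transfer function. The array layout, parameters, and early-termination criterion are assumptions.

```python
import numpy as np

def raycast_frontback(volume, opacity_tf, step=1):
    """Front-to-back compositing along axis 0 of a (depth, h, w) volume.

    opacity_tf maps voxel intensity to opacity in [0, 1].
    """
    depth, h, w = volume.shape
    color = np.zeros((h, w))
    alpha = np.zeros((h, w))
    for z in range(0, depth, step):
        sample = volume[z]
        a = opacity_tf(sample)
        # Accumulate colour and opacity front-to-back.
        color += (1.0 - alpha) * a * sample
        alpha += (1.0 - alpha) * a
        # Early ray termination once all rays are nearly opaque.
        if np.all(alpha > 0.99):
            break
    return color, alpha

# Example with a simple linear opacity transfer function.
vol = np.random.rand(100, 64, 64)
img, _ = raycast_frontback(vol, lambda s: np.clip(0.05 * s, 0.0, 1.0))
```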
International Ultrasonics Symposium | 1994
Erik Steen; Bjorn Olstad; Sevald Berg; Gaute Myklebust; K.P. Schipper
In vivo studies of liver function have been conducted. The data acquisitions utilized the TomTec Echoscan system to obtain 3D tissue data in addition to ECG- and respiration-triggered 4D angio and tissue data. Several algorithmic tools were developed for visualization of tumor and vessel geometry. A fuzzy region growing technique was developed to segment the liver into different anatomical parts. A filtering algorithm was developed to smooth out local intensity variations within the vessels. Different volume rendering techniques were evaluated for visualization.
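A toy sketch of fuzzy region growing for illustration; the membership function, running-mean update, and cut-off used here are assumptions, not the paper's method.

```python
import numpy as np
from collections import deque

def fuzzy_region_grow(volume, seed, tol=0.2):
    """Grow a region from a seed voxel; each voxel gets a membership that
    decays with its distance from the running region mean, and growth
    stops where membership falls below a cut-off (illustrative only)."""
    membership = np.zeros(volume.shape)
    visited = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    visited[seed] = True
    region_mean = float(volume[seed])
    n = 1
    while queue:
        p = queue.popleft()
        m = np.exp(-abs(volume[p] - region_mean) / tol)
        if m < 0.5:
            continue  # too dissimilar: do not grow past this voxel
        membership[p] = m
        region_mean += (volume[p] - region_mean) / n  # incremental mean
        n += 1
        for d in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            q = tuple(np.array(p) + d)
            if all(0 <= q[i] < volume.shape[i] for i in range(3)) and not visited[q]:
                visited[q] = True
                queue.append(q)
    return membership
```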
International Ultrasonics Symposium | 2005
Svein Brekke; Stein Inge Rabben; A. Haugen; G.U. Haugen; Erik Steen; Hans Torp
Three-dimensional (3D) echocardiography is challenging due to the limitation of the data acquisition rate caused by the speed of sound. ECG-gated stitching of data from several cardiac cycles is a possible technique to achieve higher resolution. The aim of this work is two-fold: firstly, to provide a method for real-time presentation of stitched echocardiographic images acquired over several cardiac cycles, and secondly, to quantify the reduction in geometrical distortion achieved by stitched compared with non-stitched acquisition of 3D ultrasonic data of the left ventricle (LV). We present a volume stitching algorithm that merges data from N consecutive heart cycles into an assembled data volume which is volume-rendered in real time. Furthermore, we propose a measure for the geometrical distortion present in ultrasound images. The distortion is calculated for images simulated by sampling a kinematic model of the LV wall. The impact of geometrical distortion on volume measurements is discussed. We conclude that displaying stitched 4D ultrasound data in real time is feasible and an adequate technique for increasing the volume acquisition rate at a given spatial resolution, and that the geometrical distortion decreases substantially for data with higher volume rate. For a full scan of the LV, stitching over at least four cycles is recommended.
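A minimal sketch of ECG-gated volume stitching, merging sectors recorded in consecutive cycles at matching time offsets from the R-wave into one assembled volume sequence; the array layout and sector model are assumptions, not the paper's data format.

```python
import numpy as np

def stitch_cycles(subvolumes, sector_index, n_sectors):
    """Assemble a full volume sequence from per-cycle sector acquisitions.

    subvolumes   : list of N arrays, each (frames, azimuth_sector, elev, depth),
                   with frames aligned to the same offsets from the R-wave
    sector_index : for each cycle, which azimuth sector it covers
    n_sectors    : total number of sectors in the full volume
    """
    frames, az, el, depth = subvolumes[0].shape
    assembled = np.zeros((frames, az * n_sectors, el, depth))
    for cycle, sub in enumerate(subvolumes):
        s = sector_index[cycle]
        assembled[:, s * az:(s + 1) * az] = sub
    return assembled
```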
Computer-Based Medical Systems | 1995
Gaute Myklebust; Jon G. Solheim; Erik Steen
We present the results of parallel implementations of Kohonen's self-organizing maps using data partitioning. Two algorithms are implemented: a pure data-partitioning algorithm and a combined data- and network-partitioning algorithm. The performance of the algorithms is far better for small neural networks than the performance of our previous SOM implementations. The SOM model can be used for visualization of MR images, an application with a small number of neurons. Using one of the proposed algorithms, the performance of this application is increased by over 200%. The convergence rate of the proposed algorithm and the original algorithm is shown to be similar when the frequency of the weight updates is properly selected.
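A sequential sketch of data-partitioned batch SOM training, in which each partition (standing in for a processor) accumulates partial sums that are merged before the weights are updated; the parameter choices and the 1-D map topology are illustrative, not taken from the paper.

```python
import numpy as np

def som_data_partitioned(data, n_units, n_epochs=10, n_parts=4, sigma=1.0):
    """Batch SOM where the data set is split into parts; each part
    accumulates neighbourhood-weighted sums, which are merged before a
    single weight update per epoch (1-D map topology for simplicity)."""
    dim = data.shape[1]
    rng = np.random.default_rng(0)
    weights = rng.random((n_units, dim))
    unit_pos = np.arange(n_units)
    for _ in range(n_epochs):
        num = np.zeros((n_parts, n_units, dim))
        den = np.zeros((n_parts, n_units))
        for part, chunk in enumerate(np.array_split(data, n_parts)):
            for x in chunk:
                bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
                h = np.exp(-((unit_pos - unit_pos[bmu]) ** 2) / (2 * sigma ** 2))
                num[part] += h[:, None] * x
                den[part] += h
        # Merge partial sums from all parts, then update the weights once.
        total_num = num.sum(axis=0)
        total_den = den.sum(axis=0)
        nonzero = total_den > 0
        weights[nonzero] = total_num[nonzero] / total_den[nonzero][:, None]
    return weights

# Example: map 2-D points onto a 10-unit SOM.
pts = np.random.rand(200, 2)
w = som_data_partitioned(pts, n_units=10)
```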
International Ultrasonics Symposium | 2005
J. Hansegard; S. Urheim; Erik Steen; Hans Torp; Bjorn Olstad; S. Malm; Stein Inge Rabben
We report a new algorithm for detecting the LV myocardial boundary from simultaneously acquired triplane US image sequences using Multi View Active Appearance Motion Models (MVAAMM). Coupled boundary detection in three planes can potentially increase the accuracy of LV volume measurements, and also increase the robustness of the boundary detection over traditional methods. A database of triplane image sequences from full cardiac cycles, including the standard A4CH, A2CH, and ALAX views, was established from 20 volunteers, including 12 healthy persons and 8 persons suffering from heart disease. For each dataset the LV myocardial boundary was manually outlined, and the ED and ES frames were determined visually for phase normalization of the cycles. The evaluation of the MVAAMM was performed using a leave-one-out approach. The mean point distance between manually and automatically determined contours was 4.1±1.9 mm, the volume error was 7.0±14 ml, and the fractional volume error was 8.5±16%. Volume detection using the automatic method showed excellent correlation with the manual method (R² = 0.87). Common ultrasound artefacts such as dropouts were handled well by the MVAAMM since the detection in the three image planes was coupled. The datasets with the largest point distances had one or more foreshortened views. A larger training database may improve the performance in such cases.
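A small sketch of one plausible definition of the mean point distance used in the evaluation (closest-point distance from the automatic to the manual contour, averaged); the exact definition in the paper may differ.

```python
import numpy as np

def mean_point_distance(auto_pts, manual_pts):
    """Average closest-point distance from an automatically detected
    contour to a manually outlined contour.

    auto_pts, manual_pts : (N, 2) and (M, 2) arrays of contour points (mm)
    """
    d = np.linalg.norm(auto_pts[:, None, :] - manual_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()
```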
Proceedings of SPIE | 2012
Jon Petter Åsen; Erik Steen; Gabriel Kiss; Anders Thorstensen; Stein Inge Rabben
In this paper we introduce and investigate an adaptive direct volume rendering (DVR) method for real-time visualization of cardiac 3D ultrasound. DVR is commonly used in cardiac ultrasound to visualize interfaces between tissue and blood. However, this is particularly challenging with ultrasound images due to variability of the signal within tissue as well as variability of the noise signal within the blood pool. Standard DVR involves a global mapping of sample values to opacity by an opacity transfer function (OTF). While a global OTF may represent the interface correctly in one part of the image, it may result in tissue dropouts, or even artificial interfaces within the blood pool, in other parts of the image. In order to increase the correctness of the rendered image, the presented method utilizes blood pool statistics to make regional adjustments to the OTF. The regional adaptive OTF was compared with a global OTF in a dataset of apical recordings from 18 subjects. For each recording, three renderings from standard views (apical 4-chamber (A4C), inverted A4C (IA4C), and mitral valve (MV)) were generated for both methods, and each rendering was tuned to the best visual appearance by a physician echocardiographer. For each rendering we measured the mean absolute error (MAE) between the rendering depth buffer and a validated left ventricular segmentation. The difference d in MAE between the global and regional methods was calculated, and t-test results are reported with significant improvements for the regional adaptive method (d_A4C = 1.5 ± 0.3 mm, d_IA4C = 2.5 ± 0.4 mm, d_MV = 1.7 ± 0.2 mm, d.f. = 17, all p < 0.001). This improvement was confirmed through qualitative visual assessment by an experienced physician echocardiographer, who concluded that the regional adaptive method produced rendered images with fewer tissue dropouts and less spurious structure inside the blood pool in the vast majority of the renderings. The algorithm has been implemented on a GPU, running at an average of 16 fps with a resolution of 512×512×100 samples (Nvidia GTX460).
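A simplified sketch of a regionally adjusted OTF, estimating the blood pool level block-wise and shifting an opacity ramp accordingly; the percentile estimate, block size, and ramp shape are assumptions, not the paper's method.

```python
import numpy as np

def regional_adaptive_otf(volume, base_threshold=60.0, width=20.0, block=16):
    """Map voxel intensity to opacity with a regionally shifted ramp.

    The noise level inside the blood pool is estimated per lateral block
    (a low percentile here), and the opacity ramp is shifted up where the
    blood-pool signal is high, to suppress spurious structure."""
    depth, h, w = volume.shape
    opacity = np.zeros_like(volume, dtype=float)
    for y0 in range(0, h, block):
        for x0 in range(0, w, block):
            region = volume[:, y0:y0 + block, x0:x0 + block]
            blood_level = np.percentile(region, 10)   # blood pool estimate
            t = base_threshold + blood_level          # regional threshold
            ramp = np.clip((region - t) / width, 0.0, 1.0)
            opacity[:, y0:y0 + block, x0:x0 + block] = ramp
    return opacity
```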