Jon Petter Åsen
Norwegian University of Science and Technology
Publication
Featured research published by Jon Petter Åsen.
IEEE Transactions on Ultrasonics Ferroelectrics and Frequency Control | 2014
Jon Petter Åsen; Jo Inge Buskenes; Carl-Inge Colombo Nilsen; Andreas Austeng; Sverre Holm
Capon beamforming is associated with a high computational complexity, which limits its use as a real-time method in many applications. In this paper, we present an implementation of the Capon beamformer that exhibits real-time performance when applied in a typical cardiac ultrasound imaging setting. To achieve this performance, we make use of the parallel processing power found in modern graphics processing units (GPUs), combined with beamspace processing to reduce the computational complexity as the number of array elements increases. For a three-dimensional beamspace, we show that processing rates supporting real-time cardiac ultrasound imaging are possible, meaning that images can be processed faster than the image acquisition rate for a wide range of parameters. Image quality is investigated in an in vivo cardiac data set. These results show that Capon beamforming is feasible for cardiac ultrasound imaging, providing images with improved lateral resolution in both element-space and beamspace.
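The computation the paper accelerates is, at its core, the Capon/MVDR weight solve w = R⁻¹a / (aᴴR⁻¹a). A minimal NumPy sketch of that step (the diagonal loading and the toy example are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def capon_weights(R, a, diagonal_load=1e-2):
    """Capon/MVDR weights w = R^-1 a / (a^H R^-1 a).

    R: (M, M) estimated spatial covariance matrix
    a: (M,) steering vector (all-ones after receive focusing delays)
    diagonal_load: regularization relative to the average channel power
                   (an assumed stabilization, common in practice)
    """
    M = R.shape[0]
    Rl = R + diagonal_load * np.trace(R).real / M * np.eye(M)
    Ria = np.linalg.solve(Rl, a)          # avoid explicit matrix inversion
    return Ria / (a.conj() @ Ria)         # distortionless normalization

# Toy example: 8 channels, white-noise covariance -> uniform 1/M weights
M = 8
a = np.ones(M, dtype=complex)
w = capon_weights(np.eye(M, dtype=complex), a)
```

For white noise the Capon solution degenerates to uniform apodization, which is a quick sanity check on any implementation.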
International Ultrasonics Symposium | 2014
Ole Marius Hoel Rindal; Jon Petter Åsen; Sverre Holm; Andreas Austeng
This paper investigates the contrast improvements seen when using adaptive (Capon) beamforming to create ultrasound images. Ultrasound images of cysts have been simulated using linear array imaging in Field II. The contrast of the cyst compared to the speckle surrounding it has been investigated, especially the improved edges produced by the Capon beamformer. We show that it is the improved edges that cause the contrast improvements. The resulting beampattern from the Capon beamformer has been compared to the beampattern from the conventional delay and sum (DAS) beamformer with different apodizations to show how the improved edges are generated. Finally, a bright inclusion was added to the simulation to demonstrate how the visual contrast changes when the image contains multiple intensity plateaus.
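The beampattern comparison described above can be reproduced for the DAS side with a few lines of NumPy; this sketch (array size, pitch, and the Hamming window are illustrative choices, not the paper's exact setup) evaluates the far-field beampattern for different apodizations:

```python
import numpy as np

def das_beampattern(apod, thetas, pitch=0.5):
    """Far-field beampattern magnitude of a DAS beamformer.

    apod:   per-element apodization weights (normalized to sum to 1)
    thetas: angles in radians relative to broadside
    pitch:  element spacing in wavelengths (half-wavelength assumed)
    """
    m = np.arange(len(apod))
    phase = 2j * np.pi * pitch * np.sin(np.asarray(thetas))[:, None] * m[None, :]
    return np.abs((apod * np.exp(phase)).sum(axis=1))

# Rectangular vs. Hamming apodization on an assumed 64-element array
M = 64
thetas = np.linspace(-np.pi / 2, np.pi / 2, 721)
bp_rect = das_beampattern(np.ones(M) / M, thetas)
bp_hamm = das_beampattern(np.hamming(M) / np.hamming(M).sum(), thetas)
```

Plotting `bp_rect` against `bp_hamm` shows the classic trade: Hamming lowers sidelobes at the cost of a wider mainlobe, which is the baseline the Capon beampattern is compared against.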
IEEE Journal of Oceanic Engineering | 2015
Jo Inge Buskenes; Jon Petter Åsen; Carl-Inge Colombo Nilsen; Andreas Austeng
The minimum variance distortionless response (MVDR) beamformer has recently been proposed as an attractive alternative to conventional beamformers in active sonar imaging. Unfortunately, it is very computationally complex because a spatial covariance matrix must be estimated and inverted for each image pixel. This may discourage its use in sonar systems, which are continuously being pushed to ever higher imaging ranges and resolutions. In this study, we show that for active sonar systems up to 32 channels, the computation time can be significantly reduced by performing arithmetic optimizations, and by implementing the MVDR beamformer on a graphics processing unit (GPU). We point out important hardware limitations for these devices, and assess the design in terms of how efficiently it is able to use the GPU's resources. On a quad-core Intel Xeon system with a high-end Nvidia GPU, our GPU implementation renders more than a million pixels per second (1 MP/s). Compared to our initial central processing unit (CPU) implementation, the optimizations described herein led to a speedup of more than two orders of magnitude, or an expected five to ten times improvement had the CPU received similar optimization effort. This throughput enables real-time processing of sonar data, and makes the MVDR a viable alternative to conventional methods in practical systems.
International Ultrasonics Symposium | 2010
Gabriel Kiss; Erik Steen; Jon Petter Åsen; Hans Torp
Since real-time acquisition of 3D echocardiographic data is achievable in practice, many volume rendering algorithms have been proposed for visualization purposes. However, due to the large amounts of data and computations involved, a tradeoff between image quality and computational efficiency has to be made. The main goal of our study was to generate high-quality volume renderings in real time, by implementing preprocessing and ray-casting algorithms directly on the GPU. Furthermore, the advantage of combining a priori anatomic and functional information with the volume-rendered image was also investigated. The proposed algorithms were implemented both in CUDA and OpenCL and validated on patient datasets acquired using a GE Vivid7 Dimensions system. Assuming a 512×512 pixels output resolution, average running times of 4.2 ms/frame are achievable on high-end graphics systems. Furthermore, a good correspondence between wall thickening and segmental longitudinal strain values was visually observed. By implementing ray-casting on the GPU, the overall processing time is significantly reduced, thus making real-time interactive 3D volume rendering feasible. Combining anatomical and functional information allows for a quick visual assessment of a given case.
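The inner loop of a ray-caster like the one described is front-to-back alpha compositing along each ray, which also enables early ray termination. A scalar Python sketch of that loop (the 0.99 termination threshold is an assumed, conventional value):

```python
def composite_ray(values, opacities):
    """Front-to-back alpha compositing along one ray (core of DVR ray-casting).

    values:    per-sample intensities along the ray, front to back
    opacities: per-sample opacities in [0, 1] from the transfer function
    Returns the accumulated intensity; stops early once the ray is
    nearly opaque, since later samples cannot contribute visibly.
    """
    color, alpha = 0.0, 0.0
    for v, a in zip(values, opacities):
        color += (1.0 - alpha) * a * v   # weight by remaining transparency
        alpha += (1.0 - alpha) * a       # accumulate opacity
        if alpha > 0.99:                 # early ray termination
            break
    return color
```

On a GPU each ray runs this loop in its own thread, which is why the method maps so well to CUDA/OpenCL.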
International Ultrasonics Symposium | 2012
Jon Petter Åsen; Jo Inge Buskenes; Carl-Inge Colombo Nilsen; Andreas Austeng; Sverre Holm
Recent work on Capon beamforming suggests that it can provide increased lateral resolution when applied in a medical ultrasound imaging setting. In this paper, the high computational complexity of the Capon beamformer is targeted with the use of a graphics processing unit (GPU). In vivo results with Capon beamforming applied on a full cardiac cycle, in addition to simulations, are presented. With the GPU we are able to process a 70° sector cardiac image from a 64-element phased array at interactive frame rates using both spatial and temporal smoothing. For a typical cardiac ultrasound image of 80 × 400 pixels (70° sector, 15 cm range) acquired using a 2.5 MHz, M=64 element phased array, we obtain 10 fps (subarray length L=M/2, temporal smoothing over 3 samples). If we perform a 2-element pre-beamforming, the channel count is reduced to 32, and the frame rate is increased to 44 fps. For a 32-element phased array we need fewer beams to cover the sector (40 × 400 pixels), hence with the same parameters the frame rate increases to 87 fps. The target GPU was the Nvidia Quadro 6000, capable of 1 Tflop/s.
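The spatial (subarray) and temporal smoothing mentioned above combine into one covariance estimate per pixel. A minimal NumPy sketch of that estimator, matching the paper's parameters L = M/2 and a 3-sample temporal window (the loop form is for clarity; a real GPU kernel would parallelize it):

```python
import numpy as np

def smoothed_covariance(X, L):
    """Spatial covariance estimate with subarray and temporal smoothing.

    X: (M, T) focused channel data for one pixel; M elements, T time
       samples (T = 3 for temporal smoothing over 3 samples).
    L: subarray length (the paper uses L = M // 2).
    Averages outer products over the M-L+1 overlapping subarrays
    and the T time samples.
    """
    M, T = X.shape
    R = np.zeros((L, L), dtype=complex)
    for t in range(T):
        for p in range(M - L + 1):
            x = X[p:p + L, t]
            R += np.outer(x, x.conj())
    return R / ((M - L + 1) * T)
```

Subarray smoothing both decorrelates coherent echoes and shrinks the matrix to invert from M×M to L×L, which is a large part of why L = M/2 keeps the method tractable.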
IEEE Transactions on Ultrasonics Ferroelectrics and Frequency Control | 2014
Jon Petter Åsen; Andreas Austeng; Sverre Holm
If an ultrasound imaging system's presentation of a moving object is sensitive to small spatial shifts, the system is said to be locally spatially shift-variant. This can happen, for instance, if the axial or lateral sampling is insufficient. The Capon beamformer has been shown to provide increased lateral resolution in ultrasound images. Increased lateral resolution should demand denser lateral sampling. However, in previous literature on Capon beamforming for medical ultrasound imaging, only single-frame scenarios have been simulated. Temporal behavior and effects caused by the increased resolution and lack of oversampling have therefore been neglected. In this paper, we analyze the local lateral shift-invariance of the Capon beamformer when imaging moving objects. We show that insufficient lateral sampling makes an imaging system based on the Capon beamformer laterally shift-variant. Different methods for oversampling on transmit and receive are then discussed and investigated to improve the shift-invariance of the Capon beamformer. It is shown that lateral shift-invariance can be improved by oversampling based on phase rotation on receive, without affecting the acquisition frame rate and with a minor change in processing complexity.
Journal of the Acoustical Society of America | 2013
Jo Inge Buskenes; Jon Petter Åsen; Carl-Inge Colombo Nilsen; Andreas Austeng
The MVDR beamformer has been shown to improve active sonar image quality compared to conventional methods. Unfortunately, it is also significantly more computationally expensive because a spatial covariance matrix must be estimated and inverted for each image pixel. We target this challenge by altering and mapping the MVDR beamformer to a GPU, and suggest three different solutions depending on the system size. For systems with relatively few channels, we suggest arithmetic optimizations for the estimation step, and show how a GPU can be used to yield image creation rates of more than 1 Mpx/s. For larger systems we show that frequency domain processing is preferable, as this promotes high processing rates at a negligible reduction in image quality. These GPU implementations consistently reduced the runtime by 2 orders of magnitude compared to our reference C implementation. For even larger systems we suggest employing the LCA beamformer. It does not calculate a weightset, but merely computes the beamformer...
International Ultrasonics Symposium | 2012
Jon Petter Åsen; Sverre Holm
The introduction of graphics processing unit (GPU) computing has made it possible to speed up computationally demanding algorithms. One of these algorithms is the calculation of pressure fields from acoustic transducers. Here, the execution time often limits the number of elements and field points we can select in order to get results back within a reasonable time. In this paper we present a simple GPU-based simulator capable of simulating high-resolution pressure fields at interactive frame rates. The simulator is based on the same principle as the Ultrasim toolbox, where responses from several point sources are accumulated in a set of observation points, hence solving the Rayleigh-Sommerfeld integral. The cumulative sum for each observation point is independent of all other observation points, making the problem ideally suited to GPU processing. For the simulator we provide both a Paint-like interface for interactive drawing of uniform linear arrays and free-hand shapes, and a Matlab interface for precise scripting of element positions and observation points. The presented GPU simulator was compared with a multithreaded C version, Ultrasim, and Field II. Compared with Ultrasim, we report a 400 times speedup when simulating a varying number of source points on a 150,000-point observation grid. The test system consisted of a low-end GPU (Nvidia Quadro 600) and an Intel i7-870 2.93 GHz quad-core CPU.
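The accumulation principle described above — summing independent point-source responses at each observation point — can be sketched in a few vectorized NumPy lines (a monochromatic sketch with constant factors omitted; the real simulator works on a GPU and handles element geometry and apodization):

```python
import numpy as np

def pressure_field(sources, points, k):
    """Discretized Rayleigh-Sommerfeld sum of point-source responses.

    sources: (N, 3) source (element sample) positions
    points:  (P, 3) observation points
    k:       wavenumber 2*pi/wavelength
    Returns the complex field exp(-j*k*r)/r summed over sources for
    each observation point; each point is independent, which is what
    makes the problem map so well to GPU threads.
    """
    r = np.linalg.norm(points[:, None, :] - sources[None, :, :], axis=-1)  # (P, N)
    return np.sum(np.exp(-1j * k * r) / r, axis=1)
```

A GPU version assigns one thread per observation point and loops over (or tiles) the sources in shared memory, exactly the embarrassingly parallel structure the abstract exploits.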
International Ultrasonics Symposium | 2012
Espen Stene Brønstad; Jon Petter Åsen; Hans Torp; Gabriel Kiss
Direct volume rendering (DVR) has become a widely used technique for visualizing anatomical structures in medical 3D datasets. The aim of this study was to locally adapt the opacity transfer function (OTF) in order to improve the results achieved when rendering 3D echocardiographic datasets using DVR. A novel approach for defining locally adaptive OTFs has been tested, adapted to echo data, and implemented on the GPU. The local OTF is modeled as a truncated second-order polynomial. The algorithm locates significant transitions along the ray profile (feature detection along the ray) in order to estimate an opacity threshold (below which all values are considered transparent) and the steepness of the polynomial for each ray. A reference global OTF and the locally adaptive algorithm have been implemented on a GPU using OpenCL and tested on a dataset of nine 3D echo recordings. The rendering resolution is 512×512×300, while average timings are 28 ms and 104 ms for the reference and the new method, respectively. The locally adaptive OTFs were able to compensate for high variations in tissue signal (thus reducing wall drop-outs) and in blood pool signal (reducing spurious structures inside the cavity). The method depends on a number of user-defined parameters; determining these values robustly is the subject of ongoing research.
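A truncated second-order polynomial OTF of the kind described can be sketched directly; the parameter names here are illustrative, since the paper estimates the threshold and steepness per ray from detected transitions rather than taking them as fixed inputs:

```python
import numpy as np

def local_otf(samples, threshold, steepness):
    """Truncated second-order polynomial opacity transfer function.

    samples:   intensity values along one ray
    threshold: opacity threshold; values below it are fully transparent
    steepness: quadratic growth rate of opacity above the threshold
    Returns per-sample opacities clamped to [0, 1].
    """
    s = np.asarray(samples, dtype=float)
    alpha = steepness * np.clip(s - threshold, 0.0, None) ** 2
    return np.clip(alpha, 0.0, 1.0)
```

Because `threshold` and `steepness` are chosen per ray, a ray through weak tissue can still render an opaque wall while a ray through noisy blood pool stays transparent, which is the mechanism behind the reported drop-out and artifact reduction.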
Proceedings of SPIE | 2012
Jon Petter Åsen; Erik Steen; Gabriel Kiss; Anders Thorstensen; Stein Inge Rabben
In this paper we introduce and investigate an adaptive direct volume rendering (DVR) method for real-time visualization of cardiac 3D ultrasound. DVR is commonly used in cardiac ultrasound to visualize interfaces between tissue and blood. However, this is particularly challenging with ultrasound images due to variability of the signal within tissue as well as variability of noise signal within the blood pool. Standard DVR involves a global mapping of sample values to opacity by an opacity transfer function (OTF). While a global OTF may represent the interface correctly in one part of the image, it may result in tissue dropouts, or even artificial interfaces within the blood pool in other parts of the image. In order to increase correctness of the rendered image, the presented method utilizes blood pool statistics to do regional adjustments of the OTF. The regional adaptive OTF was compared with a global OTF in a dataset of apical recordings from 18 subjects. For each recording, three renderings from standard views (apical 4-chamber (A4C), inverted A4C (IA4C) and mitral valve (MV)) were generated for both methods, and each rendering was tuned to the best visual appearance by a physician echocardiographer. For each rendering we measured the mean absolute error (MAE) between the rendering depth buffer and a validated left ventricular segmentation. The difference d in MAE between the global and regional method was calculated and t-test results are reported with significant improvements for the regional adaptive method (dA4C = 1.5 ± 0.3 mm, dIA4C = 2.5 ± 0.4 mm, dMV = 1.7 ± 0.2 mm, d.f. = 17, all p < 0.001). This improvement by the regional adaptive method was confirmed through qualitative visual assessment by an experienced physician echocardiographer who concluded that the regional adaptive method produced rendered images with fewer tissue dropouts and less spurious structures inside the blood pool in the vast majority of the renderings. 
The algorithm has been implemented on a GPU, running at an average of 16 fps with a resolution of 512×512×100 samples (Nvidia GTX 460).