Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Jesper Mosegaard is active.

Publication


Featured research published by Jesper Mosegaard.


Otology & Neurotology | 2009

The visible ear simulator: a public PC application for GPU-accelerated haptic 3D simulation of ear surgery based on the visible ear data.

Mads Sølvsten Sørensen; Jesper Mosegaard; Peter Trier

Background: Existing virtual simulators for middle ear surgery are based on 3-dimensional (3D) models from computed tomographic or magnetic resonance imaging data in which image quality is limited by the lack of detail (maximum, ∼50 voxels/mm3), natural color, and texture of the source material. Virtual training often requires the purchase of a program, a customized computer, and expensive peripherals dedicated exclusively to this purpose. Materials and Methods: The Visible Ear freeware library of digital images from a fresh-frozen human temporal bone was segmented and real-time volume rendered as a 3D model of high fidelity, true color, and great anatomic detail and realism of the surgically relevant structures. A haptic drilling model was developed for surgical interaction with the 3D model. Results: Realistic visualization in high fidelity (∼125 voxels/mm3) and true color, 2D, or optional anaglyph stereoscopic 3D was achieved on a standard Core 2 Duo personal computer with a GeForce 8800 GTX graphics card, and surgical interaction was provided through a relatively inexpensive (∼$2,500) Phantom Omni haptic 3D pointing device. Conclusion: This prototype is published for download (∼120 MB) as freeware at http://www.alexandra.dk/ves/index.htm. With increasing personal computer performance, future versions may include enhanced resolution (up to 8,000 voxels/mm3) and realistic interaction with deformable soft tissue components such as skin, tympanic membrane, dura, and cholesteatomas, features some of which are not possible with computed tomographic-/magnetic resonance imaging-based systems.
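The voxel densities quoted for the Visible Ear Simulator translate directly into voxel edge lengths; the short arithmetic sketch below is illustrative only and is not code from the simulator itself.

```python
# Convert volumetric sampling density (voxels per cubic millimeter)
# to the edge length of a single cubic voxel.

def voxel_edge_mm(voxels_per_mm3: float) -> float:
    """Edge length in mm of a cubic voxel at the given density."""
    return (1.0 / voxels_per_mm3) ** (1.0 / 3.0)

# Densities quoted in the abstract: CT/MRI source data, the Visible Ear
# model, and a projected future resolution.
for density in (50, 125, 8000):
    print(f"{density:>5} voxels/mm^3 -> {voxel_edge_mm(density):.3f} mm edge")
```

So the jump from ∼50 to ∼125 voxels/mm3 shrinks the voxel edge from roughly 0.27 mm to 0.20 mm, and the projected 8,000 voxels/mm3 corresponds to 0.05 mm voxels.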


ieee virtual reality conference | 2005

GPU accelerated surgical simulators for complex morphology

Jesper Mosegaard; Thomas Sangild Sørensen

Surgical training in virtual environments, surgical simulation in other words, has previously had difficulties in simulating deformation of complex morphology in real-time. Even fast spring-mass based systems had slow convergence rates for large models. This paper presents two methods to accelerate a spring-mass system in order to simulate a complex organ such as the heart. Computations are accelerated by taking advantage of modern graphics processing units (GPUs). Two GPU implementations are presented. They vary in their generality of spring connections and in the speedup factor they achieve.
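As a rough illustration of the spring-mass model that this work moves to the GPU, here is a minimal CPU sketch using explicit Euler integration; all names, the damping scheme, and the unit-mass assumption are illustrative choices, not details of the paper's implementation.

```python
# Minimal spring-mass time step (explicit Euler, unit masses).
import math

def step(positions, velocities, springs, rest_lengths, k, dt, damping=0.98):
    """Advance point masses one time step. positions/velocities are lists
    of (x, y, z) tuples; springs is a list of (i, j) index pairs."""
    forces = [[0.0, 0.0, 0.0] for _ in positions]
    for (i, j), rest in zip(springs, rest_lengths):
        d = [positions[j][a] - positions[i][a] for a in range(3)]
        length = math.sqrt(sum(c * c for c in d)) or 1e-12
        # Hooke's law: force proportional to deviation from rest length.
        f = k * (length - rest)
        for a in range(3):
            forces[i][a] += f * d[a] / length
            forces[j][a] -= f * d[a] / length
    new_p, new_v = [], []
    for p, v, f in zip(positions, velocities, forces):
        v = tuple(damping * (v[a] + dt * f[a]) for a in range(3))
        new_v.append(v)
        new_p.append(tuple(p[a] + dt * v[a] for a in range(3)))
    return new_p, new_v
```

On a GPU this per-mass force accumulation is what gets parallelized: each mass is typically updated by its own thread (or, in the GPU generation of this paper, its own fragment over a texture of positions), which is where the speedup comes from.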


technical symposium on computer science education | 2003

Teaching programming to liberal arts students: a narrative media approach

Peter Bøgh Andersen; Jens Bennedsen; Steffen Brandorff; Michael E. Caspersen; Jesper Mosegaard

In this paper we present a new learning environment to be used in an introductory programming course for students that are non-majors in computer science, more precisely for multimedia students with a liberal arts background. Media-oriented programming adds new requirements to the craft of programming (e.g. aesthetic and communicative). We argue that multimedia students with a liberal arts background need programming competences because programmability is the defining characteristic of the computer medium. We compare programming with the creation of traditional media products and identify two important differences which give rise to extra competences needed by multimedia designers as opposed to traditional media product designers. We analyze the development process of multimedia products in order to incorporate this in the learning process, and based on this we present our vision for a new learning environment for an introductory programming course for multimedia students. We have designed a learning environment called Lingoland with the new skills of media programming in mind that hopefully can help alleviate the problems we have experienced in teaching programming to liberal arts students.


eurographics | 2005

Real-time deformation of detailed geometry based on mappings to a less detailed physical simulation on the GPU

Jesper Mosegaard; Thomas Sangild Sørensen

Modern graphics processing units (GPUs) can be effectively used to solve physical systems. To use the GPU optimally, the discretization of the physical system is often restricted to a regular grid. When grid values represent spatial positions, a direct visualization can result in a jagged appearance. In this paper we propose to decouple computation and visualization of such systems. We define mappings that enable the deformation of a high-resolution surface based on a physical simulation on a lower resolution uniform grid. More specifically we investigate new approaches for the visualization of a GPU based spring-mass simulation.
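One plausible form of such a mapping from a coarse simulation grid to a detailed surface is trilinear interpolation of the grid's values at each surface-vertex position; the sketch below illustrates that idea only, and the paper's actual mappings may differ.

```python
# Trilinear interpolation of a coarse grid at an arbitrary position:
# a detailed mesh vertex samples the field stored on a lower-resolution
# simulation grid instead of being simulated itself.

def trilinear(grid, x, y, z):
    """grid[i][j][k] holds scalar values at integer lattice points;
    (x, y, z) must lie inside the grid's bounds."""
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k
    value = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                # Weight of each of the 8 surrounding lattice points.
                w = ((1 - fx) if di == 0 else fx) \
                    * ((1 - fy) if dj == 0 else fy) \
                    * ((1 - fz) if dk == 0 else fz)
                value += w * grid[i + di][j + dj][k + dk]
    return value
```

Applying such a mapping per component of a displacement field moves each high-resolution vertex smoothly with the coarse simulation, avoiding the jagged appearance of rendering the grid directly.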


ISBMS'06 Proceedings of the Third international conference on Biomedical Simulation | 2006

An introduction to GPU accelerated surgical simulation

Thomas Sangild Sørensen; Jesper Mosegaard

Modern graphics processing units (GPUs) have recently become fully programmable. Thus a powerful and cost-efficient new computational platform for surgical simulations has emerged. A broad selection of publications has shown that scientific computations obtain a significant speedup if ported from the CPU to the GPU. To take advantage of the GPU however, one must understand the limitations inherent in its design and devise algorithms accordingly. We have observed that many researchers with experience in surgical simulation find this a significant hurdle to overcome. To facilitate the transition from CPU- to GPU-based simulations, we review the most important concepts and data structures required to realise two popular deformable models on the GPU: the finite element model and the spring-mass model.


Journal of Alzheimer's Disease | 2015

Quantitative Neuroimaging Software for Clinical Assessment of Hippocampal Volumes on MR Imaging

Jamila Ahdidan; Cyrus A. Raji; Edgar A. DeYoe; Jedidiah Mathis; Karsten Østergaard Noe; Jens Rimestad; Thomas Kjeldsen; Jesper Mosegaard; James T. Becker; Oscar L. Lopez

Background: Multiple neurological disorders including Alzheimer’s disease (AD), mesial temporal sclerosis, and mild traumatic brain injury manifest with volume loss on brain MRI. Subtle volume loss is particularly seen early in AD. While prior research has demonstrated the value of this additional information from quantitative neuroimaging, very few applications have been approved for clinical use. Here we describe a US FDA cleared software program, NeuroreaderTM, for assessment of clinical hippocampal volume on brain MRI. Objective: To present the validation of hippocampal volumetrics on a clinical software program. Method: Subjects were drawn (n = 99) from the Alzheimer Disease Neuroimaging Initiative study. Volumetric brain MR imaging was acquired in both 1.5 T (n = 59) and 3.0 T (n = 40) scanners in participants with manual hippocampal segmentation. Fully automated hippocampal segmentation and measurement was done using a multiple atlas approach. The Dice Similarity Coefficient (DSC) measured the level of spatial overlap between NeuroreaderTM and gold standard manual segmentation from 0 to 1 with 0 denoting no overlap and 1 representing complete agreement. DSC comparisons between 1.5 T and 3.0 T scanners were done using standard independent samples T-tests. Results: In the bilateral hippocampus, mean DSC was 0.87 with a range of 0.78–0.91 (right hippocampus) and 0.76–0.91 (left hippocampus). Automated segmentation agreement with manual segmentation was essentially equivalent at 1.5 T (DSC = 0.879) versus 3.0 T (DSC = 0.872). Conclusion: This work provides a description and validation of a software program that can be applied in measuring hippocampal volume, a biomarker that is frequently abnormal in AD and other neurological disorders.
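A Dice Similarity Coefficient like the one used to validate Neuroreader's hippocampal segmentations can be computed as below; this is an illustrative sketch over voxel-index sets, whereas clinical tools operate on full 3D label volumes.

```python
# Dice Similarity Coefficient between two binary segmentations,
# here represented as sets of voxel coordinates.

def dice(a: set, b: set) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); 0 = no overlap, 1 = identical."""
    if not a and not b:
        return 1.0  # two empty segmentations agree completely
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy example: two 4-voxel segmentations sharing 3 voxels.
manual = {(0, 0), (0, 1), (1, 0), (1, 1)}
auto = {(0, 0), (0, 1), (1, 0), (2, 2)}
print(dice(manual, auto))  # 2*3 / (4+4) = 0.75
```

Against this metric the reported mean DSC of 0.87 means the automated and manual hippocampal masks overlapped in the large majority of their voxels.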


international ultrasonics symposium | 2014

Implementation of synthetic aperture imaging on a hand-held device

Martin Christian Hemmsen; Thomas Kjeldsen; Lee Lassen; Carsten Kjær; Borislav Tomov; Jesper Mosegaard; Jørgen Arendt Jensen

This paper presents several implementations of Synthetic Aperture Sequential Beamforming (SASB) on commercially available hand-held devices. The implementations include real-time wireless reception of ultrasound radio frequency signals and GPU processing for B-mode imaging. The proposed implementation demonstrates that SASB can be executed in-time for real-time ultrasound imaging. The wireless communication between probe and processing device satisfies the required bandwidth for real-time data transfer with current 802.11ac technology. The implementation is evaluated using four different hand-held devices all with different chipsets and a BK Medical UltraView 800 ultrasound scanner emulating a wireless probe. The wireless transmission is benchmarked using an imaging setup consisting of 269 scan lines × 1472 complex samples (1.58 MB per frame, 16 frames per second). The measured data throughput reached an average of 28.8 MB/s using a LG G2 mobile device, which is more than the required data throughput of 25.3 MB/s. Benchmarking the processing performance for B-mode imaging showed a total processing time of 18.9 ms (53 frames/s), which is less than the acquisition time (62.5 ms).
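The bandwidth requirement quoted for this hand-held SASB setup can be re-derived with simple arithmetic; the 4-bytes-per-complex-sample figure below is an assumption inferred from the stated 1.58 MB frame size, not a detail taken from the paper.

```python
# Back-of-the-envelope check of the wireless bandwidth requirement:
# 269 scan lines x 1472 complex samples per frame at 16 frames/s.

scan_lines = 269
samples_per_line = 1472
bytes_per_complex_sample = 4   # assumed: two 16-bit components
frames_per_second = 16

frame_bytes = scan_lines * samples_per_line * bytes_per_complex_sample
required_mb_s = frame_bytes * frames_per_second / 1e6

print(f"frame size: {frame_bytes / 1e6:.2f} MB")   # ~1.58 MB
print(f"required:   {required_mb_s:.1f} MB/s")     # ~25.3 MB/s
```

The measured 28.8 MB/s over 802.11ac thus clears the ~25.3 MB/s requirement with roughly 14% headroom.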


international ultrasonics symposium | 2015

Implementation of real-time duplex synthetic aperture ultrasonography

Martin Christian Hemmsen; Lee Lassen; Thomas Kjeldsen; Jesper Mosegaard; Jørgen Arendt Jensen

This paper presents a real-time duplex synthetic aperture imaging system, implemented on a commercially available tablet. This includes real-time wireless reception of ultrasound signals and GPU processing for B-mode and Color Flow Imaging (CFM). The objective of the work is to investigate the implementation complexity and processing demands. The image processing is performed using the principle of Synthetic Aperture Sequential Beamforming (SASB) and the flow estimator is implemented using the cross-correlation estimator. Results are evaluated using a HTC Nexus 9 tablet and a BK Medical BK3000 ultrasound scanner emulating a wireless probe. The duplex imaging setup consists of interleaved B-mode and CFM frames. The required data throughput for real-time imaging is 36.1 MB/s. The measured data throughput peaked at 39.562 MB/s, covering the requirement for real-time data transfer and overhead in the TCP/IP protocol. Benchmarking of real-time imaging showed a total processing time of 25.7 ms (39 frames/s) which is less than the acquisition time (29.4 ms), so the proposed implementation demonstrates that both B-mode and CFM can be executed in-time for real-time ultrasound imaging and that the required bandwidth between the probe and processing unit is within the current Wi-Fi standards.


international ultrasonics symposium | 2014

Synthetic Aperture Sequential Beamforming implemented on multi-core platforms

Thomas Kjeldsen; Lee Lassen; Martin Christian Hemmsen; Carsten Kjær; Borislav Gueorguiev Tomov; Jesper Mosegaard; Jørgen Arendt Jensen

This paper compares several computational approaches to Synthetic Aperture Sequential Beamforming (SASB) targeting consumer level parallel processors such as multi-core CPUs and GPUs. The proposed implementations demonstrate that ultrasound imaging using SASB can be executed in real-time with a significant headroom for post-processing. The CPU implementations are optimized using Single Instruction Multiple Data (SIMD) instruction extensions and multithreading, and the GPU computations are performed using the OpenCL and OpenGL APIs. The implementations include refocusing (dynamic focusing) of a set of fixed focused scan lines received from a BK Medical UltraView 800 scanner and subsequent image processing for B-mode imaging and rendering to screen. The benchmarking is performed using a clinically evaluated imaging setup consisting of 269 scan lines × 1472 complex samples (1.58 MB per frame, 16 frames per second) on an Intel Core i7 2600 CPU with an AMD HD7850 and a NVIDIA GTX680 GPU. The fastest CPU and GPU implementations use 14% and 1.3% of the real-time budget of 62 ms/frame, respectively. The maximum achieved processing rate is 1265 frames/s.
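The headroom figures reported for the multi-core SASB implementations follow from the 16 frames/s acquisition rate; the quick sanity check below is illustrative arithmetic, not code from the paper.

```python
# At 16 frames/s the real-time budget is 1000/16 = 62.5 ms per frame,
# and a processing rate of 1265 frames/s corresponds to ~0.8 ms/frame.

budget_ms = 1000.0 / 16      # real-time budget per frame
gpu_ms = 1000.0 / 1265       # fastest measured processing rate
used_pct = 100 * gpu_ms / budget_ms

print(f"GPU time/frame: {gpu_ms:.2f} ms")   # ~0.79 ms
print(f"budget used:    {used_pct:.1f} %")  # ~1.3 %
```

This matches the quoted 1.3% of the ~62 ms/frame budget, i.e. almost the entire frame interval remains free for post-processing.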


high performance graphics | 2011

SSLPV: subsurface light propagation volumes

Jesper Børlum; Brian Bunch Christensen; Thomas Kjeldsen; Peter Trier Mikkelsen; Karsten Østergaard Noe; Jens Rimestad; Jesper Mosegaard


Collaboration


Dive into Jesper Mosegaard's collaboration.

Top Co-Authors

Jørgen Arendt Jensen

Technical University of Denmark


Gerald Greil

University of Texas Southwestern Medical Center


Borislav Tomov

Technical University of Denmark


Tommaso Di Ianni

Technical University of Denmark
