Daphne Yu
Siemens
Publications
Featured research published by Daphne Yu.
Proceedings of SPIE, the International Society for Optical Engineering | 2008
Zhe Fan; Christoph Vetter; Christoph Guetter; Daphne Yu; Rüdiger Westermann; Arie E. Kaufman; Chenyang Xu
Non-rigid multi-modal volume registration is computationally intensive due to its high-dimensional parameter space, where common CPU computation times are several minutes. Medical imaging applications using registration, however, demand ever faster implementations for several purposes: matching the data acquisition speed, providing smooth user interaction and steering for quality control, and performing population registration involving multiple datasets. Current GPUs offer an opportunity to boost the registration speed through high computational power at low cost. In our previous work, we have presented a GPU implementation of a non-rigid multi-modal volume registration that was 6-8 times faster than a software implementation. In this paper, we extend this work by describing how new features of the DX10-compatible GPUs and additional optimization strategies can be employed to further improve the algorithm performance. We have compared our optimized version with the previous version on the same GPU, and have observed a speedup factor of 3.6. Compared with the software implementation, we achieve a speedup factor of up to 44.
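The paper itself targets DX10-class GPUs through graphics shaders, but the core reason registration benefits from the GPU is easy to illustrate: the similarity metric and its gradient are evaluated over every voxel at every optimizer step, a data-parallel workload. Below is a minimal PyTorch sketch of that idea, assuming a dense displacement-field parameterization and a simple intensity loss as a stand-in for the paper's multi-modal metric; it is not the authors' implementation.

import torch
import torch.nn.functional as F

def warp(moving, displacement):
    # Trilinearly resample `moving` (shape 1,1,D,H,W) at identity grid + displacement.
    d, h, w = moving.shape[2:]
    zs, ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, d, device=moving.device),
        torch.linspace(-1, 1, h, device=moving.device),
        torch.linspace(-1, 1, w, device=moving.device),
        indexing="ij",
    )
    # grid_sample expects (x, y, z) ordering in the last dimension.
    grid = torch.stack((xs, ys, zs), dim=-1).unsqueeze(0)
    return F.grid_sample(moving, grid + displacement, align_corners=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
fixed = torch.rand(1, 1, 64, 64, 64, device=device)    # placeholder volumes
moving = torch.rand(1, 1, 64, 64, 64, device=device)

# Dense displacement field optimized directly; this is the high-dimensional
# parameter space the abstract refers to, kept entirely on the GPU.
disp = torch.zeros(1, 64, 64, 64, 3, device=device, requires_grad=True)
opt = torch.optim.Adam([disp], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    warped = warp(moving, disp)
    # MSE stands in for the multi-modal similarity metric; a mutual-information
    # estimate would replace it for CT/MR pairs. The penalty term keeps the
    # deformation regular.
    loss = F.mse_loss(warped, fixed) + 1e-2 * disp.pow(2).mean()
    loss.backward()
    opt.step()

On a CUDA device the warp, loss, and gradient all stay on the GPU across iterations, which is the property that drives speedups of the kind reported above.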
Physics in Medicine and Biology | 2018
Faisal Mahmood; Richard Chen; Sandra Sudarsky; Daphne Yu; Nicholas J. Durr
Deep learning has emerged as a powerful artificial intelligence tool to interpret medical images for a growing variety of applications. However, the paucity of medical imaging data with high-quality annotations that is necessary for training such methods ultimately limits their performance. Medical data is challenging to acquire due to privacy issues, shortage of experts available for annotation, limited representation of rare conditions and cost. This problem has previously been addressed by using synthetically generated data. However, networks trained on synthetic data often fail to generalize to real data. Cinematic rendering simulates the propagation and interaction of light passing through tissue models reconstructed from CT data, enabling the generation of photorealistic images. In this paper, we present one of the first applications of cinematic rendering in deep learning, in which we propose to fine-tune synthetic data-driven networks using cinematically rendered CT data for the task of monocular depth estimation in endoscopy. Our experiments demonstrate that: (a) convolutional neural networks (CNNs) trained on synthetic data and fine-tuned on photorealistic cinematically rendered data adapt better to real medical images and demonstrate more robust performance when compared to networks with no fine-tuning, (b) these fine-tuned networks require less training data to converge to an optimal solution, and (c) fine-tuning with data from a variety of photorealistic rendering conditions of the same scene prevents the network from learning patient-specific information and aids in generalizability of the model. Our empirical evaluation demonstrates that networks fine-tuned with cinematically rendered data predict depth with 56.87% less error for rendered endoscopy images and 27.49% less error for real porcine colon endoscopy images.
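As a rough illustration of the fine-tuning recipe the abstract describes (pretrain on synthetic frames, then continue training on cinematically rendered frames), here is a minimal PyTorch sketch. The network architecture, checkpoint path, and data are hypothetical placeholders, not the authors' code or dataset.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in depth network; the paper's actual architecture is not reproduced here.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),            # one depth value per pixel
)
# model.load_state_dict(torch.load("synthetic_pretrained.pt"))  # hypothetical
# checkpoint from the synthetic-data pretraining stage.

# Cinematically rendered frames paired with depth from the underlying CT
# geometry; random tensors stand in for the rendered dataset.
frames = torch.rand(64, 3, 128, 128)
depths = torch.rand(64, 1, 128, 128)
loader = DataLoader(TensorDataset(frames, depths), batch_size=8, shuffle=True)

# Small learning rate: adapt the pretrained weights rather than overwrite them.
opt = torch.optim.Adam(model.parameters(), lr=1e-5)
loss_fn = nn.L1Loss()

model.train()
for epoch in range(5):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

Following point (c) of the abstract, the fine-tuning set would in practice include several rendering conditions of each scene, so batches mix lighting variants of the same anatomy rather than letting the network memorize patient-specific appearance.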
Archive | 2007
Jeffrey P. Johnson; John Patrick Collins; Mariappan S. Nadar; John S. Nafziger; Thomas Stingl; Daphne Yu
Archive | 2007
Daphne Yu; Robert Schneider
Archive | 2005
Gianluca Paladini; Daphne Yu
Archive | 2007
Daphne Yu; Jeffrey P. Johnson; Mariappan S. Nadar; John S. Nafziger; Thomas Stingl
Archive | 2007
Wei Li; Daphne Yu
Archive | 2004
Daphne Yu
Archive | 2012
Daphne Yu; Wei Li; Feng Qiu
Archive | 2007
Matthieu Dederichs; Klaus Engel; Daphne Yu