Lichao Wang
Technische Universität München
Publication
Featured research published by Lichao Wang.
IEEE Transactions on Medical Imaging | 2016
Abhishek Vahadane; Tingying Peng; Amit Sethi; Shadi Albarqouni; Lichao Wang; Maximilian Baust; Katja Steiger; Anna Melissa Schlitter; Irene Esposito; Nassir Navab
Staining and scanning of tissue samples for microscopic examination is fraught with undesirable color variations arising from differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques that are used for natural images fail to utilize structural properties of stained tissue samples and produce undesirable color distortions. The stain concentration cannot be negative. Tissue samples are stained with only a few stains and most tissue regions are characterized by at most one effective stain. We model these physical phenomena that define the tissue structure by first decomposing images in an unsupervised manner into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with stain color basis of a pathologist-preferred target image, thus altering only its color while preserving its structure described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method when compared to other alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample instead of using the entire image to compute the stain color basis.
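As a rough illustration of the stain-separation step described above, the sketch below factorizes optical-density pixels with a plain non-negative matrix factorization (scikit-learn's NMF as a stand-in for the authors' sparse NMF) and swaps in a target image's stain color basis. Patch sampling, the sparsity penalty, and stain ordering between source and target are glossed over; this is only meant to convey the decomposition-and-recombination idea.

```python
# Minimal sketch of stain separation + color transfer in optical-density space.
# Plain NMF stands in for the authors' sparse NMF; image I/O is omitted.
import numpy as np
from sklearn.decomposition import NMF

def rgb_to_od(img):
    """RGB uint8 image (H, W, 3) -> optical density, flattened to (H*W, 3)."""
    img = img.astype(np.float64).reshape(-1, 3)
    od = -np.log((img + 1.0) / 256.0)
    return np.maximum(od, 1e-6)          # keep strictly non-negative for NMF

def od_to_rgb(od, shape):
    rgb = 256.0 * np.exp(-od) - 1.0
    return np.clip(rgb, 0, 255).astype(np.uint8).reshape(shape)

def stain_decompose(img, n_stains=2):
    """Factor OD ~ H @ W: H are non-negative stain density maps, W the stain color basis."""
    od = rgb_to_od(img)
    model = NMF(n_components=n_stains, init="random", random_state=0, max_iter=500)
    H = model.fit_transform(od)          # (H*W, n_stains) density maps
    W = model.components_                # (n_stains, 3) color basis
    return H, W

def normalize_to_target(source_img, target_img):
    """Keep the source's density maps, swap in the target's stain color basis.
    Note: a real method would also match the stain ordering between the two bases."""
    H_src, _ = stain_decompose(source_img)
    _, W_tgt = stain_decompose(target_img)
    od_norm = H_src @ W_tgt
    return od_to_rgb(od_norm, source_img.shape)
```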
IEEE Transactions on Medical Imaging | 2013
Lichao Wang; Karim Lekadir; Su-Lin Lee; Robert Merrifield; Guang-Zhong Yang
This paper presents an online reinforcement learning framework for medical image segmentation. The concept of context-specific segmentation is introduced such that the model adapts not only to a defined objective function but also to the user's intention and prior knowledge. Based on this concept, a general segmentation framework using reinforcement learning is proposed, which can assimilate specific user intention and behavior seamlessly in the background. The method is able to establish an implicit model for a large state-action space and is generalizable to different image contents or segmentation requirements through learning in situ. To demonstrate the practical value of the method, example applications of the technique to four different segmentation problems are presented. Detailed validation results show that the proposed framework significantly reduces user interaction while maintaining both segmentation accuracy and consistency.
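The abstract does not spell out the state-action design, so the following is only an illustrative tabular Q-learning loop in the same spirit: a discretized state, a handful of hypothetical threshold-adjustment actions, and a reward that would come from agreement with the user's corrections. None of these design choices are taken from the paper.

```python
# Illustrative sketch only: a tiny Q-learning loop for "learning segmentation
# parameters from user interaction". State/action design is hypothetical.
import random
from collections import defaultdict

ACTIONS = [-10, 0, +10]               # hypothetical threshold adjustments
Q = defaultdict(float)                # Q[(state, action)] -> value
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def choose_action(state):
    if random.random() < epsilon:                       # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])    # exploit

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# In an interactive loop, `reward` would reflect how much the adjusted
# segmentation improves agreement with the user's latest correction, e.g.:
# update(state, action, dice_gain_after_user_edit, next_state)
```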
Nature Communications | 2017
Tingying Peng; Kurt Thorn; Timm Schroeder; Lichao Wang; Fabian J. Theis; Carsten Marr; Nassir Navab
Quantitative analysis of bioimaging data is often skewed by both shading in space and background variation in time. We introduce BaSiC, an image correction method based on low-rank and sparse decomposition that solves both issues. In comparison to existing shading correction tools, BaSiC achieves high accuracy with significantly fewer input images, works for diverse imaging conditions and is robust against artefacts. Moreover, it can correct temporal drift in time-lapse microscopy data and thus improve continuous single-cell quantification. BaSiC requires no manual parameter setting and is available as a Fiji/ImageJ plugin.
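A minimal sketch of the low-rank-plus-sparse intuition, assuming an image stack as input: the flat-field shared across images is approximated here by a rank-1 SVD and divided out, with the residual treated as sparse foreground. BaSiC's actual optimization (including dark-field and baseline-drift estimation) is more involved and is available as the Fiji/ImageJ plugin mentioned above.

```python
# Toy flat-field estimation via a rank-1 approximation of an image stack.
import numpy as np

def estimate_flatfield(stack):
    """stack: (n_images, H, W) float array -> estimated flat-field (H, W), mean 1."""
    n, h, w = stack.shape
    X = stack.reshape(n, h * w)
    # Rank-1 approximation: the dominant spatial pattern shared by all images.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    flat = np.abs(Vt[0]).reshape(h, w)
    flat = np.clip(flat, 1e-6, None)
    return flat / flat.mean()

def correct(stack):
    flat = estimate_flatfield(stack)
    return stack / flat[None, :, :]      # divide out the shading field
```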
medical image computing and computer-assisted intervention | 2014
Tingying Peng; Lichao Wang; Christine Bayer; Sailesh Conjeti; Maximilian Baust; Nassir Navab
Many microscopic imaging modalities suffer from intensity inhomogeneity due to uneven illumination or camera nonlinearity, known as shading artifacts. A typical example is the unwanted seam that appears when stitching images to obtain a whole slide image (WSI). Eliminating shading plays an essential role in subsequent image processing such as segmentation, registration, or tracking. In this paper, we propose two new retrospective shading correction algorithms targeted at the two common forms of WSI: multiple image tiles before mosaicking and an already-stitched image. Both methods build on recent advances in matrix rank minimization and sparse signal recovery. We show how the classic shading problem in microscopy can be reformulated as a decomposition into low-rank and sparse components, which seeks an optimal separation of the foreground objects of interest from the background illumination field. Additionally, a sparse constraint is introduced in the Fourier domain to ensure the smoothness of the recovered background. Extensive qualitative and quantitative validation on both synthetic and real microscopy images demonstrates superior shading removal compared with a well-established method in ImageJ.
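For the already-stitched case, the toy decomposition below alternates between a background that is kept sparse in the Fourier domain (soft-thresholding of its FFT coefficients, which favors a smooth, low-frequency field) and a sparse residual. The alternating scheme and the threshold values are illustrative choices, not the paper's optimization.

```python
# Sketch of a "smooth background + sparse residual" split for one stitched image.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def decompose(image, n_iter=20, fourier_thr=1.0, sparse_thr=0.05):
    """image: 2D float array -> (smooth background B, sparse residual S)."""
    B = image.copy()
    S = np.zeros_like(image)
    for _ in range(n_iter):
        # Background update: soft-threshold Fourier coefficients of (image - S).
        F = np.fft.fft2(image - S)
        F = soft_threshold(F.real, fourier_thr) + 1j * soft_threshold(F.imag, fourier_thr)
        B = np.real(np.fft.ifft2(F))
        # Foreground update: soft-threshold the residual to keep it sparse.
        S = soft_threshold(image - B, sparse_thr)
    return B, S
```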
international symposium on biomedical imaging | 2016
Manish Mishra; Sabine Schmitt; Lichao Wang; Michael Strasser; Carsten Marr; Nassir Navab; Hans Zischka; Tingying Peng
Mitochondrial functions are essential for cell survival. Pathologic situations, e.g. cancer, can impair mitochondrial function, which is frequently reflected in an altered morphology. So far, feature description of mitochondrial structure in cancer has remained largely qualitative. In this study, we propose a learning-based approach to quantitatively assess the structure of mitochondria isolated from liver tumor cell lines using a convolutional neural network (CNN). Besides achieving high classification accuracy on isolated mitochondria from healthy tissue and the tumor cell lines it was trained on, the CNN is also able to classify unseen tumor cell lines, which suggests that it captures the intrinsic structural transition from healthy to tumor mitochondria.
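A minimal sketch of a small CNN classifier of the kind described above; the architecture, the 64x64 grayscale patch size, and the three-class output are assumptions made for illustration, not the configuration used in the paper.

```python
# Tiny CNN for classifying grayscale mitochondria patches (illustrative only).
import torch
import torch.nn as nn

class MitoCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                      # x: (batch, 1, 64, 64)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = MitoCNN()
logits = model(torch.randn(4, 1, 64, 64))      # e.g. healthy vs. different tumor lines
```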
computer assisted radiology and surgery | 2016
Shadi Albarqouni; Ulrich Konrad; Lichao Wang; Nassir Navab; Stefanie Demirci
Purpose: X-ray imaging is widely used for guiding minimally invasive surgeries. Despite ongoing efforts, in particular toward advanced visualization incorporating mixed reality concepts, correct depth perception from X-ray imaging is still hampered by its projective nature. Methods: In this paper, we introduce a new concept for predicting depth information from single-view X-ray images. Patient-specific training data for depth and corresponding X-ray attenuation information are constructed using readily available preoperative 3D image information. The depth model is learned with a novel label-consistent dictionary learning method incorporating atlas and spatial prior constraints to allow for efficient reconstruction. Results: We have validated our algorithm on patient data acquired for different anatomical foci (abdomen and thorax). Of the 100 image pairs for each of 6 experimental instances, 80 images were used for training and 20 for testing. Depth estimation results were compared to ground-truth depth values. Conclusion: We have achieved around 4.40 % ± 2.04 …
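One generic way to realize the depth-from-attenuation idea is coupled dictionary learning: learn a joint dictionary over concatenated (attenuation patch, depth patch) training pairs, then sparse-code a test attenuation patch against the attenuation half and reconstruct depth from the depth half. The sketch below uses scikit-learn's DictionaryLearning and sparse_encode and omits the paper's label-consistency, atlas, and spatial prior terms.

```python
# Rough coupled-dictionary sketch for depth prediction from X-ray attenuation.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def train_joint_dictionary(xray_patches, depth_patches, n_atoms=64):
    """xray_patches, depth_patches: (n_samples, patch_dim) arrays from paired data."""
    joint = np.hstack([xray_patches, depth_patches])
    dl = DictionaryLearning(n_components=n_atoms, alpha=1.0, random_state=0)
    dl.fit(joint)
    D = dl.components_                       # (n_atoms, 2 * patch_dim)
    d = xray_patches.shape[1]
    return D[:, :d], D[:, d:]                # attenuation half, depth half

def predict_depth(xray_patch, D_xray, D_depth):
    code = sparse_encode(xray_patch[None, :], D_xray, algorithm="lasso_lars", alpha=0.1)
    return (code @ D_depth)[0]               # reconstructed depth patch
```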
international conference on medical imaging and augmented reality | 2010
Lichao Wang; Karim Lekadir; Ismail El-Hamamsy; Magdi H. Yacoub; Guang-Zhong Yang
international symposium on biomedical imaging | 2015
Lichao Wang; Vasileios Belagiannis; Carsten Marr; Fabian J. Theis; Guang-Zhong Yang; Nassir Navab
Journal of Robotic Surgery | 2013
Su-Lin Lee; Ka-Wai Kwok; Lichao Wang; Celia V. Riga; Colin Bicknell; Nicholas Cheshire; Guang-Zhong Yang
international conference on machine learning | 2011
Lichao Wang; Su-Lin Lee; Robert Merrifield; Guang-Zhong Yang