Publication


Featured research published by Lisa Tang.


IEEE Transactions on Medical Imaging | 2016

Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation

Tom Brosch; Lisa Tang; Youngjin Yoo; David Li; Anthony Traboulsee; Roger C. Tam

We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.
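The two-pathway architecture with shortcut connections can be pictured with a brief sketch. The following is a minimal, hedged PyTorch example, not the authors' implementation; the channel counts, kernel sizes, and the single shortcut are illustrative assumptions. It shows a convolutional pathway, a deconvolutional pathway, and a shortcut that adds low-level to high-level features before the voxel-wise prediction.

```python
# Minimal sketch (not the authors' code): a 3D convolutional encoder-decoder
# with one shortcut connection, illustrating the two-pathway idea above.
# Layer widths and kernel shapes are illustrative assumptions.
import torch
import torch.nn as nn

class ConvEncoderNet3D(nn.Module):
    def __init__(self, in_channels=3, features=32):
        super().__init__()
        # Convolutional pathway: learns increasingly abstract image features.
        self.enc1 = nn.Sequential(nn.Conv3d(in_channels, features, 9, padding=4), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv3d(features, 2 * features, 9, padding=4), nn.ReLU())
        # Deconvolutional pathway: predicts the segmentation at the voxel level.
        self.dec2 = nn.Sequential(nn.ConvTranspose3d(2 * features, features, 9, padding=4), nn.ReLU())
        self.dec1 = nn.ConvTranspose3d(features, 1, 9, padding=4)

    def forward(self, x):
        f1 = self.enc1(x)                 # low-level features
        f2 = self.enc2(f1)                # higher-level features
        d2 = self.dec2(f2)
        # Shortcut: integrate low- and high-level features before prediction.
        return torch.sigmoid(self.dec1(d2 + f1))

# Usage: a batch of multi-channel MR volumes -> voxel-wise lesion probabilities.
probs = ConvEncoderNet3D()(torch.randn(1, 3, 32, 64, 64))
```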


Medical Image Computing and Computer Assisted Intervention | 2015

Deep Convolutional Encoder Networks for Multiple Sclerosis Lesion Segmentation

Tom Brosch; Youngjin Yoo; Lisa Tang; David Li; Anthony Traboulsee; Roger C. Tam

We propose a novel segmentation approach based on deep convolutional encoder networks and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that has both convolutional and deconvolutional layers, and combines feature extraction and segmentation prediction in a single model. The joint training of the feature extraction and prediction layers allows the model to automatically learn features that are optimized for accuracy for any given combination of image types. In contrast to existing automatic feature learning approaches, which are typically patch-based, our model learns features from entire images, which eliminates patch selection and redundant calculations at the overlap of neighboring patches and thereby speeds up the training. Our network also uses a novel objective function that works well for segmenting underrepresented classes, such as MS lesions. We have evaluated our method on the publicly available labeled cases from the MS lesion segmentation challenge 2008 data set, showing that our method performs comparably to the state-of-the-art. In addition, we have evaluated our method on the images of 500 subjects from an MS clinical trial and varied the number of training samples from 5 to 250 to show that the segmentation performance can be greatly improved by having a representative data set.
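The idea of an objective suited to underrepresented classes can be illustrated with a hedged sketch. The weighting below is an assumption made for illustration, not the paper's exact formulation: errors on lesion voxels and on background voxels are normalized separately and combined with a tunable weight so the rare lesion class is not swamped by the background.

```python
# Hedged sketch of a class-imbalance-aware objective (illustrative weighting,
# not the paper's exact formulation): lesion and background errors are
# normalized separately, then combined.
import torch

def imbalance_aware_loss(pred, target, lesion_weight=0.95):
    # pred: voxel-wise probabilities; target: binary lesion labels (same shape).
    lesion_term = ((pred - target) ** 2 * target).sum() / target.sum().clamp(min=1)
    background_term = ((pred - target) ** 2 * (1 - target)).sum() / (1 - target).sum().clamp(min=1)
    return lesion_weight * lesion_term + (1 - lesion_weight) * background_term
```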


Medical Image Computing and Computer Assisted Intervention | 2008

Simulation of Ground-Truth Validation Data Via Physically- and Statistically-Based Warps

Ghassan Hamarneh; Preet Jassi; Lisa Tang

The problem of scarcity of ground-truth expert delineations of medical image data is a serious one that impedes the training and validation of medical image analysis techniques. We develop an algorithm for the automatic generation of large databases of annotated images from a single reference dataset. We provide a web-based interface through which the users can upload a reference data set (an image and its corresponding segmentation and landmark points), provide custom setting of parameters, and, following server-side computations, generate and download an arbitrary number of novel ground-truth data, including segmentations, displacement vector fields, intensity non-uniformity maps, and point correspondences. To produce realistic simulated data, we use variational (statistically-based) and vibrational (physically-based) spatial deformations, nonlinear radiometric warps mimicking imaging nonhomogeneity, and additive random noise with different underlying distributions. We outline the algorithmic details, present sample results, and provide the web address to readers for immediate evaluation and usage.
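A compact sketch of the simulation pipeline follows; it is an assumed, simplified implementation rather than the authors' web service. A reference image and segmentation are warped with a random smooth displacement field, given a nonlinear radiometric warp, and corrupted with additive noise, and the displacement field is returned as ground truth.

```python
# Simplified sketch (assumed implementation): warp a reference image and its
# segmentation with a random smooth displacement field, apply a nonlinear
# intensity warp, and add noise. The displacement field is the ground truth.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def simulate_case(image, seg, max_disp=5.0, sigma=8.0, gamma=1.2, noise_std=0.02, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # Random smooth displacement field (one component per image axis).
    disp = [gaussian_filter(rng.uniform(-1, 1, image.shape), sigma) * max_disp
            for _ in range(image.ndim)]
    grid = np.meshgrid(*[np.arange(s) for s in image.shape], indexing="ij")
    coords = [g + d for g, d in zip(grid, disp)]
    warped_img = map_coordinates(image, coords, order=1)
    warped_seg = map_coordinates(seg, coords, order=0)   # nearest neighbor for labels
    # Nonlinear radiometric warp mimicking intensity non-uniformity, plus noise.
    warped_img = np.clip(warped_img, 0, None) ** gamma
    warped_img = warped_img + rng.normal(0, noise_std, warped_img.shape)
    return warped_img, warped_seg, np.stack(disp)
```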


Workshop on Biomedical Image Registration | 2010

Reliability-driven, spatially-adaptive regularization for deformable registration

Lisa Tang; Ghassan Hamarneh; Rafeef Abugharbieh

We propose a reliability measure that identifies informative image cues useful for registration, and present a novel, data-driven approach to spatially adapt regularization to the local image content via use of the proposed measure. We illustrate the generality of this adaptive regularization approach within a powerful discrete optimization framework and present various ways to construct a spatially varying regularization weight based on the proposed measure. We evaluate our approach within the registration process using synthetic experiments and demonstrate its utility in real applications. As our results demonstrate, our approach yielded higher registration accuracy than non-adaptive approaches and the proposed reliability measure performed robustly even in the presence of noise and intensity inhomogeneity.
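The spatially varying regularization weight can be sketched on a toy 1D chain of control points. This is an assumed, simplified form of the energy, not the paper's exact discrete formulation: where the reliability measure says the local image content is informative, the smoothness weight is lowered so the data term dominates.

```python
# Toy sketch (assumed energy form): a registration energy whose smoothness
# weight lambda(x) varies spatially with a per-node reliability measure.
import numpy as np

def registration_energy(data_cost, labels, reliability, base_lambda=1.0):
    # data_cost[i, l]: dissimilarity of assigning displacement label l to node i
    # labels: integer array, chosen displacement label per node (1D chain)
    # reliability[i]: in [0, 1]; high reliability -> trust the data, regularize less
    lam = base_lambda * (1.0 - reliability)          # spatially varying weight
    data = data_cost[np.arange(len(labels)), labels].sum()
    smooth = sum(lam[i] * abs(int(labels[i]) - int(labels[i + 1]))
                 for i in range(len(labels) - 1))
    return data + smooth
```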


International Conference on Machine Learning | 2013

Improving Probabilistic Image Registration via Reinforcement Learning and Uncertainty Evaluation

Tayebeh Lotfi; Lisa Tang; Shawn Andrews; Ghassan Hamarneh

One framework for probabilistic image registration involves assigning probability distributions over spatial transformations (e.g. distributions over displacement vectors at each voxel). In this paper, we propose an uncertainty measure for these distributions that examines the actual spatial displacements, thus departing from the classical Shannon entropy-based measures, which examine only the probabilities of these distributions. We show that by incorporating the proposed uncertainty measure, along with features extracted from the input images and intermediate displacement fields, we are able to more accurately predict the pointwise registration errors of an intermediate solution as estimated for a previously unseen input image pair. We utilize the predicted errors to identify regions in the image that are trustworthy and through which we refine the tentative registration solution. Results show that our proposed framework, which incorporates uncertainty estimation and registration error prediction, can improve accuracy of 3D image registrations by about 25%.
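The contrast between a probability-only measure and a displacement-aware measure can be made concrete with a small sketch. The expected-pairwise-distance form below is one plausible instantiation offered as an assumption, not the paper's exact measure: it grows when the probable candidate displacements are far apart in space, even if their probabilities alone look similar to another distribution's.

```python
# Sketch: Shannon entropy looks only at probabilities, while a
# displacement-aware measure (one assumed form shown here) also weighs how far
# apart the candidate displacement vectors actually are.
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def displacement_uncertainty(p, displacements):
    # p[k]: probability of candidate displacement k
    # displacements[k]: the k-th candidate displacement vector (e.g. in mm)
    diffs = np.linalg.norm(displacements[:, None, :] - displacements[None, :, :], axis=-1)
    return (p[:, None] * p[None, :] * diffs).sum()   # expected pairwise distance
```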


Medical Image Analysis | 2012

Tongue contour tracking in dynamic ultrasound via higher-order MRFs and efficient fusion moves

Lisa Tang; Tim Bressmann; Ghassan Hamarneh

Analysis of human tongue motion as captured from 2D dynamic ultrasound data often requires segmentation of the mid-sagittal tongue contours. However, semi-automatic extraction of the tongue shape presents practical challenges. We approach this segmentation problem by proposing a novel higher-order Markov random field energy minimization framework. For efficient energy minimization, we propose two novel schemes to sample the solution space efficiently. To cope with the unpredictable tongue motion dynamics, we also propose to temporally adapt regularization based on contextual information. Unlike previous methods, we employ the latest optimization techniques to solve the tracking problem under one unified framework. Our method was validated on a set of 63 clinical data sequences, which allowed for comparative analyses with three other competing methods. Experimental results demonstrate that our method can segment sequences containing over 500 frames with a mean accuracy of 3 mm, approaching the accuracy of manual segmentations created by trained clinical observers.
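A toy illustration of the fusion idea follows. It is a greedy, coordinate-descent stand-in for the exact binary optimization used in fusion moves, with an assumed data-plus-smoothness energy: each vertex of the current contour is swapped with the corresponding vertex of a proposal contour whenever the swap lowers the energy.

```python
# Greedy stand-in for a fusion-move step (illustrative, not the paper's
# optimizer): fuse two candidate contours vertex by vertex whenever a swap
# lowers an assumed data + smoothness energy.
import numpy as np

def contour_energy(contour, data_cost, lam=1.0):
    # contour: (N, 2) vertex coordinates; data_cost(i, v): per-vertex data cost.
    data = sum(data_cost(i, v) for i, v in enumerate(contour))
    smooth = lam * np.linalg.norm(np.diff(contour, axis=0), axis=1).sum()
    return data + smooth

def fuse(contour_a, contour_b, data_cost, lam=1.0):
    fused = contour_a.copy()
    for i in range(len(fused)):
        candidate = fused.copy()
        candidate[i] = contour_b[i]
        if contour_energy(candidate, data_cost, lam) < contour_energy(fused, data_cost, lam):
            fused = candidate
    return fused
```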


Medical Image Computing and Computer Assisted Intervention | 2013

Random Walks with Efficient Search and Contextually Adapted Image Similarity for Deformable Registration

Lisa Tang; Ghassan Hamarneh

We develop a random walk-based image registration method that incorporates two novelties: 1) a progressive optimization scheme that conducts the solution search efficiently via a novel use of information derived from the obtained probabilistic solution, and 2) a data-likelihood re-weighting step that contextually performs feature selection in a spatially adaptive manner so that the data costs are based primarily on trusted information sources. Synthetic experiments on three public datasets of different anatomical regions and modalities showed that our method performed efficient search without sacrificing registration accuracy. Experiments performed on 60 real brain image pairs from a public dataset also demonstrated our method's better performance over existing non-probabilistic image registration methods.
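One way to picture the progressive search is as a pruning step driven by the probabilistic solution. The sketch below is an assumed helper, not the paper's code: per node, only the displacement labels that carry most of the marginal probability mass are kept, so the next optimization pass searches a smaller label space.

```python
# Sketch (assumption, not the paper's implementation) of pruning the
# displacement label space using marginals from a probabilistic solution.
import numpy as np

def prune_labels(marginals, keep_mass=0.9):
    # marginals[i, l]: probability of displacement label l at node i
    order = np.argsort(-marginals, axis=1)                 # labels by decreasing probability
    sorted_p = np.take_along_axis(marginals, order, axis=1)
    keep = np.cumsum(sorted_p, axis=1) <= keep_mass
    keep[:, 0] = True                                      # always keep the top label
    return [order[i, keep[i]] for i in range(marginals.shape[0])]  # per-node shortlist
```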


Physics in Medicine and Biology | 2010

Complexity and accuracy of image registration methods in SPECT-guided radiation therapy.

L Yin; Lisa Tang; Ghassan Hamarneh; Brad Gill; Anna Celler; Sergey Shcherbinin; Tsien-Fei Fua; Anna Thompson; Mitchell Liu; C Duzenli; Finbar Sheehan; Vitali Moiseenko

The use of functional imaging in radiotherapy treatment (RT) planning requires accurate co-registration of functional imaging scans to CT scans. We evaluated six methods of image registration for use in SPECT-guided radiotherapy treatment planning. Methods varied in complexity from 3D affine transform based on control points to diffeomorphic demons and level set non-rigid registration. Ten lung cancer patients underwent perfusion SPECT scans prior to their radiotherapy. CT images from a hybrid SPECT/CT scanner were registered to a planning CT, and then the same transformation was applied to the SPECT images. According to registration evaluation measures computed based on the intensity difference between the registered CT images or based on target registration error, non-rigid registrations provided a higher degree of accuracy than rigid methods. However, due to the irregularities in some of the obtained deformation fields, warping the SPECT using these fields may result in unacceptable changes to the SPECT intensity distribution that would preclude use in RT planning. Moreover, the differences between intensity histograms in the original and registered SPECT image sets were the largest for diffeomorphic demons and level set methods. In conclusion, the use of intensity-based validation measures alone is not sufficient for SPECT/CT registration in RT planning. It was also found that the proper evaluation of image registration requires the use of several accuracy metrics.
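The landmark-based accuracy metric mentioned above, target registration error, can be computed as in this brief sketch; the transform interface is an assumption for illustration.

```python
# Sketch of target registration error (TRE): distances between transformed
# landmarks and their reference positions. The transform interface is assumed.
import numpy as np

def target_registration_error(moving_pts, fixed_pts, transform):
    # transform: callable mapping a moving-space point to fixed space
    mapped = np.array([transform(p) for p in moving_pts])
    return np.linalg.norm(mapped - fixed_pts, axis=1)   # one TRE value per landmark
```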


Computer Vision and Pattern Recognition | 2008

SMRFI: Shape matching via registration of vector-valued feature images

Lisa Tang; Ghassan Hamarneh

We perform shape matching by transforming the problem of establishing shape correspondences into an image registration problem. At each vertex on the shape, we calculate a shape feature and encode this feature as image intensity at appropriate positions in the image domain. Calculating multiple features at each vertex and encoding them into the image domain results in a vector-valued feature image. Establishing point correspondence between two shapes is thereafter treated as a registration problem of two vector valued feature images. With this shape representation, various existing image registration strategies can now be easily applied. These include the use of a scale-space approach to diffuse the shape features, a coarse-to-fine registration scheme, and various deformable registration algorithms. As our validation shows, by representing shapes as vector valued images, the overall method is robust against noise and occlusions. To this end, we have successfully established 2D point correspondences of shapes of corpora callosa, vertebrae, and brain ventricles.
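A minimal sketch of the representation follows; the feature choice and the single-pixel encoding are simplifying assumptions. Per-vertex shape features are written into a vector-valued image at the vertex positions, after which any image registration pipeline, possibly preceded by scale-space smoothing of the feature channels, can be applied.

```python
# Sketch (simplified assumption): encode per-vertex shape features as the
# "intensities" of a vector-valued feature image, so shape matching becomes
# registration of two such images.
import numpy as np

def shape_to_feature_image(vertices, features, image_shape):
    # vertices: (N, 2) pixel coordinates (x, y); features: (N, K) per-vertex features
    # image_shape: (rows, cols) of the target image domain
    feat_img = np.zeros(image_shape + (features.shape[1],))
    for (x, y), f in zip(vertices, features):
        feat_img[int(y), int(x)] = f          # write the feature vector at the vertex
    return feat_img
```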


International Symposium on Signal Processing and Information Technology | 2006

Co-registration of Bone CT and SPECT Images Using Mutual Information

Lisa Tang; Ghassan Hamarneh; Anna Celler

We present an automatic and accurate technique for 3D co-registration of SPECT and CT. The method allows the attenuation correction of SPECT images and fusion of the anatomic details from CT and the functional information from SPECT. Registration was achieved by optimizing the mutual information metric over the parameter space defined by the translation and rotation parameters. To improve the robustness and accuracy of the algorithm, registration was performed in a coarse-to-fine manner. We applied the algorithm on three clinical data sets originating from 1 pelvic and 2 thoracic studies. Validation was done by inspecting the 2D and 3D fusion of the registered images and by observing the convergence in the metric and the transformation parameters. We also evaluated quantitatively the effects of the choice of the parameters, the number of multiresolution levels, and initial misalignment of the paired volumes. Registration of both studies converged close to a final alignment with a maximum translational error of 1.41 mm ± 0.78 mm and rotational error of 1.21° ± 0.46° for the thoracic study and a maximum translational error of 1.96 mm ± 1.27 mm and rotational error of 0.57° ± 0.34° for the pelvic studies. The average computation time on a 3.0 GHz PC was < 4 minutes for the entire registration procedure. We conclude that the algorithm had successfully co-registered the CT and SPECT images.
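The registration is driven by maximizing mutual information over the translation and rotation parameters; a standard histogram-based estimate of that metric is sketched below (the binning and implementation details are assumptions, not the paper's exact code).

```python
# Standard histogram-based estimate of mutual information between two images
# (implementation details are assumptions; the metric itself is as named above).
import numpy as np

def mutual_information(a, b, bins=64):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                       # joint intensity distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)       # marginals
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum()
```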

Collaboration


Dive into Lisa Tang's collaborations.

Top Co-Authors

Roger C. Tam, University of British Columbia
Anthony Traboulsee, University of British Columbia
David Li, University of British Columbia
Youngjin Yoo, University of British Columbia
Anna Celler, University of British Columbia
Tom Brosch, University of British Columbia
Shannon H. Kolind, University of British Columbia
Yue Wang, Simon Fraser University