
Publication


Featured research published by Terrence Chen.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

Total variation models for variable lighting face recognition

Terrence Chen; Wotao Yin; Xiang Sean Zhou; Dorin Comaniciu; Thomas S. Huang

In this paper, we present the logarithmic total variation (LTV) model for face recognition under varying illumination, including natural lighting conditions, where the strength, direction, or number of light sources is rarely known. The proposed LTV model can factorize a single face image to obtain the illumination-invariant facial structure, which is then used for face recognition. Our model is inspired by the SQI model but has better edge-preserving ability and simpler parameter selection. The merit of this model is that it requires neither a lighting assumption nor any training. The LTV model achieves very high recognition rates in tests on both the Yale and CMU PIE face databases, as well as on a face database containing 765 subjects under outdoor lighting conditions.
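The factorization step can be sketched as follows. This is an illustrative gradient-descent TV smoother with made-up parameters (`lam`, `n_iter`, `dt`), not the authors' actual solver:

```python
import numpy as np

def tv_smooth(u, lam=0.1, n_iter=100, dt=0.1):
    # Illustrative gradient-descent TV smoothing (not the paper's solver).
    v = u.copy()
    eps = 1e-6
    for _ in range(n_iter):
        gx = np.gradient(v, axis=1)
        gy = np.gradient(v, axis=0)
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        div = np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0)
        v = v + dt * (div - lam * (v - u))
    return v

def ltv_factorize(face, lam=0.4):
    # Work in the log domain: log f = large-scale illumination + small-scale structure.
    logf = np.log(face + 1.0)
    illumination = tv_smooth(logf, lam=lam)   # smooth, large-scale component
    structure = logf - illumination           # illumination-invariant facial structure
    return illumination, structure
```

Because the decomposition is additive in the log domain, the two components always sum back to the log image exactly.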


Computer Vision and Pattern Recognition | 2005

Illumination normalization for face recognition and uneven background correction using total variation based image models

Terrence Chen; Wotao Yin; Xiang Sean Zhou; Dorin Comaniciu; Thomas S. Huang

We present a new algorithm for illumination normalization and uneven background correction in images, utilizing the recently proposed TV+L1 model: minimizing the total variation of the output cartoon subject to an L1-norm fidelity term. We give intuitive proofs of its main advantages, including the well-known edge-preserving capability, minimal signal distortion, and scale-dependent but intensity-independent foreground extraction. We then propose a novel TV-based quotient image model (TVQI) for illumination normalization, an important preprocessing step for face recognition under different lighting conditions. Using this model, we achieve a 100% face recognition rate on the Yale face database B when the reference images are under good lighting conditions, and 99.45% when they are not. These results, compared to the average 65% recognition rate of the quotient image model and the average 95% recognition rate of the more recent self-quotient image model, show a clear improvement. In addition, this model requires no training data, no assumption about the light source, and no alignment between different images for illumination normalization. We also present results of related applications: uneven background correction for cDNA microarray films and digital microscope images. We believe the proposed methods can serve important roles in the related fields.
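The quotient-image idea can be illustrated with a minimal sketch. Here a separable Gaussian blur stands in for the TV cartoon, so this is only a rough approximation of the TVQI model, and `sigma` is an assumed parameter:

```python
import numpy as np

def quotient_image(f, sigma=3.0):
    # Quotient Q = f / cartoon(f); a separable Gaussian blur stands in
    # for the TV cartoon used in the actual TVQI model.
    k = int(3 * sigma)
    x = np.arange(-k, k + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    blur_rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, f)
    cartoon = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, blur_rows)
    return f / (cartoon + 1e-6)
```

On a uniformly lit region the quotient is close to 1, so slowly varying illumination divides out while fine structure survives.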


Computer Vision and Pattern Recognition | 2011

Learning-based hypothesis fusion for robust catheter tracking in 2D X-ray fluoroscopy

Wen Wu; Terrence Chen; Peng Wang; Shaohua Kevin Zhou; Dorin Comaniciu; Adrian Barbu; Norbert Strobel

Catheter tracking has become increasingly important in recent interventional applications. It provides real-time navigation for physicians and can be used to control a motion-compensated fluoro overlay reference image for other means of guidance, e.g. involving a 3D anatomical model. Tracking the coronary sinus (CS) catheter is effective for compensating respiratory and cardiac motion in 3D overlay navigation to assist positioning the ablation catheter in atrial fibrillation (AFib) treatments. During interventions, the CS catheter undergoes rapid motion and non-rigid deformation due to the beating heart and respiration. In this paper, we model the CS catheter as a set of electrodes. Specially designed hypotheses generated by a number of learning-based detectors are fused. Robust hypothesis matching through a Bayesian framework is then used to select the best hypothesis for each frame. As a result, our tracking method achieves very high robustness against challenging scenarios such as low SNR, occlusion, foreshortening, non-rigid deformation, and the catheter moving in and out of the ROI. Quantitative evaluation has been conducted on a database of 13221 frames from 1073 sequences. Our approach obtains a 0.50 mm median error and a 0.76 mm mean error, and 97.8% of the evaluated data have errors of less than 2.00 mm. The speed of our tracking algorithm reaches 5 frames per second on most data sets. Our approach is not limited to catheters inside the CS and can be extended to track other types of catheters, such as ablation catheters or circumferential mapping catheters.
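The Bayesian hypothesis selection step can be approximated by a minimal sketch. The Gaussian motion prior and the `sigma` parameter below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def select_hypothesis(hypotheses, scores, prev_state, sigma=5.0):
    # Pick the hypothesis with the highest unnormalized log-posterior:
    # detector confidence (treated as a likelihood) combined with a
    # Gaussian motion prior on the mean squared electrode displacement
    # from the previous frame.
    best, best_post = None, float("-inf")
    for h, s in zip(hypotheses, scores):
        d2 = np.sum((h - prev_state) ** 2, axis=1).mean()
        log_post = np.log(s + 1e-12) - d2 / (2.0 * sigma ** 2)
        if log_post > best_post:
            best, best_post = h, log_post
    return best
```

The prior penalizes hypotheses that jump far from the previous frame, which is what makes the fusion robust to isolated spurious detections.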


Medical Image Computing and Computer Assisted Intervention | 2012

Ultrasound and fluoroscopic images fusion by autonomous ultrasound probe detection

Peter Mountney; Razvan Ioan Ionasec; Markus Kaizer; Sina Mamaghani; Wen Wu; Terrence Chen; Matthias John; Jan Boese; Dorin Comaniciu

New minimally invasive interventions such as transcatheter valve procedures exploit multiple imaging modalities to guide tools (fluoroscopy) and visualize soft tissue (transesophageal echocardiography, TEE). Currently, these complementary modalities are visualized in separate coordinate systems and on separate monitors, creating a challenging clinical workflow. This paper proposes a novel framework for fusing TEE and fluoroscopy by detecting the pose of the TEE probe in the fluoroscopic image. Probe pose detection is challenging in fluoroscopy, and conventional computer vision techniques are not well suited to it. Current research requires manual initialization or the addition of fiducials. The main contribution of this paper is autonomous six-DoF pose detection that combines discriminative learning techniques with a fast binary template library. The pose estimation problem is reformulated to incrementally detect pose parameters by exploiting natural invariances in the image. The theoretical contribution of this paper is validated on synthetic, phantom, and in vivo data. The practical applicability of this technique is supported by accurate results (< 5 mm in-plane error) and a computation time of 0.5 s.
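Matching against a binary template library can be sketched as a Hamming-distance nearest-neighbor search. The binarized representation below is hypothetical; how the probe appearance is actually binarized is outside the scope of this sketch:

```python
import numpy as np

def match_template(query_bits, library):
    # Nearest template by Hamming distance; rows of `library` are binary
    # templates, `query_bits` is the binarized probe appearance.
    dists = np.count_nonzero(library != query_bits, axis=1)
    return int(np.argmin(dists))
```

Hamming distance on packed binary descriptors is cheap, which is what makes a large pose template library fast enough for incremental pose search.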


Medical Image Computing and Computer Assisted Intervention | 2011

Image-based device tracking for the co-registration of angiography and intravascular ultrasound images

Peng Wang; Terrence Chen; Olivier Ecabert; Simone Prummer; Martin Ostermeier; Dorin Comaniciu

The accurate and robust tracking of catheters and transducers employed during image-guided coronary intervention is critical to improving the clinical workflow and procedure outcome. Image-based device detection and tracking methods are preferred due to their straightforward integration into existing medical equipment. In this paper, we present a novel computational framework for image-based device detection and tracking applied to the co-registration of angiography and intravascular ultrasound (IVUS), two modalities commonly used in interventional cardiology. The proposed system includes learning-based detection, model-based tracking, and registration using the geodesic distance. The system receives as input the selection of the coronary branch under investigation in a reference angiography image. During the subsequent pullback of the IVUS transducer, the system automatically tracks the position of the medical devices, including the IVUS transducer and guiding catheter tip, under fluoroscopy imaging. The localization of the IVUS transducer and guiding catheter tip is used to continuously associate an IVUS imaging plane with the vessel branch under investigation. We validated the system on a set of 65 clinical cases, with high accuracy (mean errors less than 1.5 mm) and robustness (98.46% success rate). To our knowledge, this is the first reported system able to automatically establish a robust correspondence between angiography and IVUS images, thus providing clinicians with a comprehensive view of the coronaries.
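The geodesic-distance association can be illustrated by mapping a tracked device position to an arc length along the selected vessel centerline. This is a simplified sketch of the idea, not the paper's full registration:

```python
import numpy as np

def geodesic_position(centerline, device_xy):
    # Map a tracked device position to its arc length along an ordered
    # vessel centerline (geodesic distance from the first point).
    seg = np.diff(centerline, axis=0)
    cum = np.concatenate([[0.0], np.cumsum(np.linalg.norm(seg, axis=1))])
    i = int(np.argmin(np.linalg.norm(centerline - device_xy, axis=1)))
    return cum[i], i
```

Indexing the IVUS frames by arc length rather than raw image coordinates is what makes the association stable against in-plane vessel motion.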


International Conference on Computer Vision | 2009

Automatic ovarian follicle quantification from 3D ultrasound data using global/local context with database guided segmentation

Terrence Chen; Wei Zhang; Sara Good; Kevin S. Zhou; Dorin Comaniciu

In this paper, we present a novel probabilistic framework for automatic follicle quantification in 3D ultrasound data. The proposed framework robustly estimates the size and location of each individual ovarian follicle by fusing information from both global and local context. Follicle candidates at detected locations are then segmented by a novel database-guided segmentation method. To efficiently search hypotheses in a high-dimensional space for multiple-object detection, a clustered marginal space learning approach is introduced. Extensive evaluations conducted on 501 volumes containing 8108 follicles showed that our method is able to detect and segment ovarian follicles with high robustness and accuracy, and that it is much faster than the current manual ultrasound workflow. The proposed method is able to streamline the clinical workflow and improve the accuracy of existing follicular measurements.
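Marginal space learning searches the parameter space stage by stage, keeping only the best candidates at each stage instead of scoring the full joint space. A toy sketch, with the trained detectors replaced by arbitrary score functions, might look like this:

```python
def marginal_space_search(score_pos, score_pos_scale, positions, scales, keep=10):
    # Stage 1: rank position-only candidates and keep the best few.
    ranked = sorted(positions, key=lambda p: -score_pos(*p))[:keep]
    # Stage 2: evaluate scale only at the surviving positions.
    best, best_score = None, float("-inf")
    for (x, y) in ranked:
        for s in scales:
            v = score_pos_scale(x, y, s)
            if v > best_score:
                best, best_score = (x, y, s), v
    return best
```

Pruning after the position stage reduces the joint search from |positions| x |scales| evaluations to roughly |positions| + keep x |scales|.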


Pattern Recognition Letters | 2015

Robust object tracking using semi-supervised appearance dictionary learning

Lei Zhang; Wen Wu; Terrence Chen; Norbert Strobel; Dorin Comaniciu

A novel online semi-supervised dictionary update method is proposed. Combined with a pre-trained SVM, the proposed method can dynamically update the appearance dictionary for object tracking. We integrate the proposed dictionary learning method and a sparsity-based tracker into a tracking framework. Results have shown that the proposed framework can outperform several state-of-the-art tracking methods even when drastic appearance variations happen. It is a challenging task to develop robust object tracking methods that overcome dynamic object appearance and background changes. Online learning-based methods have been widely applied to cope with these challenges; however, online methods suffer from the problem of drifting. Sparse appearance representation has recently shown promising object tracking results, but it lacks the information updates needed to accurately track objects in long sequences or when object appearance drastically changes. In this paper, we propose a novel framework for tracking objects using a semi-supervised appearance dictionary learning method. First, an object appearance dictionary is learned on the initial frame. Second, a graph model is employed to learn new bases when object appearance change is detected; the selected bases automatically replace the current rarely used bases. The proposed method is quantitatively compared with state-of-the-art methods on several challenging data sets. Results show that our framework outperforms other methods even when drastic appearance variations occur.
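The replace-rarely-used-bases step can be sketched as follows. The usage-count bookkeeping and the `threshold` parameter are illustrative assumptions, not the paper's graph-model criterion:

```python
import numpy as np

def update_dictionary(D, usage, new_atom, threshold=2):
    # Replace the least-used atom (column) with a new, L2-normalized
    # appearance sample, but only if that atom is genuinely rarely used.
    j = int(np.argmin(usage))
    if usage[j] <= threshold:
        D = D.copy()
        D[:, j] = new_atom / (np.linalg.norm(new_atom) + 1e-12)
        usage = usage.copy()
        usage[j] = 0
    return D, usage, j
```

Replacing only rarely used atoms keeps the dictionary adaptive to new appearances without discarding the bases that still explain the object well, which is the intuition behind resisting drift.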


Proceedings of SPIE | 2009

User-constrained guidewire localization in fluoroscopy

Philippe Mazouer; Terrence Chen; Ying Zhu; Peng Wang; Peter Durlak; Jean-Philippe Thiran; Dorin Comaniciu

In this paper we present a learning-based guidewire localization algorithm that can be constrained by user inputs. The proposed algorithm automatically localizes guidewires in fluoroscopic images. In cases where the results are not satisfactory, the user can constrain the algorithm by clicking on the guidewire segment missed by the detection algorithm. The algorithm then re-localizes the guidewire and updates the result in less than 0.3 seconds. In extreme cases, more constraints can be provided until a satisfactory result is reached. The proposed algorithm not only serves as an efficient initialization tool for guidewire tracking, but also as an efficient annotation tool, either for cardiologists to mark the guidewire or to build up a labeled database for evaluation. By improving the initialization of guidewire tracking, it also helps to improve the visibility of the guidewire during interventional procedures. Our study shows that even highly complicated guidewires can mostly be localized within 5 seconds with fewer than 6 clicks.
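One simple way to honor a user click is to boost the scores of candidate segments passing near it before re-selecting the result. The `radius` and `bonus` parameters below are hypothetical; the paper's actual constraint mechanism is not specified in this abstract:

```python
import numpy as np

def rescore_with_clicks(segments, scores, clicks, radius=15.0, bonus=10.0):
    # Boost the score of any candidate segment (polyline of points) that
    # passes within `radius` pixels of a user click.
    out = list(scores)
    for i, seg in enumerate(segments):
        for c in clicks:
            if np.min(np.linalg.norm(seg - np.asarray(c), axis=1)) <= radius:
                out[i] += bonus
    return out
```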


Pacific Rim Conference on Multimedia | 2003

Feature design in soccer video indexing

Mei Han; Wei Hua; Terrence Chen; Yihong Gong

Unlike American football, baseball, tennis, and many other sports, soccer is not a well-structured game. Soccer videos are essentially continuous streams with exciting highlights embedded, but highlights of the same type are correlated in their spatial and temporal feature distributions. In this paper we present an effective scheme to represent soccer scenes with low/mid-level image and sound features. We discuss three aspects of feature design in a soccer video indexing system: temporal structure, low/mid-level features, and domain-specific knowledge. We use a maximum-entropy-based machine learning method as a test platform to verify the feature design scheme; the maximum-entropy method can automatically choose the features with more distinguishing power. The feature representation is applied to soccer video indexing. Extensive experiments are conducted and satisfactory results are reported.


IEEE Transactions on Medical Imaging | 2013

Image-based Co-Registration of Angiography and Intravascular Ultrasound Images

Peng Wang; Olivier Ecabert; Terrence Chen; Michael Wels; Johannes Rieber; Martin Ostermeier; Dorin Comaniciu

In image-guided cardiac interventions, X-ray imaging and intravascular ultrasound (IVUS) imaging are two frequently used modalities. Interventional X-ray images, including angiography and fluoroscopy, are used to assess the lumen of the coronary arteries and to monitor devices in real time. IVUS provides rich intravascular information, such as vessel wall composition, plaque, and stent expansion, but lacks spatial orientation. Since the two imaging modalities are complementary, it is highly desirable to co-register them to provide a comprehensive picture of the coronaries for interventional cardiologists. In this paper, we present a solution for co-registering 2-D angiography and IVUS through image-based device tracking. The presented framework includes learning-based vessel and device detection, model-based tracking, and geodesic distance-based registration. The system first interactively detects the coronary branch under investigation in a reference angiography image. During the pullback of the IVUS transducer, the system acquires both ECG-triggered fluoroscopy and IVUS images, and automatically tracks the position of the medical devices in fluoroscopy. The localization of the tracked IVUS transducer and guiding catheter tip is used to associate an IVUS imaging plane with a corresponding location on the vessel branch under investigation. The presented image-based solution can be conveniently integrated into the existing cardiology workflow. The system is validated on a set of clinical cases and achieves good accuracy and robustness.
