Dan Koppel
University of California, Santa Barbara
Publications
Featured research published by Dan Koppel.
Medical Imaging 2007: Visualization and Image-Guided Procedures | 2007
Dan Koppel; Chao-I Chen; Yuan-Fang Wang; Hua Lee; Jia Gu; Allen Poirson; Rolf Wolters
A 3D colon model is an essential component of a computer-aided diagnosis (CAD) system in colonoscopy, assisting surgeons in visualization, surgical planning, and training. This research is thus aimed at developing the ability to construct a 3D colon model from endoscopic videos (or images). This paper summarizes our ongoing research in automated model building in colonoscopy. We have developed the mathematical formulations and algorithms for modeling static, localized 3D anatomic structures within a colon that can be rendered from multiple novel viewpoints for close scrutiny and precise dimensioning. This ability is useful when a surgeon notices some abnormal tissue growth and wants a close inspection and precise dimensioning. Our modeling system uses only video images and follows a well-established computer-vision paradigm for image-based modeling. We extract prominent features from images and establish their correspondences across multiple images by continuous tracking and discrete matching. We then use these feature correspondences to infer the camera's movement. The camera motion parameters allow us to rectify images into a standard stereo configuration and calculate pixel movements (disparity) in these images. The inferred disparity is then used to recover 3D surface depth. The inferred 3D depth, together with texture information recorded in the images, allows us to construct a 3D model with both structure and appearance information that can be rendered from multiple novel viewpoints.
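The pipeline described here (feature tracking and matching, camera-motion inference, stereo rectification, disparity, and depth) follows the standard two-view structure-from-motion recipe. The sketch below is a minimal, illustrative version of that recipe using OpenCV; the grayscale frames frame1 and frame2 and the intrinsic matrix K are assumed inputs, and it should not be read as the authors' actual implementation.

# Illustrative two-view structure-from-motion sketch (not the authors' code).
# Assumes: frame1, frame2 are grayscale endoscopic frames and K is the 3x3
# camera intrinsic matrix obtained from a prior calibration step.
import cv2
import numpy as np

def two_view_reconstruction(frame1, frame2, K):
    # 1. Extract prominent features and match them across the two frames.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2. Infer the camera's movement (rotation R, translation t) from the matches.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # 3. Triangulate the surviving correspondences to recover 3D surface points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    good = pose_mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
    pts3d = (pts4d[:3] / pts4d[3]).T   # Euclidean 3D points, up to scale
    return R, t, pts3d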
Medical Image Computing and Computer Assisted Intervention | 2001
Dan Koppel; Yuan-Fang Wang; Hua Lee
Video-endoscopy has proven to be significantly less invasive to the patient. However, it also creates a more complex and difficult operating environment that requires the surgeon to operate through a video interface. Visual feedback control and image interpretation in this operating environment can be troublesome. Automated image analysis has tremendous potential to improve the surgeon's visual feedback, resulting in better patient safety, reduced operation time, and savings in health care. In this paper, we present our design of an image rectification algorithm for maintaining the head-up display in video-endoscopy and report some preliminary results.
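The head-up display idea can be illustrated with a simple roll-compensation warp: if the endoscope's roll angle relative to the scene's "up" direction is known (here assumed to come from the estimated camera motion or an orientation sensor), each frame is rotated back so that "up" stays vertical on the display. This is only a sketch of the general idea, not the paper's algorithm.

# Minimal roll-compensation sketch (illustrative; not the paper's algorithm).
# Assumes roll_deg is the endoscope roll angle in degrees, e.g. accumulated
# from the estimated frame-to-frame camera rotation or read from a sensor.
import cv2

def rectify_head_up(frame, roll_deg):
    h, w = frame.shape[:2]
    # Rotate about the image center by the negative roll so the scene's
    # "up" direction remains vertical on the display.
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), -roll_deg, 1.0)
    return cv2.warpAffine(frame, M, (w, h))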
Workshop on Applications of Computer Vision | 2005
Dan Koppel; Yuan-Fang Wang; Hua Lee
This paper presents a unified framework for achieving robust and real-time image stabilization and rectification. While compensating for a small amount of image jitter due to platform vibration and hand tremor is not a very difficult task, canceling a large amount of image jitter, due to significant, long-range, and purposeful camera motion (such as panning, zooming, and rotation), is much more challenging. Our framework selectively compensates for unwanted camera motion to maintain a stable view of the scene. The rectified display has the same information content, but is shown in a much more operator-friendly way. Our contribution is threefold: (1) proposing a unified image rectification algorithm to cancel large and purposeful image motion to achieve a stable display that is applicable for both far-field and near-field image conditions, (2) improving the robustness and real-time performance of these algorithms with extensive validation on real images, and (3) illustrating the potential of these algorithms by applying them to real-world problems in diverse application domains.
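A bare-bones version of frame-to-frame jitter compensation can be sketched as follows: track sparse features, robustly fit a similarity transform, and warp the current frame back toward the previous one. Distinguishing jitter from purposeful motion, which is the hard part this paper addresses, would additionally require modeling or smoothing the motion trajectory; that step is omitted here, so this is only an assumed baseline, not the paper's method.

# Baseline frame-to-frame stabilization sketch (assumed; not the paper's method).
import cv2
import numpy as np

def stabilize_pair(prev_gray, curr_gray, curr_frame):
    # Track sparse corners from the previous frame into the current one.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=8)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    ok = status.ravel() == 1
    # Robustly estimate a similarity transform describing the frame-to-frame motion.
    M, _ = cv2.estimateAffinePartial2D(pts_curr[ok], pts_prev[ok],
                                       method=cv2.RANSAC)
    # Warp the current frame back so the view stays aligned with the previous one.
    h, w = curr_frame.shape[:2]
    return cv2.warpAffine(curr_frame, M, (w, h))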
International Journal of Imaging Systems and Technology | 2004
Dan Koppel; Yuan-Fang Wang; Hua Lee
Video endoscopy, a modality of minimally invasive surgery, has proven to be significantly less invasive to the patient. However, relying on a video interface to observe the surgical scene introduces several nonintuitive effects that must be contended with. One of these is the discrepancy in view position and orientation between the surgeon and the endoscope. Eliminating such artifacts in the visual feedback is important for improving the effectiveness of surgical procedures. This article presents a novel rendering technique that reproduces the scene from a "surgeon-centric" viewpoint, reducing or removing this discrepancy and improving the visual precision of endoscopic procedures.
International Symposium on Visual Computing | 2008
Chao-I Chen; Dusty Sargent; Chang-Ming Tsai; Yuan-Fang Wang; Dan Koppel
A method for stabilizing the computation of stereo correspondences is presented in this paper. Delaunay triangulation is employed to partition the input images into small, localized regions. Instead of simply assuming that the surface patches viewed from these small triangles are locally planar, we explicitly examine the planarity hypothesis in the 3D space. To perform the planarity test robustly, adjacent triangles are merged into larger polygonal patches first and then the planarity assumption is verified. Once piece-wise planar patches are identified, point correspondences within these patches are readily computed through planar homographies. These point correspondences established by planar homographies serve as the ground control points (GCPs) in the final dynamic programming (DP)-based correspondence matching process. Our experimental results show that the proposed method works well on real indoor, outdoor, and medical image data and is also more efficient than the traditional DP method.
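The two key steps, the explicit 3D planarity test on a merged patch and the homography-based transfer that yields the ground control points, could be sketched as follows; the thresholds, variable names, and data layout are assumptions rather than the paper's actual parameters.

# Illustrative planarity test and homography point transfer (assumptions:
# pts3d is an Nx3 array of reconstructed points in a merged patch, and
# left_pts / right_pts are matched Nx2 image coordinates in the two views).
import cv2
import numpy as np

def is_planar(pts3d, rel_tol=0.02):
    # Fit a plane by SVD of the centered points; the smallest singular value
    # measures out-of-plane spread relative to the patch extent.
    centered = pts3d - pts3d.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)
    return s[2] <= rel_tol * s[0]

def homography_gcps(left_pts, right_pts, query_pts):
    # Once a patch is accepted as planar, a homography maps any point in the
    # left view to its correspondence in the right view; these transferred
    # correspondences serve as ground control points for the DP matcher.
    H, _ = cv2.findHomography(left_pts, right_pts, cv2.RANSAC)
    q = query_pts.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(q, H).reshape(-1, 2)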
Computer Vision and Pattern Recognition | 2008
Dan Koppel; Chang-Ming Tsai; Yuan-Fang Wang
This paper reports a technique that improves the robustness and accuracy of computing dense optical-flow fields. We propose a global formulation with a regularization term. The regularization expressions are derived based on tensor theory and complex analysis. It is shown that while many regularizers have been proposed (image-driven, flow-driven, homogeneous, inhomogeneous, isotropic, anisotropic), they are all variations of a single base expression, ∇u∇u^T + ∇v∇v^T. These regularizers, strictly speaking, are valid for uniform 2D translational motion only, because what they essentially do is penalize changes in a flow field. However, many flow patterns, such as rotation, zoom, and their combinations induced by a 3D rigid-body motion, are not constant. The traditional regularizers then incorrectly penalize these legal flow patterns and result in biased estimates. The purpose of this work is therefore to derive a new suite of regularization expressions that treat all valid flow patterns resulting from a 3D rigid-body motion equally, without unfairly penalizing any of them.
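Written out in standard global-formulation notation (the exact symbols here are an assumption, following the common Horn-Schunck-style form rather than the paper's own), the energy being minimized has the shape

E(u,v) = \int_{\Omega} \left( I_x u + I_y v + I_t \right)^2 \, d\mathbf{x}
       + \lambda \int_{\Omega} \operatorname{tr}\!\left( \nabla u \, \nabla u^{T} + \nabla v \, \nabla v^{T} \right) d\mathbf{x},

and since \operatorname{tr}(\nabla u \, \nabla u^{T}) = \lVert \nabla u \rVert^{2}, the smoothness term penalizes any spatial variation of the flow, including the variation produced by rotation or zoom, which is exactly the bias the paper sets out to remove.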
International Symposium on Biomedical Imaging | 2004
Dan Koppel; Yuan-Fang Wang; Shivkumar Chandrasekaran
We present a behavior simulation algorithm that has the potential of enabling physically-correct, photo-realistic, and real-time behavior simulation for soft tissues and organs. Our approach combines a physically-correct formulation based on boundary element methods with an efficient numeric solver. This approach can have a significant impact in off-line surgical training and simulation, and in on-line computer-assisted surgery.
Workshop on Applications of Computer Vision | 2002
Dan Koppel; Yuan-Fang Wang; Hua Lee
Video-endoscopy, a mode of minimally invasive surgery, has proven to be significantly less invasive to the patient. However, it creates a much more complex operating environment that requires the surgeon to operate through a video interface. Visual feedback control and image interpretation can be difficult. Poor visual feedback in video-endoscopy prolongs the operation time, increases the risk to the patient, and drives up the cost of health care. It is a major roadblock both in replacing the traditional, highly traumatic open surgical procedures with the much less invasive, more patient-friendly video-endoscopy, and in training surgeons to master this new mode of operation. Our research objective is thus to design, code, and validate on real images novel image analysis and rectification algorithms that enhance the visual feedback to the surgeon in video-endoscopy.
Proceedings of SPIE | 2009
Chao-I Chen; Dusty Sargent; Chang-Ming Tsai; Yuan-Fang Wang; Dan Koppel
3D computer models of body anatomy can have many uses in medical research and clinical practice. This paper describes a robust method that uses videos of body anatomy to construct multiple partial 3D structures and then fuse them to form a larger, more complete computer model using the structure-from-motion framework. We employ the Double Dog-Leg (DDL) method, a trust-region based nonlinear optimization method, to jointly optimize the camera motion parameters (rotation and translation) and determine a global scale that all partial 3D structures should agree upon. These optimized motion parameters are used for constructing local structures, and the global scale is essential for multi-view registration after all these partial structures are built. In order to provide a good initial guess of the camera movement parameters and outlier-free 2D point correspondences for DDL, we also propose a two-stage scheme in which multi-RANSAC with a normalized eight-point algorithm is performed first and a few iterations of an over-determined five-point algorithm are then used to polish the results. Our experimental results using colonoscopy video show that the proposed scheme always produces more accurate outputs than the standard RANSAC scheme. Furthermore, since we have obtained many reliable point correspondences, time-consuming and error-prone registration methods such as iterative closest point (ICP) based algorithms can be replaced by a simple rigid-body transformation solver when merging partial structures into a larger model.
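The final step, replacing ICP with a direct rigid-body transformation solver given reliable point correspondences, amounts to the classical Kabsch/Procrustes solution. The sketch below is an illustrative stand-in for that solver; the variable names and the omission of scale estimation (the global scale is assumed to have been fixed already by the DDL stage) are assumptions.

# Kabsch-style rigid-body alignment between corresponding 3D point sets
# (illustrative stand-in for the "simple rigid-body transformation solver"
# mentioned above; src and dst are Nx3 arrays of matched points).
import numpy as np

def rigid_transform(src, dst):
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t                       # dst is approximately (R @ src.T).T + t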
Computer Vision and Pattern Recognition | 2008
Dan Koppel; Shiv Chandrasekaran; Yuan-Fang Wang
In this paper, a method for modeling the deformation behavior of organs and soft tissue is presented. The purpose is to predict the global deformation effect that arbitrary, time-varying external perturbations have on an organ. The perturbation might be caused by an instrument (e.g., through the surgeon's grasping and pinching actions), or it might come from organ-organ or organ-body-wall collisions in a bodily cavity. A methodology is proposed that employs (1) a surface representation of the deformation equations based on the Boundary-Element Method (BEM) and (2) recently developed linear-algebra techniques known as the "Hierarchical Semi-Separable" (HSS) matrix representation. We demonstrate that the proposed framework achieves an almost linear time complexity of O(n^1.14), a significant speed-up compared to the traditional O(n^3) schemes that employ brute-force linear-algebra solution methods based on Finite-Element Method (FEM) formulations. Furthermore, unlike some previous approaches, no restriction is placed on the external perturbation pattern or on how it can change over time.
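The speed-up rests on the fact that off-diagonal blocks of the dense BEM system matrix, which couple well-separated surface patches, are numerically low-rank, and that is exactly the structure HSS representations compress. The small numerical illustration below demonstrates that property with a generic 1/r kernel and synthetic point clusters; both are assumptions for illustration only, not the paper's formulation or solver.

# Numerical illustration (not the paper's solver): the block of a 1/r BEM-like
# kernel matrix coupling two well-separated point clusters has rapidly decaying
# singular values, i.e. it is numerically low-rank -- the property HSS exploits.
import numpy as np

rng = np.random.default_rng(0)
cluster_a = rng.random((300, 3))                              # points near the origin
cluster_b = rng.random((300, 3)) + np.array([5.0, 0.0, 0.0])  # well-separated cluster

# Off-diagonal coupling block K[i, j] = 1 / |a_i - b_j|.
diff = cluster_a[:, None, :] - cluster_b[None, :, :]
K = 1.0 / np.linalg.norm(diff, axis=2)

s = np.linalg.svd(K, compute_uv=False)
rank_1e8 = int(np.sum(s > 1e-8 * s[0]))
print(f"300x300 coupling block, numerical rank at 1e-8 tolerance: {rank_1e8}")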