Daisuke Deguchi
Nagoya University
Publication
Featured research published by Daisuke Deguchi.
Medical Image Analysis | 2002
Kensaku Mori; Daisuke Deguchi; Jun Sugiyama; Yasuhito Suenaga; Jun-ichiro Toriwaki; Calvin R. Maurer; Hirotsugu Takabatake; Hiroshi Natori
This paper describes a method for tracking the camera motion of a flexible endoscope, in particular a bronchoscope, using epipolar geometry analysis and intensity-based image registration. The method proposed here does not use a positional sensor attached to the endoscope. Instead, it tracks camera motion using real endoscopic (RE) video images obtained at the time of the procedure and X-ray CT images acquired before the endoscopic examination. A virtual endoscope system (VES) is used for generating virtual endoscopic (VE) images. The basic idea of this tracking method is to find the viewpoint and view direction of the VES that maximizes a similarity measure between the VE and RE images. To assist the parameter search process, camera motion is also computed directly from epipolar geometry analysis of the RE video images. The complete method consists of two steps: (a) rough estimation using epipolar geometry analysis and (b) precise estimation using intensity-based image registration. In the rough registration process, the method computes camera motion from optical flow patterns between two consecutive RE video image frames using epipolar geometry analysis. In the image registration stage, we search for the VES viewing parameters that generate the VE image that is most similar to the current RE image. The correlation coefficient and the mean square intensity difference are used for measuring image similarity. The result obtained in the rough estimation process is used for restricting the parameter search area. We applied the method to bronchoscopic video image data from three patients who had chest CT images. The method successfully tracked camera motion for about 600 consecutive frames in the best case. Visual inspection suggests that the tracking is sufficiently accurate for clinical use. Tracking results obtained by performing the method without the epipolar geometry analysis step were substantially worse. Although the method required about 20 s to process one frame, the results demonstrate the potential of image-based tracking for use in an endoscope navigation system.
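The core of the precise estimation step is a similarity-driven search over viewpoint parameters. The sketch below is a minimal illustration rather than the authors' implementation: it shows the two similarity measures named in the abstract (correlation coefficient and mean squared intensity difference) and a brute-force refinement around the rough epipolar-based estimate. `render_ve`, `rough_pose`, and `offsets` are hypothetical placeholders for the virtual endoscope renderer, the epipolar-based estimate, and the restricted search offsets.

```python
import numpy as np

def correlation_coefficient(re_img, ve_img):
    """Pearson correlation between a real endoscopic (RE) and a virtual endoscopic (VE) image."""
    re = re_img.astype(np.float64).ravel()
    ve = ve_img.astype(np.float64).ravel()
    re -= re.mean()
    ve -= ve.mean()
    denom = np.sqrt((re * re).sum() * (ve * ve).sum())
    return (re * ve).sum() / denom if denom > 0 else 0.0

def mean_square_difference(re_img, ve_img):
    """Mean squared intensity difference between RE and VE images (lower is better)."""
    diff = re_img.astype(np.float64) - ve_img.astype(np.float64)
    return float((diff * diff).mean())

def refine_pose(re_img, render_ve, rough_pose, offsets):
    """Search the neighbourhood of the rough (epipolar-based) pose for the
    viewpoint whose rendered VE image is most similar to the RE frame."""
    rough_pose = np.asarray(rough_pose, dtype=np.float64)
    best_pose, best_score = rough_pose, -np.inf
    for delta in offsets:                      # search area restricted by the rough estimate
        candidate = rough_pose + np.asarray(delta, dtype=np.float64)
        score = correlation_coefficient(re_img, render_ve(candidate))
        if score > best_score:
            best_pose, best_score = candidate, score
    return best_pose, best_score
```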
Medical Image Analysis | 2009
Daisuke Deguchi; Kensaku Mori; Marco Feuerstein; Takayuki Kitasaka; Calvin R. Maurer; Yasuhito Suenaga; Hirotsugu Takabatake; Masaki Mori; Hiroshi Natori
We propose a selective measurement method for computing image similarities based on characteristic structure extraction and demonstrate its application to flexible endoscope navigation, in particular to a bronchoscope navigation system. Camera motion tracking is a fundamental function required for image-guided treatment or therapy systems. In recent years, an ultra-tiny electromagnetic sensor became commercially available, and many image-guided treatment or therapy systems use this sensor for tracking the camera position and orientation. However, due to space limitations, it is difficult to equip the tip of a bronchoscope with such a position sensor, especially in the case of ultra-thin bronchoscopes. Therefore, continuous image registration between real and virtual bronchoscopic images becomes an efficient tool for tracking the bronchoscope. Usually, image registration is done by calculating the image similarity between real and virtual bronchoscopic images. Since global schemes for measuring image similarity, such as mutual information, squared gray-level difference, or cross correlation, average differences in intensity values over an entire region, they fail at tracking scenes where few characteristic structures can be observed. The proposed method divides an entire image into a set of small subblocks and selects only those in which characteristic shapes are observed. Image similarity is then calculated within the selected subblocks. Selection is done by calculating feature values within each subblock. We applied the proposed method to eight pairs of chest X-ray CT images and bronchoscopic video images. The experimental results revealed that bronchoscope tracking using the proposed method could track up to 1600 consecutive bronchoscopic images (about 50 s) without external position sensors. Tracking performance was greatly improved in comparison with a standard method using squared gray-level differences over the entire images.
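The selective similarity idea can be illustrated with a short sketch. This is not the authors' code: intensity variance stands in for the subblock feature values, and the block size and selection fraction are arbitrary assumptions.

```python
import numpy as np

def select_subblocks(image, block_size=16, top_fraction=0.25):
    """Split the image into subblocks and keep the ones showing the most
    characteristic structure, scored here by intensity variance."""
    h, w = image.shape
    blocks = []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            patch = image[y:y + block_size, x:x + block_size]
            blocks.append(((y, x), float(patch.var())))
    blocks.sort(key=lambda b: b[1], reverse=True)
    keep = max(1, int(len(blocks) * top_fraction))
    return [pos for pos, _ in blocks[:keep]]

def selective_similarity(re_img, ve_img, positions, block_size=16):
    """Squared gray-level difference accumulated only over the selected subblocks."""
    total = 0.0
    for y, x in positions:
        a = re_img[y:y + block_size, x:x + block_size].astype(np.float64)
        b = ve_img[y:y + block_size, x:x + block_size].astype(np.float64)
        total += ((a - b) ** 2).mean()
    return total / len(positions)
```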
medical image computing and computer assisted intervention | 2005
Kensaku Mori; Daisuke Deguchi; Kenta Akiyama; Takayuki Kitasaka; Calvin R. Maurer; Yasuhito Suenaga; Hirotsugu Takabatake; Masaki Mori; Hiroshi Natori
In this paper, we propose a hybrid method for tracking a bronchoscope that uses a combination of magnetic sensor tracking and image registration. The position of a magnetic sensor placed in the working channel of the bronchoscope is provided by a magnetic tracking system. Because of respiratory motion, the magnetic sensor provides only the approximate position and orientation of the bronchoscope in the coordinate system of a CT image acquired before the examination. The sensor position and orientation are used as the starting point for an intensity-based registration between real bronchoscopic video images and virtual bronchoscopic images generated from the CT image. The output transformation of the image registration process is the position and orientation of the bronchoscope in the CT image. We tested the proposed method using a bronchial phantom model. Virtual breathing motion was generated to simulate respiratory motion. The proposed hybrid method successfully tracked the bronchoscope at a rate of approximately 1 Hz.
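A minimal sketch of the hybrid idea, assuming a hypothetical rendering callback `render_ve` and a similarity function where higher is better: the magnetic-sensor pose seeds a simple greedy search that refines the pose against the CT-derived virtual images. The optimizer used in the actual system is not specified here.

```python
import numpy as np

def track_frame(re_img, render_ve, sensor_pose, similarity, step=1.0, iters=20):
    """Hill-climbing refinement of the magnetic-sensor pose.
    `sensor_pose` is a 6-vector (translation + rotation) reported in CT
    coordinates; the registration searches its neighbourhood for the pose
    whose rendered virtual image best matches the real frame."""
    pose = np.asarray(sensor_pose, dtype=np.float64)
    best = similarity(re_img, render_ve(pose))
    for _ in range(iters):
        improved = False
        for axis in range(pose.size):
            for sign in (+1.0, -1.0):
                candidate = pose.copy()
                candidate[axis] += sign * step
                score = similarity(re_img, render_ve(candidate))
                if score > best:
                    pose, best, improved = candidate, score, True
        if not improved:
            step *= 0.5          # shrink the search radius when no neighbour improves
    return pose, best
```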
Medical Image Analysis | 2012
Xióngbiāo Luó; Marco Feuerstein; Daisuke Deguchi; Takayuki Kitasaka; Hirotsugu Takabatake; Kensaku Mori
This paper presents a new hybrid camera motion tracking method for bronchoscopic navigation combining SIFT, epipolar geometry analysis, Kalman filtering, and image registration. In a thorough evaluation, we compare it to state-of-the-art tracking methods. Our hybrid algorithm for predicting bronchoscope motion uses SIFT features and epipolar constraints to obtain an estimate for inter-frame pose displacements and Kalman filtering to find an estimate for the magnitude of the motion. We then execute bronchoscope tracking by performing image registration initialized by these estimates. This procedure registers the actual bronchoscopic video and the virtual camera images generated from 3D chest CT data taken prior to bronchoscopic examination for continuous bronchoscopic navigation. A comparative assessment of our new method and the state-of-the-art methods is performed on actual patient data and phantom data. Experimental results from both datasets demonstrate a significant performance boost of navigation using our new method. Our hybrid method is a promising means for bronchoscope tracking, and outperforms other methods based solely on Kalman filtering or image features and image registration.
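The rough motion-prediction stage can be sketched with standard OpenCV calls, assuming calibrated grayscale frames and an intrinsic matrix `K`; this is an illustrative approximation of the SIFT-plus-epipolar step, not the authors' implementation, and the Kalman filtering and registration stages are omitted.

```python
import cv2
import numpy as np

def estimate_interframe_motion(prev_frame, curr_frame, K):
    """Rough inter-frame pose displacement from SIFT matches and the
    epipolar constraint; such an estimate would then seed Kalman filtering
    and intensity-based image registration."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_frame, None)
    kp2, des2 = sift.detectAndCompute(curr_frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test to discard ambiguous matches.
    good = [m[0] for m in pairs if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # Essential matrix with RANSAC, then the relative rotation and translation.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # translation is recovered only up to scale
```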
medical image computing and computer assisted intervention | 2001
Kensaku Mori; Daisuke Deguchi; Junichi Hasegawa; Yasuhito Suenaga; Jun-ichiro Toriwaki; Hirotsugu Takabatake; Hiroshi Natori
This paper describes a method for tracking the camera motion of a real endoscope by epipolar geometry analysis and image-based registration. In an endoscope navigation system, which provides navigation information to a medical doctor during an endoscopic examination, tracking the motion of the endoscopic camera is one of the fundamental functions. With a flexible endoscope, it is hard to directly sense the position of the camera, since we cannot attach a positional sensor to the tip of the endoscope. The proposed method consists of three parts: (1) calculation of corresponding point-pairs of two time-adjacent frames, (2) coarse estimation of the camera motion by solving the epipolar equation, and (3) fine estimation by executing image-based registration between real and virtual endoscopic views. In the method, virtual endoscopic views are generated from X-ray CT images of the same patient as the real endoscopic images. To evaluate the method, we applied it to real endoscopic video images and X-ray CT images. The experimental results showed that the method could track the motion of the camera satisfactorily.
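The coarse estimation step solves the epipolar equation from the corresponding point-pairs. A minimal NumPy sketch of the classical 8-point solution is given below, assuming the points have already been normalized by the camera intrinsics; the paper's exact formulation may differ.

```python
import numpy as np

def eight_point_essential(pts1, pts2):
    """Linear (8-point) estimate of the essential matrix E from corresponding
    points of two time-adjacent frames.  Each row of pts1/pts2 is (x, y) in
    normalized camera coordinates, and x2^T E x1 = 0 for true correspondences."""
    x1 = np.column_stack([pts1, np.ones(len(pts1))])
    x2 = np.column_stack([pts2, np.ones(len(pts2))])
    # Each correspondence contributes one row of the linear system A . vec(E) = 0.
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(pts1)),
    ])
    _, _, vt = np.linalg.svd(A)
    E = vt[-1].reshape(3, 3)
    # Enforce the essential-matrix constraint: two equal singular values, one zero.
    u, s, vt = np.linalg.svd(E)
    sigma = (s[0] + s[1]) / 2.0
    return u @ np.diag([sigma, sigma, 0.0]) @ vt
```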
IEEE Journal on Emerging and Selected Topics in Circuits and Systems | 2011
Tse-Wei Chen; Chih-Hao Sun; Hsiao-Hang Su; Shao-Yi Chien; Daisuke Deguchi; Ichiro Ide; Hiroshi Murase
A power-efficient K-Means hardware architecture that can automatically estimate the number of clusters in the clustering process is proposed. The contributions of this work include two main aspects. The first is the integration of hierarchical data sampling in the hardware to accelerate the clustering speed. The second is the development of the "Bayesian Information Criterion (BIC) Processor" to estimate the number of clusters for K-Means. The architecture of the BIC Processor is designed based on a simplification of the BIC computations, and the precision of the logarithm function is also analyzed. The experiments show that the proposed architecture can be employed in different multimedia applications, such as motion segmentation and edge-adaptive noise reduction. In addition, the gate count of the hardware is 51 K with 90-nm complementary metal-oxide-semiconductor (CMOS) technology. It is also shown that this work achieves high efficiency compared with a GPU, and the power consumption scales well with the number of clusters and the number of dimensions. The power consumption ranges between 10.72 and 12.95 mW in different modes when the operating frequency is 233 MHz.
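The BIC computation that the BIC Processor simplifies in hardware can be approximated in software as follows. This is a rough X-means-style formulation for reference only, under a spherical-Gaussian assumption; the exact expression implemented in the chip is not reproduced here.

```python
import numpy as np

def kmeans_bic(data, labels, centroids):
    """Approximate Bayesian Information Criterion of a K-Means result:
    spherical-Gaussian log-likelihood minus a penalty that grows with the
    number of clusters.  Higher BIC indicates a better choice of K."""
    n, d = data.shape
    k = centroids.shape[0]
    # Pooled spherical variance of the clustering.
    sq_err = sum(((data[labels == j] - centroids[j]) ** 2).sum() for j in range(k))
    variance = max(sq_err / max(n - k, 1) / d, 1e-12)
    log_likelihood = 0.0
    for j in range(k):
        n_j = int((labels == j).sum())
        if n_j == 0:
            continue
        log_likelihood += (
            n_j * np.log(n_j / n)
            - 0.5 * n_j * d * np.log(2.0 * np.pi * variance)
            - 0.5 * (n_j - 1) * d
        )
    free_params = k * (d + 1)             # centroid coordinates plus mixing weights
    return log_likelihood - 0.5 * free_params * np.log(n)
```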
Proceedings of SPIE | 2009
Marco Feuerstein; Daisuke Deguchi; Takayuki Kitasaka; Shingo Iwano; Kazuyoshi Imaizumi; Yoshinori Hasegawa; Yasuhito Suenaga; Kensaku Mori
Computed tomography (CT) of the chest is a very common staging investigation for the assessment of mediastinal, hilar, and intrapulmonary lymph nodes in the context of lung cancer. In the current clinical workflow, the detection and assessment of lymph nodes is usually performed manually, which can be error-prone and time-consuming. We therefore propose a method for the automatic detection of mediastinal, hilar, and intrapulmonary lymph node candidates in contrast-enhanced chest CT. Based on the segmentation of important mediastinal anatomy (bronchial tree, aortic arch) and making use of anatomical knowledge, we utilize Hessian eigenvalues to detect lymph node candidates. As lymph nodes can be characterized as blob-like structures of varying size and shape within a specific intensity interval, we can utilize these characteristics to significantly reduce the number of false positive candidates. We applied our method to 5 cases suspected to have lung cancer. The processing time of our algorithm did not exceed 6 minutes, and we achieved an average sensitivity of 82.1% and an average precision of 13.3%.
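The Hessian-based blob detection can be sketched at a single scale as follows, assuming an isotropic CT volume held in a NumPy array; the scale selection, intensity thresholds, and anatomical constraints used in the actual method are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blobness(volume, sigma=2.0):
    """Per-voxel blob likelihood from Hessian eigenvalues at one scale.
    Blob-like (lymph-node-like) bright structures have three negative
    eigenvalues of similar magnitude."""
    v = volume.astype(np.float64)
    # Second derivatives via Gaussian derivative filters (scale-space Hessian).
    H = np.empty(v.shape + (3, 3))
    for i, j in [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]:
        order = [0, 0, 0]
        order[i] += 1
        order[j] += 1
        d = gaussian_filter(v, sigma, order=order)
        H[..., i, j] = d
        H[..., j, i] = d
    eigs = np.linalg.eigvalsh(H)           # ascending eigenvalues l1 <= l2 <= l3
    l1, l3 = eigs[..., 0], eigs[..., 2]
    score = np.zeros_like(v)
    blob = l3 < 0                          # all three eigenvalues negative: bright blob
    # Reward strong, nearly isotropic responses (|l3| large and close to |l1|).
    score[blob] = (l3[blob] ** 2) / np.abs(l1[blob])
    return score
```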
international conference on computer vision | 2010
Masafumi Noda; Tomokazu Takahashi; Daisuke Deguchi; Ichiro Ide; Hiroshi Murase; Yoshiko Kojima; Takashi Naito
Obtaining an accurate vehicle position is important for intelligent vehicles in supporting driver safety and comfort. This paper proposes an accurate ego-localization method by matching in-vehicle camera images to an aerial image. There are two major problems in performing an accurate matching: (1) image difference between the aerial image and the in-vehicle camera image due to view-point and illumination conditions, and (2) occlusions in the in-vehicle camera image. To solve the first problem, we use the SURF image descriptor, which achieves robust feature-point matching for the various image differences. Additionally, we extract appropriate feature-points from each road-marking region on the road plane in both images. For the second problem, we utilize sequential multiple in-vehicle camera frames in the matching. The experimental results demonstrate that the proposed method improves both ego-localization accuracy and stability.
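The feature-point matching between the in-vehicle and aerial images can be illustrated with OpenCV's SURF implementation (available in the opencv-contrib package). The sketch below estimates a ground-plane homography from ratio-tested matches; the threshold values are assumptions, and the restriction to road-marking regions and the sequential multi-frame matching are omitted.

```python
import cv2
import numpy as np

def match_to_aerial(vehicle_img, aerial_img, ratio=0.7):
    """Match SURF feature points between an in-vehicle camera image and an
    aerial image, then estimate the mapping between them with RANSAC."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_v, des_v = surf.detectAndCompute(vehicle_img, None)
    kp_a, des_a = surf.detectAndCompute(aerial_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_v, des_a, k=2)
    good = [m[0] for m in pairs if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    src = np.float32([kp_v[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Road markings lie on the ground plane, so a homography relates the two views.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, int(inliers.sum()) if inliers is not None else 0
```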
medical image computing and computer assisted intervention | 2004
Jiro Nagao; Kensaku Mori; Tsutomu Enjouji; Daisuke Deguchi; Takayuki Kitasaka; Yasuhito Suenaga; Junichi Hasegawa; Jun-ichiro Toriwaki; Hirotsugu Takabatake; Hiroshi Natori
This paper describes a method for faster and more accurate bronchoscope camera tracking by image registration and camera motion prediction using the Kalman filter. The position and orientation of the bronchoscope camera at a frame of a bronchoscopic video are predicted by the Kalman filter. Because the Kalman filter gives good prediction for image registration, estimation of the position and orientation of the bronchoscope tip converges fast and accurately. In spite of the usefulness of Kalman filters, there have been no reports on tracking bronchoscope camera motion using the Kalman filter. Experiments on eight pairs of real bronchoscopic video and chest CT images showed that the proposed method could track camera motion 2.5 times as fast as our previous method. Experimental results showed that the motion prediction increased the number of frames correctly and continuously tracked by about 4.5%, and the processing time was reduced by about 60% with the search space restriction also proposed in this paper.
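The motion-prediction component can be illustrated with a constant-velocity Kalman filter over the camera position. The state model is an assumption made for illustration, since the abstract does not specify the state vector, and the rotational part is omitted.

```python
import numpy as np

class ConstantVelocityKalman:
    """Constant-velocity Kalman filter over the 3-D camera position, used to
    predict the pose at the next frame so that image registration starts near
    the optimum and its search space can be restricted."""

    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(6)                       # state: [position, velocity]
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3)                 # p_{t+1} = p_t + v_t
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.Q = q * np.eye(6)                     # process noise
        self.R = r * np.eye(3)                     # measurement noise

    def predict(self):
        """Predict the next camera position before registration."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, measured_position):
        """Correct the state with the position returned by image registration."""
        z = np.asarray(measured_position, dtype=np.float64)
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```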
international symposium on multimedia | 2009
Tomoki Okuoka; Tomokazu Takahashi; Daisuke Deguchi; Ichiro Ide; Hiroshi Murase
Wikipedia is a well-known online encyclopedia. However, most Wikipedia entries consist mainly of text, so enhancing them with multimedia information such as videos would make them considerably more informative. We are therefore working on a method to extend the information in Wikipedia entries by means of broadcast videos that explain those entries. In this work, we focus in particular on news videos and Wikipedia entries about news events. In order to extend the information in Wikipedia entries, it is necessary to link news videos to Wikipedia entries, so the main issue is a method that labels news videos with Wikipedia entries automatically. In this way, more detailed explanations accompanied by news videos can be presented, and the context of the news events becomes easier to understand. Through experiments, news videos were labeled with Wikipedia entries with a precision of 86% and a recall of 79%.
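The labeling step itself is not detailed in the abstract; purely as an illustrative baseline, the sketch below assigns Wikipedia entries to a news video by keyword overlap, with `video_keywords` and `wikipedia_entries` as hypothetical inputs.

```python
def label_video(video_keywords, wikipedia_entries, threshold=0.3):
    """Assign Wikipedia entries to a news video by Jaccard keyword overlap.
    `video_keywords` is a list of terms extracted from the video (e.g. from
    closed captions); `wikipedia_entries` maps entry titles to term lists."""
    video_terms = set(video_keywords)
    labels = []
    for title, entry_terms in wikipedia_entries.items():
        entry_set = set(entry_terms)
        union = video_terms | entry_set
        score = len(video_terms & entry_set) / len(union) if union else 0.0
        if score >= threshold:
            labels.append((title, score))
    return sorted(labels, key=lambda t: t[1], reverse=True)
```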