
Publication


Featured research published by Hiroshi Natori.


Medical Image Analysis | 2002

Tracking of a bronchoscope using epipolar geometry analysis and intensity-based image registration of real and virtual endoscopic images

Kensaku Mori; Daisuke Deguchi; Jun Sugiyama; Yasuhito Suenaga; Jun-ichiro Toriwaki; Calvin R. Maurer; Hirotsugu Takabatake; Hiroshi Natori

This paper describes a method for tracking the camera motion of a flexible endoscope, in particular a bronchoscope, using epipolar geometry analysis and intensity-based image registration. The method proposed here does not use a positional sensor attached to the endoscope. Instead, it tracks camera motion using real endoscopic (RE) video images obtained at the time of the procedure and X-ray CT images acquired before the endoscopic examination. A virtual endoscope system (VES) is used for generating virtual endoscopic (VE) images. The basic idea of this tracking method is to find the viewpoint and view direction of the VES that maximizes a similarity measure between the VE and RE images. To assist the parameter search process, camera motion is also computed directly from epipolar geometry analysis of the RE video images. The complete method consists of two steps: (a) rough estimation using epipolar geometry analysis and (b) precise estimation using intensity-based image registration. In the rough registration process, the method computes camera motion from optical flow patterns between two consecutive RE video image frames using epipolar geometry analysis. In the image registration stage, we search for the VES viewing parameters that generate the VE image that is most similar to the current RE image. The correlation coefficient and the mean square intensity difference are used for measuring image similarity. The result obtained in the rough estimation process is used for restricting the parameter search area. We applied the method to bronchoscopic video image data from three patients who had chest CT images. The method successfully tracked camera motion for about 600 consecutive frames in the best case. Visual inspection suggests that the tracking is sufficiently accurate for clinical use. Tracking results obtained by performing the method without the epipolar geometry analysis step were substantially worse. Although the method required about 20 s to process one frame, the results demonstrate the potential of image-based tracking for use in an endoscope navigation system.
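The two similarity measures named in the abstract, the correlation coefficient and the mean square intensity difference, are standard quantities and can be sketched directly. The snippet below is a minimal NumPy illustration, not the authors' implementation:

```python
import numpy as np

def correlation_coefficient(real_img, virtual_img):
    """Correlation coefficient between a real endoscopic (RE) frame and a
    rendered virtual endoscopic (VE) image, the first similarity measure
    named in the abstract."""
    r = real_img.astype(float).ravel()
    v = virtual_img.astype(float).ravel()
    r -= r.mean()
    v -= v.mean()
    denom = np.sqrt((r * r).sum() * (v * v).sum())
    return float((r * v).sum() / denom) if denom > 0 else 0.0

def mean_square_difference(real_img, virtual_img):
    """Mean square intensity difference, the second similarity measure."""
    d = real_img.astype(float) - virtual_img.astype(float)
    return float((d * d).mean())

# Toy check: an image is perfectly correlated with a scaled copy of itself.
img = np.arange(16.0).reshape(4, 4)
assert abs(correlation_coefficient(img, 2 * img) - 1.0) < 1e-9
assert mean_square_difference(img, img) == 0.0
```

In the registration step, the VES viewing parameters would be varied to maximize the first measure (or minimize the second) against the current RE frame.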


Medical Image Analysis | 2009

Selective image similarity measure for bronchoscope tracking based on image registration

Daisuke Deguchi; Kensaku Mori; Marco Feuerstein; Takayuki Kitasaka; Calvin R. Maurer; Yasuhito Suenaga; Hirotsugu Takabatake; Masaki Mori; Hiroshi Natori

We propose a selective measurement method for computing image similarities based on characteristic structure extraction and demonstrate its application to flexible endoscope navigation, in particular to a bronchoscope navigation system. Camera motion tracking is a fundamental function required for image-guided treatment or therapy systems. In recent years, ultra-tiny electromagnetic sensors have become commercially available, and many image-guided treatment or therapy systems use such sensors for tracking the camera position and orientation. However, due to space limitations, it is difficult to equip the tip of a bronchoscope with a position sensor, especially in the case of ultra-thin bronchoscopes. Therefore, continuous image registration between real and virtual bronchoscopic images becomes an efficient tool for tracking the bronchoscope. Usually, image registration is done by calculating the image similarity between real and virtual bronchoscopic images. Because global schemes for measuring image similarity, such as mutual information, squared gray-level difference, or cross-correlation, average differences in intensity values over an entire region, they fail to track scenes in which few characteristic structures can be observed. The proposed method divides an entire image into a set of small subblocks and selects only those in which characteristic shapes are observed. Image similarity is then calculated only within the selected subblocks. Selection is done by calculating feature values within each subblock. We applied the proposed method to eight pairs of chest X-ray CT images and bronchoscopic video images. The experimental results revealed that bronchoscope tracking using the proposed method could track up to 1600 consecutive bronchoscopic images (about 50 s) without external position sensors. Tracking performance was greatly improved in comparison with a standard method utilizing squared gray-level differences over the entire images.
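The subblock-selection idea can be sketched in a few lines. In this sketch, intensity variance stands in for the paper's per-subblock feature values, and the block size and kept fraction are illustrative parameters, not the authors' settings:

```python
import numpy as np

def selective_similarity(real_img, virtual_img, block=8, keep_frac=0.25):
    """Split the real image into subblocks, keep only the most
    'characteristic' ones (here: highest intensity variance, a stand-in
    for the paper's feature values), and average the squared gray-level
    difference over the kept subblocks only."""
    h, w = real_img.shape
    scores, diffs = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            rb = real_img[y:y + block, x:x + block].astype(float)
            vb = virtual_img[y:y + block, x:x + block].astype(float)
            scores.append(rb.var())           # feature value of the subblock
            diffs.append(((rb - vb) ** 2).mean())
    scores, diffs = np.array(scores), np.array(diffs)
    k = max(1, int(len(scores) * keep_frac))
    chosen = np.argsort(scores)[-k:]          # most characteristic subblocks
    return float(diffs[chosen].mean())
```

Flat, featureless regions (where global measures are dominated by uninformative intensity differences) are simply excluded from the similarity computation.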


medical image computing and computer assisted intervention | 2005

Hybrid bronchoscope tracking using a magnetic tracking sensor and image registration

Kensaku Mori; Daisuke Deguchi; Kenta Akiyama; Takayuki Kitasaka; Calvin R. Maurer; Yasuhito Suenaga; Hirotsugu Takabatake; Masaki Mori; Hiroshi Natori

In this paper, we propose a hybrid method for tracking a bronchoscope that uses a combination of magnetic sensor tracking and image registration. The position of a magnetic sensor placed in the working channel of the bronchoscope is provided by a magnetic tracking system. Because of respiratory motion, the magnetic sensor provides only the approximate position and orientation of the bronchoscope in the coordinate system of a CT image acquired before the examination. The sensor position and orientation are used as the starting point for an intensity-based registration between real bronchoscopic video images and virtual bronchoscopic images generated from the CT image. The output transformation of the image registration process is the position and orientation of the bronchoscope in the CT image. We tested the proposed method using a bronchial phantom model. Virtual breathing motion was generated to simulate respiratory motion. The proposed hybrid method successfully tracked the bronchoscope at a rate of approximately 1 Hz.
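The hybrid idea, a rough sensor pose seeding a registration-based refinement, can be sketched with a generic local search. The `similarity` callback and the greedy hill climb below are hypothetical stand-ins for the paper's virtual-view renderer and intensity-based registration, not its actual algorithm:

```python
import numpy as np

def refine_pose(sensor_pose, similarity, step=1.0, iters=50):
    """The magnetic sensor supplies only a rough pose; use it to seed a
    local search that maximizes image similarity between the real frame
    and a rendered virtual view. `similarity` maps a pose vector to a
    score; an axis-wise hill climb stands in for the registration."""
    pose = np.asarray(sensor_pose, dtype=float)
    best = similarity(pose)
    for _ in range(iters):
        improved = False
        for axis in range(pose.size):
            for delta in (+step, -step):
                cand = pose.copy()
                cand[axis] += delta
                s = similarity(cand)
                if s > best:
                    pose, best, improved = cand, s, True
        if not improved:
            step *= 0.5               # shrink the search radius
            if step < 1e-3:
                break
    return pose
```

Because the search starts near the true pose, only a small neighborhood has to be explored each frame, which is what makes the hybrid scheme fast enough for continuous tracking.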


Journal of Clinical Ultrasound | 1998

Carcinoma arising from ectopic pancreas in the stomach: Endosonographic detection of malignant change

Hideki Ura; Ryuichi Denno; Koichi Hirata; Akiko Saeki; Kenichiro Hirata; Hiroshi Natori

We present a case of a submucosal tumor in the stomach that was suspicious for malignancy on preoperative endosonography. The resected tumor was histologically diagnosed as a ductal adenocarcinoma that originated in ectopic pancreatic tissue in the gastric wall. Although malignant transformation in ectopic pancreas is extremely rare, it remains an important consideration in the differential diagnosis of gastric submucosal masses.


World Journal of Surgery | 2000

Endoscopic ultrasonography of the esophagus

Morimichi Fukuda; Kenichiro Hirata; Hiroshi Natori

Endoscopic ultrasonography (EUS) is a generally accepted technique for the preoperative staging of malignant tumors in the upper and lower gastrointestinal tracts. In particular, EUS has been considered the method of choice in diagnosing esophageal carcinoma due to the relative ease in performing the examination and the accuracy of staging based on high-resolution ultrasonic imaging from within the lumen of the esophagus. This comprehensive review covers currently available EUS instruments, image characteristics of esophageal carcinoma, and images by the recently introduced miniprobe scanner. The role of the method in diagnosing superficial esophageal carcinoma and the possible treatment by endoscopic mucosal resection of this particular disease entity are discussed.


medical image computing and computer assisted intervention | 2001

A Method for Tracking the Camera Motion of Real Endoscope by Epipolar Geometry Analysis and Virtual Endoscopy System

Kensaku Mori; Daisuke Deguchi; Junichi Hasegawa; Yasuhito Suenaga; Jun-ichiro Toriwaki; Hirotsugu Takabatake; Hiroshi Natori

This paper describes a method for tracking the camera motion of a real endoscope by epipolar geometry analysis and image-based registration. In an endoscope navigation system, which provides navigation information to a medical doctor during an endoscopic examination, tracking the motion of the endoscopic camera is one of the fundamental functions. With a flexible endoscope, it is hard to directly sense the position of the camera, since we cannot attach a positional sensor at the tip of the endoscope. The proposed method consists of three parts: (1) calculation of corresponding point-pairs between two time-adjacent frames, (2) coarse estimation of the camera motion by solving the epipolar equation, and (3) fine estimation by executing image-based registration between real and virtual endoscopic views. In this method, virtual endoscopic views are generated from X-ray CT images of the same patient. To evaluate the method, we applied it to real endoscopic video and X-ray CT images. The experimental results showed that the method could track the motion of the camera satisfactorily.
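The epipolar equation solved in the coarse step relates corresponding normalized image points x1, x2 through the essential matrix E = [t]_x R built from the camera rotation R and translation t. A minimal NumPy check of this relation, with an invented camera motion and 3D point:

```python
import numpy as np

def essential_matrix(R, t):
    """E = [t]_x R, so that corresponding normalized (homogeneous) image
    points satisfy the epipolar equation x2^T E x1 = 0."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    return tx @ R

# Toy check: project one 3D point into both cameras and verify the constraint.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta), np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.1, 0.0, 0.05])
X = np.array([1.0, 2.0, 5.0])
x1 = X / X[2]                    # normalized image point in camera 1
X2 = R @ X + t                   # the same point in camera-2 coordinates
x2 = X2 / X2[2]
E = essential_matrix(R, t)
assert abs(x2 @ E @ x1) < 1e-9   # epipolar equation holds
```

In the method, the roles are reversed: the point correspondences between time-adjacent frames are given, and R and t (the camera motion) are recovered by solving this equation.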


Medical Imaging 2000: Physiology and Function from Multidimensional Images | 2000

Method for tracking camera motion of real endoscope by using virtual endoscopy system

Kensaku Mori; Yasuhito Suenaga; Jun-ichiro Toriwaki; Junichi Hasegawa; Kazuhiro Katada; Hirotsugu Takabatake; Hiroshi Natori

This paper proposes a method for tracking the camera motion of the real endoscope by using the virtual endoscopy system. One of the most important advantages of the virtual endoscopy is that the virtual endoscopy can visualize information of other organs that are existing under the wall of the target organ. If it is possible to track the viewpoint and the view direction of real endoscopy (fiberscope) in the examination of the patient and to display various information obtained by the virtual endoscopy onto the real endoscopic image, we construct a very useful system for assisting examination. When a sequence of real endoscopic images is inputted, tracking is performed by searching a sequence of viewpoints and view directions of virtual endoscope that correspond to camera motions of the real endoscope. First we roughly specify initial viewpoints and view directions that correspond to the first frame of the real endoscopic image. The method searches the best viewpoint and view direction by calculating matching ratio between a generated virtual endoscopic image and a real endoscopic image within the defined search area. Camera motion is also estimated by analyzing video images directly. We have applied the proposed method to video images of real bronchoscopy and X-ray CT images. The result showed that the method could track the camera motion of real endoscope.


computer assisted radiology and surgery | 2012

Automatic segmentation of pulmonary blood vessels and nodules based on local intensity structure analysis and surface propagation in 3D chest CT images

Bin Chen; Takayuki Kitasaka; Hirotoshi Honma; Hirotsugu Takabatake; Masaki Mori; Hiroshi Natori; Kensaku Mori

Purpose: Pulmonary nodules may indicate the early stage of lung cancer, and the progression of lung cancer causes associated changes in the shape and number of pulmonary blood vessels. The automatic segmentation of pulmonary nodules and blood vessels is desirable for chest computer-aided diagnosis (CAD) systems. Since pulmonary nodules and blood vessels are often attached to each other, conventional nodule detection methods usually produce many false positives (FPs) in the blood vessel regions, and blood vessel segmentation methods may incorrectly segment the nodules that are attached to the blood vessels. A method to simultaneously and separately segment the pulmonary nodules and blood vessels was developed and tested.

Method: A line structure enhancement (LSE) filter and a blob-like structure enhancement (BSE) filter were used for the initial selection of vessel regions and nodule candidates, respectively. A front surface propagation (FSP) procedure was employed for precise segmentation of blood vessels and nodules. By employing a speed function that is fast in the initial vessel regions and slow at the nodule candidates, the front surface can be propagated to cover the blood vessel region while suppressing the nodules. Hence, the region covered by the front surface indicates the pulmonary blood vessels. The lung nodule regions were finally obtained by removing the nodule candidates that are covered by the front surface.

Result: A test data set was assembled including 20 standard-dose chest CT images obtained from a local database and 20 low-dose chest CT images obtained from the Lung Image Database Consortium (LIDC). The average extraction rate of the pulmonary blood vessels was about 93%. The average TP rate of nodule detection was 95% with 9.8 FPs/case in standard-dose CT images and 91.5% with 10.5 FPs/case in low-dose CT images.

Conclusion: A segmentation method for pulmonary blood vessels and nodules based on local intensity structure analysis and front surface propagation was developed. The method was shown to be feasible for nodule detection and vessel extraction in chest CAD.
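Line (vessel) and blob (nodule) enhancement filters of this kind typically discriminate using the three sorted Hessian eigenvalues of a bright structure: a line has one eigenvalue near zero and two strongly negative, while a blob has all three strongly negative. The scores below are illustrative ratios built on that rule of thumb, not the paper's LSE/BSE filter responses:

```python
import numpy as np

def structure_scores(eigvals):
    """Given the three Hessian eigenvalues of a bright structure, return
    illustrative (line, blob) scores: a tube (vessel) has l1 ~ 0 and
    l2, l3 << 0; a blob (nodule) has all three strongly negative."""
    l1, l2, l3 = sorted(eigvals, reverse=True)   # l1 >= l2 >= l3
    if l3 >= 0:                                  # not a bright structure
        return 0.0, 0.0
    ratio = min(1.0, abs(l1) / abs(l3))          # isotropy of the structure
    line = abs(l3) * (1.0 - ratio)               # elongated -> line-like
    blob = abs(l3) * ratio                       # isotropic -> blob-like
    return line, blob
```

In the paper, the two filter responses seed the vessel regions and nodule candidates, and the front surface propagation then resolves the regions where the two structures touch.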


medical image computing and computer assisted intervention | 2004

Fast and Accurate Bronchoscope Tracking Using Image Registration and Motion Prediction

Jiro Nagao; Kensaku Mori; Tsutomu Enjouji; Daisuke Deguchi; Takayuki Kitasaka; Yasuhito Suenaga; Junichi Hasegawa; Jun-ichiro Toriwaki; Hirotsugu Takabatake; Hiroshi Natori

This paper describes a method for faster and more accurate bronchoscope camera tracking by image registration and camera motion prediction using the Kalman filter. The position and orientation of the bronchoscope camera at a frame of a bronchoscopic video are predicted by the Kalman filter. Because the Kalman filter gives good prediction for image registration, estimation of the position and orientation of the bronchoscope tip converges fast and accurately. In spite of the usefulness of Kalman filters, there have been no reports on tracking bronchoscope camera motion using the Kalman filter. Experiments on eight pairs of real bronchoscopic video and chest CT images showed that the proposed method could track camera motion 2.5 times as fast as our previous method. Experimental results showed that the motion prediction increased the number of frames correctly and continuously tracked by about 4.5%, and the processing time was reduced by about 60% with the search space restriction also proposed in this paper.
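The prediction step that seeds the registration can be sketched with a textbook linear Kalman filter and a constant-velocity model for a single pose coordinate. The matrices below are illustrative, not the paper's state model:

```python
import numpy as np

def kalman_predict_update(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter. x: state,
    P: covariance, z: measurement; F, H, Q, R are the usual transition,
    observation, process-noise and measurement-noise matrices."""
    # Predict: this prior is what seeds (and restricts) the registration search.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update, treating the registered pose as the measurement.
    y = z - H @ x_pred                          # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, x_pred

# Constant-velocity model for one pose coordinate: state = [position, velocity].
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x, P = np.array([0.0, 1.0]), np.eye(2)
x, P, x_pred = kalman_predict_update(x, P, np.array([1.05]), F, H, Q, R)
```

Because the predicted pose is already close to the true one, the registration converges in fewer iterations, which is the source of the reported speedup.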


medical image computing and computer assisted intervention | 2009

Automated Anatomical Labeling of Bronchial Branches Extracted from CT Datasets Based on Machine Learning and Combination Optimization and Its Application to Bronchoscope Guidance

Kensaku Mori; Shunsuke Ota; Daisuke Deguchi; Takayuki Kitasaka; Yasuhito Suenaga; Shingo Iwano; Yoshinori Hasegawa; Hirotsugu Takabatake; Masaki Mori; Hiroshi Natori

This paper presents a method for the automated anatomical labeling of bronchial branches extracted from 3D CT images based on machine learning and combination optimization. We also show an application of the anatomical labeling in a bronchoscopy guidance system. The procedure consists of four steps: (a) extraction of tree structures from the bronchus regions segmented from CT images, (b) construction of AdaBoost classifiers, (c) computation of candidate names for all branches using the classifiers, and (d) selection of the best combination of anatomical names. We applied the proposed method to 90 3D CT datasets. The experimental results showed that the proposed method can assign correct anatomical names to 86.9% of the bronchial branches down to the sub-segmental branches. We also overlaid the anatomical names of bronchial branches on real bronchoscopic views to guide real bronchoscopy.
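Step (d) can be sketched as a tiny assignment problem: each branch gets a classifier score for every candidate name, and the best joint labeling is chosen under a no-duplicate-names constraint. The branch names, scores, and brute-force search below are illustrative; the paper's optimization handles full bronchial trees:

```python
import itertools
import numpy as np

def best_label_combination(score, names):
    """score[i, j]: classifier score of candidate name j for branch i.
    Choose the assignment maximizing the total score with no two branches
    sharing a name; brute force over permutations is fine at toy size."""
    n = score.shape[0]
    best_total, best_assign = -np.inf, None
    for perm in itertools.permutations(range(len(names)), n):
        total = sum(score[i, p] for i, p in enumerate(perm))
        if total > best_total:
            best_total, best_assign = total, perm
    return [names[p] for p in best_assign]
```

The point of the combination step is visible even at this scale: picking each branch's top-scoring name independently can assign the same name twice, whereas the joint optimum resolves the conflict.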

Collaboration


Dive into Hiroshi Natori's collaborations.

Top Co-Authors


Takayuki Kitasaka

Aichi Institute of Technology
