Network

External collaborations at the country level.

Hotspot

Research topics where Hirotsugu Takabatake is active.
Publication


Featured research published by Hirotsugu Takabatake.


Medical Image Analysis | 2002

Tracking of a bronchoscope using epipolar geometry analysis and intensity-based image registration of real and virtual endoscopic images

Kensaku Mori; Daisuke Deguchi; Jun Sugiyama; Yasuhito Suenaga; Jun-ichiro Toriwaki; Calvin R. Maurer; Hirotsugu Takabatake; Hiroshi Natori

This paper describes a method for tracking the camera motion of a flexible endoscope, in particular a bronchoscope, using epipolar geometry analysis and intensity-based image registration. The method proposed here does not use a positional sensor attached to the endoscope. Instead, it tracks camera motion using real endoscopic (RE) video images obtained at the time of the procedure and X-ray CT images acquired before the endoscopic examination. A virtual endoscope system (VES) is used for generating virtual endoscopic (VE) images. The basic idea of this tracking method is to find the viewpoint and view direction of the VES that maximizes a similarity measure between the VE and RE images. To assist the parameter search process, camera motion is also computed directly from epipolar geometry analysis of the RE video images. The complete method consists of two steps: (a) rough estimation using epipolar geometry analysis and (b) precise estimation using intensity-based image registration. In the rough registration process, the method computes camera motion from optical flow patterns between two consecutive RE video image frames using epipolar geometry analysis. In the image registration stage, we search for the VES viewing parameters that generate the VE image that is most similar to the current RE image. The correlation coefficient and the mean square intensity difference are used for measuring image similarity. The result obtained in the rough estimation process is used for restricting the parameter search area. We applied the method to bronchoscopic video image data from three patients who had chest CT images. The method successfully tracked camera motion for about 600 consecutive frames in the best case. Visual inspection suggests that the tracking is sufficiently accurate for clinical use. Tracking results obtained by performing the method without the epipolar geometry analysis step were substantially worse. Although the method required about 20 s to process one frame, the results demonstrate the potential of image-based tracking for use in an endoscope navigation system.
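As an illustration of the intensity-based registration step, the sketch below (a hypothetical Python/NumPy example, not the authors' code) computes the two image similarity measures named in the abstract, assuming the real and virtual endoscopic frames are given as equally sized grayscale arrays.

```python
# Minimal sketch of the two similarity measures used in the registration step:
# the correlation coefficient and the mean square intensity difference between
# a real endoscopic (RE) frame and a rendered virtual endoscopic (VE) image.
# Assumption: both images are equally sized grayscale NumPy arrays.
import numpy as np

def correlation_coefficient(re_img: np.ndarray, ve_img: np.ndarray) -> float:
    """Pearson correlation between the two images (higher = more similar)."""
    re = re_img.astype(np.float64).ravel()
    ve = ve_img.astype(np.float64).ravel()
    re -= re.mean()
    ve -= ve.mean()
    denom = np.sqrt((re * re).sum() * (ve * ve).sum())
    return float((re * ve).sum() / denom) if denom > 0 else 0.0

def mean_square_difference(re_img: np.ndarray, ve_img: np.ndarray) -> float:
    """Mean squared intensity difference (lower = more similar)."""
    diff = re_img.astype(np.float64) - ve_img.astype(np.float64)
    return float(np.mean(diff * diff))
```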


Medical Image Analysis | 2009

Selective image similarity measure for bronchoscope tracking based on image registration

Daisuke Deguchi; Kensaku Mori; Marco Feuerstein; Takayuki Kitasaka; Calvin R. Maurer; Yasuhito Suenaga; Hirotsugu Takabatake; Masaki Mori; Hiroshi Natori

We propose a selective method of measurement for computing image similarities based on characteristic structure extraction and demonstrate its application to flexible endoscope navigation, in particular to a bronchoscope navigation system. Camera motion tracking is a fundamental function required for image-guided treatment or therapy systems. In recent years, ultra-tiny electromagnetic sensors have become commercially available, and many image-guided treatment or therapy systems use them to track camera position and orientation. However, due to space limitations, it is difficult to equip the tip of a bronchoscope with such a position sensor, especially in the case of ultra-thin bronchoscopes. Therefore, continuous image registration between real and virtual bronchoscopic images becomes an efficient tool for tracking the bronchoscope. Usually, image registration is done by calculating the image similarity between real and virtual bronchoscopic images. Since global schemes to measure image similarity, such as mutual information, squared gray-level difference, or cross correlation, average differences in intensity values over an entire region, they fail to track scenes in which few characteristic structures can be observed. The proposed method divides an entire image into a set of small subblocks and selects only those in which characteristic shapes are observed. Image similarity is then calculated within the selected subblocks. Selection is done by calculating feature values within each subblock. We applied the proposed method to eight pairs of chest X-ray CT images and bronchoscopic video images. The experimental results revealed that bronchoscope tracking using the proposed method could track up to 1600 consecutive bronchoscopic images (about 50 s) without external position sensors. Tracking performance was greatly improved in comparison with a standard method utilizing squared gray-level differences of the entire images.
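The subblock selection idea can be sketched as follows. This is a hypothetical Python/NumPy example that uses intensity variance as the per-block feature value; the abstract does not specify the exact feature, so that choice is an assumption.

```python
# Minimal sketch of the selective similarity measure (assumptions: square
# subblocks and intensity variance as the per-block "characteristic structure"
# feature; the paper's actual feature values may differ).
import numpy as np

def selective_similarity(re_img, ve_img, block=32, keep_ratio=0.25):
    """Average squared gray-level difference over only the most structured blocks."""
    h, w = re_img.shape
    blocks = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = re_img[y:y + block, x:x + block].astype(np.float64)
            blocks.append((patch.var(), y, x))          # feature value per subblock
    blocks.sort(reverse=True)                           # most characteristic first
    selected = blocks[:max(1, int(len(blocks) * keep_ratio))]
    cost = 0.0
    for _, y, x in selected:
        d = (re_img[y:y + block, x:x + block].astype(np.float64)
             - ve_img[y:y + block, x:x + block].astype(np.float64))
        cost += float(np.mean(d * d))
    return cost / len(selected)                         # lower = more similar
```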


Medical Image Computing and Computer Assisted Intervention | 2005

Hybrid bronchoscope tracking using a magnetic tracking sensor and image registration

Kensaku Mori; Daisuke Deguchi; Kenta Akiyama; Takayuki Kitasaka; Calvin R. Maurer; Yasuhito Suenaga; Hirotsugu Takabatake; Masaki Mori; Hiroshi Natori

In this paper, we propose a hybrid method for tracking a bronchoscope that uses a combination of magnetic sensor tracking and image registration. The position of a magnetic sensor placed in the working channel of the bronchoscope is provided by a magnetic tracking system. Because of respiratory motion, the magnetic sensor provides only the approximate position and orientation of the bronchoscope in the coordinate system of a CT image acquired before the examination. The sensor position and orientation is used as the starting point for an intensity-based registration between real bronchoscopic video images and virtual bronchoscopic images generated from the CT image. The output transformation of the image registration process is the position and orientation of the bronchoscope in the CT image. We tested the proposed method using a bronchial phantom model. Virtual breathing motion was generated to simulate respiratory motion. The proposed hybrid method successfully tracked the bronchoscope at a rate of approximately 1 Hz.
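A minimal sketch of the hybrid scheme follows, assuming placeholder render() and similarity() callables (not the authors' implementation): the magnetic-sensor pose seeds a simple local search over the six pose parameters.

```python
# Minimal sketch of sensor-initialized registration (assumptions: render(pose)
# returns a virtual bronchoscopic image for a 6-DOF pose vector and
# similarity() returns a score to maximize; both are hypothetical placeholders).
import numpy as np

def refine_pose(sensor_pose, re_frame, render, similarity,
                steps=(2.0, 2.0, 2.0, 0.05, 0.05, 0.05), iters=20):
    """Start at the magnetic-sensor pose and hill-climb each pose parameter."""
    pose = np.asarray(sensor_pose, dtype=np.float64).copy()
    best = similarity(re_frame, render(pose))
    for _ in range(iters):
        improved = False
        for i, step in enumerate(steps):              # x, y, z, then 3 rotations
            for delta in (+step, -step):
                trial = pose.copy()
                trial[i] += delta
                score = similarity(re_frame, render(trial))
                if score > best:
                    pose, best, improved = trial, score, True
        if not improved:
            break                                     # local optimum reached
    return pose, best
```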


Oncology | 2000

Combination Chemotherapy of Cisplatin, Ifosfamide, and Irinotecan with rhG-CSF Support in Patients with Brain Metastases from Non-Small Cell Lung Cancer

Akihisa Fujita; Seiji Fukuoka; Hirotsugu Takabatake; Shigeru Tagaki; Kyuhichiro Sekine

Background: Brain metastases develop frequently in patients with non-small cell lung cancer (NSCLC), and the prognosis for these patients is very poor. We evaluated the role of chemotherapy for patients with brain metastases from NSCLC. Methods: We analyzed 30 patients found to have brain metastases at diagnosis among 121 patients enrolled in three consecutive clinical trials of combination chemotherapy with cisplatin, ifosfamide, and irinotecan with rhG-CSF support for advanced NSCLC. Response in the brain lesions was evaluated by contrast-enhanced MRI scans after at least two courses. Results: Fourteen patients achieved a partial response (PR), 13 had no change (NC), and 1 had progressive disease (PD). Among patients with extracranial lesions, 18 had a PR and 11 had NC. The response rate in brain metastases was 50.0%, and that in extracranial primary and metastatic lesions was 62.1%. The median duration of response for intra- and extracranial lesions was 140 and 147 days, respectively. After completing chemotherapy, Gamma Knife radiosurgery was performed on 2 patients in remission and 8 patients at disease progression. The median survival time and 1-year survival rate were 382 days and 56.1%, respectively. Conclusions: Both the response rate and survival data in this retrospective study suggest a high degree of activity of this combination chemotherapy in patients with brain metastases from NSCLC.


Respiration | 1999

Allergic bronchopulmonary aspergillosis due to Aspergillus niger without bronchial asthma.

Hideaki Hoshino; Shigeru Tagaki; Hayato Kon; Takashi Shibusa; Hirotsugu Takabatake; Akihisa Fujita; Kyuichiroh Sekine; Shosaku Abe

A 65-year-old woman was admitted to our hospital with a dry cough and pulmonary infiltrates. Chest radiograph and CT revealed mucoid impaction and consolidations. Peripheral blood eosinophilia and elevated serum IgE were observed. Aspergillus niger was cultured repeatedly from her sputum, but A. fumigatus was not detected. Immediate skin test and specific IgE (RAST) to Aspergillus antigen were positive. Precipitating antibodies were confirmed against A. niger antigen, but not against A. fumigatus antigen. She had no asthmatic symptoms, and showed no bronchial hyperreactivity to methacholine. Thus, this case was diagnosed as allergic bronchopulmonary aspergillosis (ABPA) without bronchial asthma due to A. niger, an organism rarely found in ABPA. The administration of prednisone improved the symptoms and corrected the abnormal laboratory findings.


Medical Image Analysis | 2012

Development and comparison of new hybrid motion tracking for bronchoscopic navigation.

Xióngbiāo Luó; Marco Feuerstein; Daisuke Deguchi; Takayuki Kitasaka; Hirotsugu Takabatake; Kensaku Mori

This paper presents a new hybrid camera motion tracking method for bronchoscopic navigation combining SIFT, epipolar geometry analysis, Kalman filtering, and image registration. In a thorough evaluation, we compare it to state-of-the-art tracking methods. Our hybrid algorithm for predicting bronchoscope motion uses SIFT features and epipolar constraints to obtain an estimate for inter-frame pose displacements and Kalman filtering to find an estimate for the magnitude of the motion. We then execute bronchoscope tracking by performing image registration initialized by these estimates. This procedure registers the actual bronchoscopic video and the virtual camera images generated from 3D chest CT data taken prior to bronchoscopic examination for continuous bronchoscopic navigation. A comparative assessment of our new method and the state-of-the-art methods is performed on actual patient data and phantom data. Experimental results from both datasets demonstrate a significant performance boost of navigation using our new method. Our hybrid method is a promising means for bronchoscope tracking, and outperforms other methods based solely on Kalman filtering or image features and image registration.
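The Kalman filtering component can be illustrated with a minimal sketch; the one-dimensional constant-velocity model below is an illustrative assumption, not the state model reported in the paper.

```python
# Minimal sketch (assumption: the inter-frame motion magnitude is modelled
# with a 1-D constant-velocity Kalman filter; noise settings are illustrative).
import numpy as np

class MotionKalman:
    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(2)                  # state: [magnitude, velocity]
        self.P = np.eye(2)                    # state covariance
        self.F = np.array([[1.0, 1.0],        # constant-velocity transition
                           [0.0, 1.0]])
        self.H = np.array([[1.0, 0.0]])       # we observe the magnitude only
        self.Q = q * np.eye(2)                # process noise
        self.R = np.array([[r]])              # measurement noise

    def predict(self) -> float:
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return float(self.x[0])               # predicted inter-frame magnitude

    def update(self, measured_magnitude: float) -> None:
        y = measured_magnitude - (self.H @ self.x)[0]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K * y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
```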


Medical Image Computing and Computer Assisted Intervention | 2001

A Method for Tracking the Camera Motion of Real Endoscope by Epipolar Geometry Analysis and Virtual Endoscopy System

Kensaku Mori; Daisuke Deguchi; Junichi Hasegawa; Yasuhito Suenaga; Jun-ichiro Toriwaki; Hirotsugu Takabatake; Hiroshi Natori

This paper describes a method for tracking the camera motion of a real endoscope by epipolar geometry analysis and image-based registration. In an endoscope navigation system, which provides navigation information to a medical doctor during an endoscopic examination, tracking the motion of the endoscopic camera is one of the fundamental functions. With a flexible endoscope, it is hard to directly sense the position of the camera, since we cannot attach a positional sensor to the tip of the endoscope. The proposed method consists of three parts: (1) calculation of corresponding point-pairs of two time-adjacent frames, (2) coarse estimation of the camera motion by solving the epipolar equation, and (3) fine estimation by executing image-based registration between real and virtual endoscopic views. In the method, virtual endoscopic views are generated from X-ray CT images of the same patient as the real endoscopic images. To evaluate the method, we applied it to real endoscopic video images and X-ray CT images. The experimental results showed that the method could track the motion of the camera satisfactorily.
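The coarse estimation step can be sketched with a plain eight-point estimate of the essential matrix; the example below assumes the corresponding point-pairs are already in normalized camera coordinates and omits the robust details of the paper.

```python
# Minimal sketch of the coarse step (assumption: corresponding points are
# already converted to normalized camera coordinates; a plain eight-point
# estimate of the essential matrix, not the authors' full procedure).
import numpy as np

def essential_matrix(p1: np.ndarray, p2: np.ndarray) -> np.ndarray:
    """Estimate E such that x2^T E x1 = 0 from >= 8 correspondences (Nx2 each)."""
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0],            p1[:, 1],            np.ones(len(p1)),
    ])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(E)               # enforce the two-equal-singular-value
    E = U @ np.diag([1.0, 1.0, 0.0]) @ Vt     # constraint of an essential matrix
    return E
```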


Medical Imaging 2000: Physiology and Function from Multidimensional Images | 2000

Method for tracking camera motion of real endoscope by using virtual endoscopy system

Kensaku Mori; Yasuhito Suenaga; Jun-ichiro Toriwaki; Junichi Hasegawa; Kazuhiro Katada; Hirotsugu Takabatake; Hiroshi Natori

This paper proposes a method for tracking the camera motion of a real endoscope by using a virtual endoscopy system. One of the most important advantages of virtual endoscopy is that it can visualize information about other organs lying under the wall of the target organ. If it is possible to track the viewpoint and view direction of the real endoscope (fiberscope) during examination of the patient and to overlay information obtained by virtual endoscopy onto the real endoscopic image, a very useful system for assisting examination can be constructed. When a sequence of real endoscopic images is input, tracking is performed by searching for a sequence of viewpoints and view directions of the virtual endoscope that correspond to the camera motion of the real endoscope. First, we roughly specify the initial viewpoint and view direction corresponding to the first frame of the real endoscopic sequence. The method then searches for the best viewpoint and view direction by calculating a matching ratio between a generated virtual endoscopic image and the real endoscopic image within a defined search area. Camera motion is also estimated by analyzing the video images directly. We applied the proposed method to video images of real bronchoscopy and X-ray CT images. The results showed that the method could track the camera motion of the real endoscope.
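A minimal sketch of the restricted viewpoint search follows, with render() and matching_ratio() as hypothetical placeholders and a small grid of offsets around the previous estimate standing in for the defined search area.

```python
# Minimal sketch of the search-area scan (assumptions: render(viewpoint, direction)
# and matching_ratio() are placeholders; the search area is a small grid of
# offsets around the previous frame's estimated viewpoint and view direction).
import itertools
import numpy as np

def search_viewpoint(prev_vp, prev_dir, re_frame, render, matching_ratio,
                     vp_offsets=(-2.0, 0.0, 2.0), dir_offsets=(-0.05, 0.0, 0.05)):
    """Exhaustively test viewpoint/direction offsets and keep the best match."""
    best = (-np.inf, prev_vp, prev_dir)
    for dx, dy, dz in itertools.product(vp_offsets, repeat=3):
        vp = prev_vp + np.array([dx, dy, dz])
        for da, db in itertools.product(dir_offsets, repeat=2):
            view_dir = prev_dir + np.array([da, db, 0.0])
            score = matching_ratio(re_frame, render(vp, view_dir))
            if score > best[0]:
                best = (score, vp, view_dir)
    return best[1], best[2], best[0]
```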


Computer Assisted Radiology and Surgery | 2012

Automatic segmentation of pulmonary blood vessels and nodules based on local intensity structure analysis and surface propagation in 3D chest CT images

Bin Chen; Takayuki Kitasaka; Hirotoshi Honma; Hirotsugu Takabatake; Masaki Mori; Hiroshi Natori; Kensaku Mori

Purpose: Pulmonary nodules may indicate the early stage of lung cancer, and the progression of lung cancer causes associated changes in the shape and number of pulmonary blood vessels. Automatic segmentation of pulmonary nodules and blood vessels is desirable for chest computer-aided diagnosis (CAD) systems. Since pulmonary nodules and blood vessels are often attached to each other, conventional nodule detection methods usually produce many false positives (FPs) in blood vessel regions, and blood vessel segmentation methods may incorrectly segment nodules that are attached to blood vessels. A method to simultaneously and separately segment the pulmonary nodules and blood vessels was developed and tested. Method: A line structure enhancement (LSE) filter and a blob-like structure enhancement (BSE) filter were used to obtain the initial selection of vessel regions and nodule candidates, respectively. A front surface propagation (FSP) procedure was employed for precise segmentation of blood vessels and nodules. By employing a speed function that is fast at the initial vessel regions and slow at the nodule candidates, the front surface can be propagated to cover the blood vessel region while nodules are suppressed. Hence, the region covered by the front surface indicates the pulmonary blood vessels. The lung nodule regions were finally obtained by removing the nodule candidates covered by the front surface. Results: A test data set was assembled comprising 20 standard-dose chest CT images obtained from a local database and 20 low-dose chest CT images obtained from the Lung Image Database Consortium (LIDC). The average extraction rate of the pulmonary blood vessels was about 93%. The average TP rate of nodule detection was 95% with 9.8 FPs/case in standard-dose CT images and 91.5% with 10.5 FPs/case in low-dose CT images. Conclusion: A segmentation method for pulmonary blood vessels and nodules based on local intensity structure analysis and front surface propagation was developed. The method was shown to be feasible for nodule detection and vessel extraction in chest CAD.
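The front surface propagation idea can be sketched as a Dijkstra-style arrival-time computation whose speed is fast at vessel seeds and slow at nodule candidates; the masks, speed values, and stopping budget below are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch of front-surface propagation (assumptions: 3-D boolean seed
# and nodule-candidate masks from the LSE/BSE filters are given; the speed
# function is a simple fast/slow lookup, and propagation stops at a fixed
# arrival-time budget rather than the paper's exact criterion).
import heapq
import numpy as np

def propagate_front(seeds, nodule_candidates, fast=1.0, slow=0.05, t_max=200.0):
    """Dijkstra-style arrival-time propagation; returns the region swept by the front."""
    speed = np.where(nodule_candidates, slow, fast)
    arrival = np.full(seeds.shape, np.inf)
    heap = [(0.0, idx) for idx in zip(*np.nonzero(seeds))]
    for _, idx in heap:
        arrival[idx] = 0.0
    heapq.heapify(heap)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while heap:
        t, (z, y, x) = heapq.heappop(heap)
        if t > arrival[z, y, x] or t > t_max:
            continue
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if 0 <= nz < seeds.shape[0] and 0 <= ny < seeds.shape[1] and 0 <= nx < seeds.shape[2]:
                nt = t + 1.0 / speed[nz, ny, nx]       # slower speed = later arrival
                if nt < arrival[nz, ny, nx]:
                    arrival[nz, ny, nx] = nt
                    heapq.heappush(heap, (nt, (nz, ny, nx)))
    return arrival <= t_max                            # voxels covered by the front
```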


Oncology | 2000

Combination of Cisplatin, Ifosfamide, and Irinotecan with rhG-CSF Support for the Treatment of Refractory or Relapsed Small-Cell Lung Cancer

Akihisa Fujita; Hirotsugu Takabatake; Shigeru Tagaki; Kyuhichiro Sekine

Objective: This study was conducted in refractory or relapsed small-cell lung cancer to determine the activity and toxicity of the combination of cisplatin, ifosfamide, and irinotecan with rhG-CSF support. Methods: Eighteen patients entered the trial. The median chemotherapy-free interval was 3.1 (range 1.0–14.5) months. Cisplatin (20 mg/m2) and ifosfamide (1.5 g/m2) were administered on days 1–4, and irinotecan (60 mg/m2) was administered on days 1, 8, and 15. In patients who experienced grade 4 hematological toxicity during the prior chemotherapy, the doses of cisplatin and irinotecan were reduced to 15 and 50 mg/m2, respectively. After 10 patients had been entered, cisplatin and irinotecan were administered at doses of 15 and 50 mg/m2, respectively. This regimen was repeated every 4 weeks. rhG-CSF was administered subcutaneously at a dose of 50 μg/m2 from day 5 to day 18, except on the days of irinotecan treatment. Results: All patients could be assessed for response and toxicity. There were 1 complete and 16 partial responses, for an overall response rate of 94.4%. The median survival time of all patients was 339 days, and the 1-year survival rate was 47.5%. Hematological toxicities were significant: grade 4 neutropenia and thrombocytopenia were observed in 61 and 33% of the patients, respectively. Diarrhea was mild and transient. There were no treatment-related deaths. Conclusion: The combination of cisplatin, ifosfamide, and irinotecan with rhG-CSF support was highly active for the treatment of refractory or relapsed small-cell lung cancer.

Collaboration


Dive into Hirotsugu Takabatake's collaborations.

Top Co-Authors

Hiroshi Natori
Sapporo Medical University

Takayuki Kitasaka
Aichi Institute of Technology

Hirotoshi Honma
Sapporo Medical University

Shigeru Nawano
International University of Health and Welfare