Jun-ichiro Toriwaki
Chukyo University
Publication
Featured research published by Jun-ichiro Toriwaki.
Medical Image Analysis | 2002
Kensaku Mori; Daisuke Deguchi; Jun Sugiyama; Yasuhito Suenaga; Jun-ichiro Toriwaki; Calvin R. Maurer; Hirotsugu Takabatake; Hiroshi Natori
This paper describes a method for tracking the camera motion of a flexible endoscope, in particular a bronchoscope, using epipolar geometry analysis and intensity-based image registration. The method proposed here does not use a positional sensor attached to the endoscope. Instead, it tracks camera motion using real endoscopic (RE) video images obtained at the time of the procedure and X-ray CT images acquired before the endoscopic examination. A virtual endoscope system (VES) is used for generating virtual endoscopic (VE) images. The basic idea of this tracking method is to find the viewpoint and view direction of the VES that maximizes a similarity measure between the VE and RE images. To assist the parameter search process, camera motion is also computed directly from epipolar geometry analysis of the RE video images. The complete method consists of two steps: (a) rough estimation using epipolar geometry analysis and (b) precise estimation using intensity-based image registration. In the rough registration process, the method computes camera motion from optical flow patterns between two consecutive RE video image frames using epipolar geometry analysis. In the image registration stage, we search for the VES viewing parameters that generate the VE image that is most similar to the current RE image. The correlation coefficient and the mean square intensity difference are used for measuring image similarity. The result obtained in the rough estimation process is used for restricting the parameter search area. We applied the method to bronchoscopic video image data from three patients who had chest CT images. The method successfully tracked camera motion for about 600 consecutive frames in the best case. Visual inspection suggests that the tracking is sufficiently accurate for clinical use. Tracking results obtained by performing the method without the epipolar geometry analysis step were substantially worse. Although the method required about 20 s to process one frame, the results demonstrate the potential of image-based tracking for use in an endoscope navigation system.
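The fine registration stage can be illustrated with a minimal sketch in Python: a greedy local search over six camera pose parameters that maximizes the correlation coefficient between the real frame and a virtual rendering. The `render_ve(pose)` callback, the pose parameterization, and the step sizes are assumptions for illustration, not the paper's actual optimizer.

```python
import numpy as np

def correlation_coefficient(re_img: np.ndarray, ve_img: np.ndarray) -> float:
    """Normalized cross-correlation between a real endoscopic (RE) frame
    and a virtual endoscopic (VE) rendering of the same size."""
    a = re_img.astype(float).ravel()
    b = ve_img.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def refine_pose(re_img, render_ve, pose0, deltas):
    """Greedy local search: perturb each of the six pose parameters and
    keep the candidate whose VE rendering is most similar to the RE frame.
    `render_ve(pose)` is a hypothetical callback wrapping the virtual
    endoscope renderer; `pose` is a 6-vector (translation + rotation)."""
    best_pose = pose0
    best_score = correlation_coefficient(re_img, render_ve(pose0))
    improved = True
    while improved:
        improved = False
        for i in range(6):
            for sign in (-1.0, 1.0):
                cand = best_pose.copy()
                cand[i] += sign * deltas[i]
                score = correlation_coefficient(re_img, render_ve(cand))
                if score > best_score:
                    best_pose, best_score, improved = cand, score, True
    return best_pose, best_score
```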
Medical Image Computing and Computer-Assisted Intervention | 2001
Kensaku Mori; Daisuke Deguchi; Junichi Hasegawa; Yasuhito Suenaga; Jun-ichiro Toriwaki; Hirotsugu Takabatake; Hiroshi Natori
This paper describes a method for tracking the camera motion of a real endoscope by epipolar geometry analysis and image-based registration. In an endoscope navigation system, which provides navigation information to a medical doctor during an endoscopic examination, tracking the motion of the endoscopic camera is one of the fundamental functions. With a flexible endoscope, it is hard to sense the position of the camera directly, since a positional sensor cannot be attached at the tip of the endoscope. The proposed method consists of three parts: (1) calculation of corresponding point pairs between two time-adjacent frames, (2) coarse estimation of the camera motion by solving the epipolar equation, and (3) fine estimation by executing image-based registration between real and virtual endoscopic views. In the method, virtual endoscopic views are generated from X-ray CT images of the same patient who underwent the real endoscopic examination. To evaluate the method, we applied it to real endoscopic video images and X-ray CT images. The experimental results showed that the method could track the motion of the camera satisfactorily.
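The coarse estimation steps, (1) corresponding point pairs and (2) solving the epipolar equation, can be sketched with standard OpenCV routines. The feature tracker, the RANSAC settings, and the assumption of a calibrated intrinsic matrix `K` are illustrative stand-ins for the paper's own computations.

```python
import cv2
import numpy as np

def coarse_camera_motion(prev_frame, curr_frame, K):
    """Estimate relative camera motion between two time-adjacent
    grayscale frames from point correspondences and the epipolar
    constraint. `K` is the 3x3 intrinsic matrix of the endoscope camera
    (assumed known from calibration). Returns rotation R and a
    unit-length translation t (scale is not recoverable from epipolar
    geometry alone)."""
    # (1) corresponding point pairs via sparse optical flow
    pts0 = cv2.goodFeaturesToTrack(prev_frame, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_frame, curr_frame,
                                               pts0, None)
    good0 = pts0[status.ravel() == 1]
    good1 = pts1[status.ravel() == 1]
    # (2) solve the epipolar equation robustly for the essential matrix
    E, mask = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    # decompose E into rotation and translation direction
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=mask)
    return R, t
```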
Medical Image Computing and Computer-Assisted Intervention | 2004
Jiro Nagao; Kensaku Mori; Tsutomu Enjouji; Daisuke Deguchi; Takayuki Kitasaka; Yasuhito Suenaga; Junichi Hasegawa; Jun-ichiro Toriwaki; Hirotsugu Takabatake; Hiroshi Natori
This paper describes a method for faster and more accurate bronchoscope camera tracking using image registration and camera motion prediction with a Kalman filter. The position and orientation of the bronchoscope camera at each frame of a bronchoscopic video are predicted by the Kalman filter. Because the Kalman filter gives a good initial prediction for image registration, the estimation of the position and orientation of the bronchoscope tip converges quickly and accurately. Despite the usefulness of Kalman filters, there have been no previous reports on tracking bronchoscope camera motion with one. Experiments on eight pairs of real bronchoscopic videos and chest CT images showed that the proposed method could track camera motion 2.5 times as fast as our previous method. The motion prediction increased the number of frames tracked correctly and continuously by about 4.5%, and the processing time was reduced by about 60% with the search-space restriction also proposed in this paper.
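A constant-velocity Kalman filter over a 6-DOF camera pose, of the kind that could provide such motion prediction, might look like the following sketch; the state layout, noise levels, and class name are assumptions rather than the paper's implementation.

```python
import numpy as np

class PoseKalman:
    """Constant-velocity Kalman filter over a 6-DOF camera pose
    (3 translations + 3 rotation parameters); a simplified sketch of
    motion prediction used to seed image registration."""
    def __init__(self, q=1e-3, r=1e-2):
        n = 6
        self.x = np.zeros(2 * n)              # state: [pose, pose velocity]
        self.P = np.eye(2 * n)                # state covariance
        self.F = np.eye(2 * n)                # transition: pose += velocity
        self.F[:n, n:] = np.eye(n)
        self.H = np.hstack([np.eye(n), np.zeros((n, n))])  # observe pose only
        self.Q = q * np.eye(2 * n)            # process noise
        self.R = r * np.eye(n)                # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.H @ self.x                # predicted pose as search seed

    def update(self, measured_pose):
        y = measured_pose - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        Kg = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + Kg @ y
        self.P = (np.eye(len(self.x)) - Kg @ self.H) @ self.P
```

On each frame, `predict()` would supply the starting pose for image registration, and the registered pose would be fed back through `update()`.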
Medical Imaging 2002: Image Processing | 2002
Takayuki Kitasaka; Kensaku Mori; Junichi Hasegawa; Jun-ichiro Toriwaki; Kazuhiro Katada
This paper proposes a method for the automated extraction of the aorta and pulmonary artery (PA) in the mediastinum of the chest from uncontrasted chest X-ray CT images. The proposed method employs a model fitting technique to exploit the shape features of blood vessels. First, edge voxels are detected based on the local standard deviation of CT values. A likelihood image, which indicates the degree of likelihood that a voxel lies on the medial axis of a vessel, is calculated by applying the Euclidean distance transformation to the non-edge voxels. Second, the medial axis of each vessel is obtained by fitting the model while referring to the likelihood image. Finally, the aorta and PA regions are recovered from the medial axes by executing the reverse Euclidean distance transformation. We applied the proposed method to seven cases of uncontrasted chest X-ray CT images and evaluated the results by calculating a coincidence index between the extracted regions and manually traced regions. Experimental results showed that the extracted aorta and PA regions coincided with the manually input regions, with coincidence index values of 90% and 80-90%, respectively.
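The first two steps can be sketched as follows, using a local standard deviation of CT values as the edge criterion and SciPy's Euclidean distance transform for the likelihood image; the filter size and threshold are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def vessel_axis_likelihood(ct: np.ndarray, edge_std_threshold: float = 50.0):
    """Likelihood image for vessel medial axes: edge voxels are detected
    where the local standard deviation of CT values is high, and the
    Euclidean distance transform of the non-edge voxels then peaks near
    the centre of tubular structures."""
    local_mean = ndimage.uniform_filter(ct.astype(float), size=3)
    local_sq_mean = ndimage.uniform_filter(ct.astype(float) ** 2, size=3)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    edges = local_std > edge_std_threshold    # edge voxels
    # distance from each non-edge voxel to the nearest edge voxel
    likelihood = ndimage.distance_transform_edt(~edges)
    return likelihood
```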
Medical Image Computing and Computer-Assisted Intervention | 2003
Takayuki Kitasaka; Kensaku Mori; Yasuhito Suenaga; Junichi Hasegawa; Jun-ichiro Toriwaki
This paper presents a new method for extracting bronchus regions from 3D chest X-ray CT images based on structural features of the bronchus. The method enhances bronchial walls by applying a sharpening operation and segments each bronchial branch by recognizing the tree structure, starting from the trachea. During the extraction process, a volume of interest (VOI) containing the bronchial branch currently being processed is defined. Region growing is performed only inside the VOI, so that each bronchial branch is extracted with a suitable threshold value. The final bronchus region is obtained by unifying the extracted branches; the tree structure of the bronchus is extracted simultaneously. The proposed method was applied to three cases of 3D chest X-ray CT images. The experimental results showed that the method significantly improved extraction accuracy: about 82% of branches were extracted for 4th-order bronchi, 49% for 5th-order bronchi, and 20% for 6th-order bronchi, compared with 45%, 16%, and 3% by the previous method, which used region growing with a constant threshold value.
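The per-branch segmentation can be sketched as region growing confined to a single VOI; the 6-neighbourhood, data layout, and threshold handling below are assumptions for illustration.

```python
import numpy as np
from collections import deque

def grow_branch_in_voi(ct, voi_slices, seed, threshold):
    """Region growing confined to one volume of interest (VOI).
    `voi_slices` is a tuple of slice objects bounding the current branch,
    `seed` a (z, y, x) voxel index inside it, and `threshold` the
    branch-specific airway threshold (HU); all names are illustrative."""
    sub = ct[voi_slices]
    region = np.zeros(sub.shape, dtype=bool)
    if sub[seed] >= threshold:
        return region                      # seed is not inside the airway
    region[seed] = True
    q = deque([seed])
    while q:
        z, y, x = q.popleft()
        # visit the 6-connected neighbours inside the VOI
        for dz, dy, dx in ((1,0,0),(-1,0,0),(0,1,0),
                           (0,-1,0),(0,0,1),(0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < sub.shape[0] and 0 <= ny < sub.shape[1]
                    and 0 <= nx < sub.shape[2]
                    and not region[nz, ny, nx]
                    and sub[nz, ny, nx] < threshold):
                region[nz, ny, nx] = True
                q.append((nz, ny, nx))
    return region   # per-VOI results are unified into the full bronchus
```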
Academic Radiology | 2003
Yuichiro Hayashi; Kensaku Mori; Junichi Hasegawa; Yasuhito Suenaga; Jun-ichiro Toriwaki
Rationale and Objectives. When virtual endoscopy is used as a diagnostic tool, especially for detecting colon polyps, the user often performs an automated fly-through along automatically generated paths. In automated fly-through of the colon, there are blind areas behind the folds. The aim of this study is to detect regions left undisplayed during fly-through and to evaluate them quantitatively. Materials and Methods. Undisplayed regions are detected by marking displayed triangles in the case of surface rendering, or displayed voxels in the case of volume rendering; triangles or voxels without such marks are considered undisplayed. Several kinds of automated fly-through paths generated from the medial axes of the colon, as well as flattened views of the colon, are evaluated in terms of the rate of undisplayed regions. Results. The experimental results show that about 30% of the colon region is classified as undisplayed by conventional automated fly-through along the medial axis, whereas the flattened view leaves very few undisplayed regions. Conclusion. Automated fly-through methods can leave many regions undisplayed.
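The marking idea is straightforward to express; the sketch below assumes a hypothetical per-frame collection of rendered voxel indices (for example, gathered during ray casting) and computes the rate of undisplayed wall voxels.

```python
import numpy as np

def undisplayed_rate(wall_voxels: set, frames_visible: list) -> float:
    """Quantify blind areas during fly-through: every wall voxel rendered
    in any frame is marked as displayed, and the rest are counted as
    undisplayed. `wall_voxels` is the set of all colonic-wall voxel
    indices; `frames_visible` is a hypothetical per-frame list of the
    voxel indices actually rendered."""
    displayed = set()
    for visible in frames_visible:
        displayed.update(visible)          # mark voxels seen in this frame
    undisplayed = wall_voxels - displayed
    return len(undisplayed) / len(wall_voxels)
```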
Medical Image Computing and Computer-Assisted Intervention | 2005
Masahiro Oda; Takayuki Kitasaka; Yuichiro Hayashi; Kensaku Mori; Yasuhito Suenaga; Jun-ichiro Toriwaki
We propose a navigation-based computer aided diagnosis (CAD) system for the colon. When diagnosing the colon using virtual colonoscopy (VC), a physician makes a diagnosis by navigating through (flying through) the colon. However, the viewpoint and viewing direction must be changed many times because the colon is a long, winding organ with many folds, which makes this a time-consuming task for physicians. We propose a new navigation-based CAD system that provides virtual unfolded (VU) views of the colon, enabling physicians to observe a large area of the colonic wall at a glance. The system generates VU, VC, and CT slice views that are perfectly synchronized, and automatically detected polyp candidates are overlaid on them. We applied the system to abdominal CT images; the experimental results showed that it effectively generates VU views for observing colon regions.
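One way to sketch VU view generation is radial ray casting from the colon centreline; the frame vectors, air/wall threshold, and sampling parameters below are all illustrative assumptions, not the system's actual unfolding algorithm.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unfold_colon(ct, centerline, normals, binormals, n_angles=256, max_r=40):
    """Build a virtual unfolded (VU) view: for each centreline point, cast
    rays radially in the plane spanned by its normal and binormal vectors
    and record the distance at which the colonic wall (here: a CT value
    above an illustrative air/tissue threshold) is first hit."""
    threshold = -700.0                      # illustrative air/wall HU cut-off
    vu = np.full((len(centerline), n_angles), max_r, dtype=float)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for i, (c, n, b) in enumerate(zip(centerline, normals, binormals)):
        for j, th in enumerate(angles):
            direction = np.cos(th) * n + np.sin(th) * b
            for r in range(1, max_r):
                p = c + r * direction
                val = map_coordinates(ct, p.reshape(3, 1), order=1)[0]
                if val > threshold:         # first wall hit along this ray
                    vu[i, j] = r
                    break
    return vu                               # rows: along colon, cols: angle
```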
Computer Assisted Radiology and Surgery | 2003
Takayuki Kitasaka; Kensaku Mori; Junichi Hasegawa; Yasuhito Suenaga; Jun-ichiro Toriwaki
This paper presents a new method for extracting bronchus regions from 3D chest X-ray CT images based on structural features of the bronchus. The method enhances bronchial walls by applying a sharpening operation and segments each bronchial branch by recognizing the tree structure, starting from the trachea. During the extraction process, a volume of interest (VOI) containing the bronchial branch currently being processed is defined. Region growing is performed only inside the VOI, so that each bronchial branch is extracted with a suitable threshold value. The final bronchus region is obtained by unifying the extracted branches; the tree structure of the bronchus is extracted simultaneously. The proposed method was applied to three cases of 3D chest X-ray CT images. The experimental results showed that the method significantly improved extraction accuracy: about 82% of branches were extracted for 4th-order bronchi, 49% for 5th-order bronchi, and 20% for 6th-order bronchi, compared with 45%, 16%, and 3% by the previous method, which used region growing with a constant threshold value.
Medical Image Computing and Computer-Assisted Intervention | 2004
Takayuki Kitasaka; Kensaku Mori; Yuichiro Hayashi; Yasuhito Suenaga; Makoto Hashizume; Jun-ichiro Toriwaki
This paper describes a method for generating virtual pneumoperitoneum based on volumetric deformation, and its application to virtual laparoscopy. Laparoscopic surgery is now widely performed as a minimally invasive surgery, but a laparoscope has a very narrow field of view, which limits the area the surgeon can see. Generating views in which the abdominal wall is virtually elevated (virtual pneumoperitoneum) is therefore very helpful for intra-operative surgical navigation and pre-operative surgical planning. We deform the original 3-D abdominal CT images so that the abdominal wall is virtually elevated. The entire process consists of five major steps: (a) extracting the abdominal wall, (b) elastic modeling, (c) elastic deformation of the model, (d) deformation of the original image, and (e) rendering virtual laparoscopic images from the deformed image. We applied the method to three cases of 3-D abdominal CT images and confirmed that the abdominal wall was appropriately elevated by the proposed method. The resulting laparoscopic views are very helpful as additional views for the surgeon during intra-operative navigation and for pre-operative surgical planning.
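Step (d), deforming the original image, can be sketched as a backward warp of the CT volume under a per-voxel displacement field; the field itself, which would come from the elastic model of steps (b) and (c), is assumed to be given here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def deform_volume(ct: np.ndarray, displacement: np.ndarray) -> np.ndarray:
    """Warp the original CT volume with a per-voxel displacement field of
    shape (3, Z, Y, X). Backward warping: each output voxel samples the
    input volume at (position - displacement), with trilinear
    interpolation."""
    zz, yy, xx = np.meshgrid(np.arange(ct.shape[0]),
                             np.arange(ct.shape[1]),
                             np.arange(ct.shape[2]), indexing="ij")
    coords = np.stack([zz - displacement[0],
                       yy - displacement[1],
                       xx - displacement[2]])
    return map_coordinates(ct, coords, order=1, mode="nearest")
```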
Medical Image Computing and Computer-Assisted Intervention | 2003
Daisuke Deguchi; Kensaku Mori; Yasuhito Suenaga; Junichi Hasegawa; Jun-ichiro Toriwaki; Hirotsugu Takabatake; Hiroshi Natori
This paper presents a new image similarity measure for bronchoscope tracking based on image registration between real and virtual endoscopic images. Bronchoscope tracking is one of the fundamental functions in a bronchoscope navigation system. Since it is difficult to attach a positional sensor at the tip of the bronchoscope due to space limitations, image registration between real endoscopic (RE) and virtual endoscopic (VE) images is a powerful tool for tracking bronchoscopic camera motion. Summing-type image similarity measures, such as the mean squared error or mutual information, cannot properly estimate the position and orientation of the endoscope, since their outputs do not change significantly because of averaging. This paper proposes a new image similarity measure that effectively uses the characteristic structures observed in bronchoscopic views. The method divides the original image into a set of small subblocks and selects only the subblocks in which characteristic shapes are seen; an image similarity value is then calculated only inside the selected subblocks. We applied the proposed method to eight pairs of X-ray CT images and real bronchoscopic videos. The experiments showed a substantial improvement in continuous tracking performance: nearly 1000 consecutive frames were tracked correctly.
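A selective similarity of this kind can be sketched as follows; the block size, the gradient-based test for "characteristic structure", and the thresholds are illustrative stand-ins for the paper's actual selection criterion.

```python
import numpy as np

def selective_similarity(re_img, ve_img, block=32, grad_thresh=20.0):
    """Divide the frame into subblocks, keep only blocks showing strong
    structure (approximated here by a high mean gradient magnitude, e.g.
    fold or bifurcation edges), and average the correlation coefficient
    over the selected blocks only."""
    gy, gx = np.gradient(re_img.astype(float))
    grad = np.hypot(gy, gx)
    scores = []
    h, w = re_img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if grad[y:y+block, x:x+block].mean() < grad_thresh:
                continue                    # skip featureless subblocks
            a = re_img[y:y+block, x:x+block].astype(float).ravel()
            b = ve_img[y:y+block, x:x+block].astype(float).ravel()
            a -= a.mean()
            b -= b.mean()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            if denom > 0:
                scores.append(float(a @ b) / denom)
    return float(np.mean(scores)) if scores else 0.0
```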