
Publication


Featured research published by Hirohisa Oda.


Computerized Medical Imaging and Graphics | 2018

An application of cascaded 3D fully convolutional networks for medical image segmentation

Holger R. Roth; Hirohisa Oda; Xiangrong Zhou; Natsuki Shimizu; Ying Yang; Yuichiro Hayashi; Masahiro Oda; Michitaka Fujiwara; Kazunari Misawa; Kensaku Mori

Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from the large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ∼10% and allows it to focus on more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital that includes 150 CT scans, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve a significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results.
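The coarse-to-fine cropping step described above can be sketched as follows. This is a simplified NumPy illustration, not the authors' code: a toy mask stands in for the first-stage FCN output, and the crop it defines is what the second-stage FCN would receive.

```python
import numpy as np

def crop_to_candidate(volume, coarse_mask, margin=4):
    """Crop a CT volume to the bounding box of a coarse segmentation mask.

    In a cascaded setup, a first-stage FCN produces `coarse_mask`; the
    cropped region is then fed to a second, higher-resolution FCN.
    """
    coords = np.argwhere(coarse_mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, volume.shape)
    slices = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[slices], slices

# Toy example: a 64^3 volume with a small "organ" blob as the coarse prediction.
vol = np.zeros((64, 64, 64), dtype=np.float32)
mask = np.zeros_like(vol)
mask[20:30, 25:35, 30:40] = 1
roi, slices = crop_to_candidate(vol, mask, margin=4)
print(roi.shape)                 # (18, 18, 18)
print(roi.size / vol.size < 0.1) # True: far fewer voxels for the second stage
```

The fraction of voxels the second stage must classify shrinks dramatically, which is the mechanism behind the ∼10% figure in the abstract.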


arXiv: Computer Vision and Pattern Recognition | 2018

Towards dense volumetric pancreas segmentation in CT using 3D fully convolutional networks

Holger R. Roth; Masahiro Oda; Natsuki Shimizu; Hirohisa Oda; Yuichiro Hayashi; Takayuki Kitasaka; Michitaka Fujiwara; Kazunari Misawa; Kensaku Mori

Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients. In this work, we describe a custom-built 3D fully convolutional network (FCN) that can process a 3D image including the whole pancreas and produce an automatic segmentation. We investigate two variations of the 3D FCN architecture; one with concatenation and one with summation skip connections to the decoder part of the network. We evaluate our methods on a dataset from a clinical trial with gastric cancer patients, including 147 contrast-enhanced abdominal CT scans acquired in the portal venous phase. Using the summation architecture, we achieve an average Dice score of 89.7 ± 3.8 (range [79.8, 94.8])% in testing, achieving new state-of-the-art performance in pancreas segmentation on this dataset.
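The two skip-connection variants differ only in how encoder features are merged into the decoder. A minimal sketch of the difference (NumPy feature maps standing in for network tensors; not the paper's implementation):

```python
import numpy as np

def merge_skip(decoder_feat, encoder_feat, mode="sum"):
    """Merge an encoder skip connection into the decoder path.

    "sum" adds the feature maps channel-wise (requires matching channel
    counts); "concat" stacks them along the channel axis, so the next
    layer must process twice as many channels.
    """
    if mode == "sum":
        return decoder_feat + encoder_feat
    if mode == "concat":
        return np.concatenate([decoder_feat, encoder_feat], axis=0)
    raise ValueError(mode)

# Feature maps shaped (channels, depth, height, width).
dec = np.ones((32, 8, 8, 8))
enc = np.ones((32, 8, 8, 8))
print(merge_skip(dec, enc, "sum").shape)     # (32, 8, 8, 8)
print(merge_skip(dec, enc, "concat").shape)  # (64, 8, 8, 8)
```

Summation keeps the channel count (and hence memory and parameters of subsequent layers) constant, which matters for whole-volume 3D processing.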


International Workshop on Patch-based Techniques in Medical Imaging | 2017

Micro-CT Guided 3D Reconstruction of Histological Images

Kai Nagara; Holger R. Roth; Shota Nakamura; Hirohisa Oda; Takayasu Moriya; Masahiro Oda; Kensaku Mori

Histological images are very important for the diagnosis of cancer and other diseases. However, during the preparation of histological slides for microscopy, the 3D information of the tissue specimen is lost. Many 3D reconstruction methods for histological images have therefore been proposed. However, most approaches rely on the 2D histological images alone, which makes 3D reconstruction difficult due to the large deformations introduced by cutting and preparing the slides. In this work, we propose an image-guided approach to 3D reconstruction of histological images. Before histological preparation of the slides, the specimen is imaged using X-ray microtomography (micro-CT). We can then align each histological image back to the micro-CT image utilizing non-rigid registration. Our registration results show that our method can provide smooth 3D reconstructions with micro-CT guidance.


Medical Image Computing and Computer-Assisted Intervention | 2018

A Multi-scale Pyramid of 3D Fully Convolutional Networks for Abdominal Multi-organ Segmentation

Holger R. Roth; Chen Shen; Hirohisa Oda; Takaaki Sugino; Masahiro Oda; Yuichiro Hayashi; Kazunari Misawa; Kensaku Mori

Recent advances in deep learning, like 3D fully convolutional networks (FCNs), have improved the state-of-the-art in dense semantic segmentation of medical images. However, most network architectures require severe downsampling or cropping of the images to meet the memory limitations of today's GPU cards while still considering enough context in the images for accurate segmentation. In this work, we propose a novel approach that utilizes auto-context to perform semantic segmentation at higher resolutions in a multi-scale pyramid of stacked 3D FCNs. We train and validate our models on a dataset of manually annotated abdominal organs and vessels from 377 clinical CT images used in gastric surgery, and achieve promising results with close to 90% Dice score on average. For additional evaluation, we perform separate testing on datasets from different sources and achieve competitive results, illustrating the robustness of the model and approach.
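The auto-context coupling between pyramid levels can be sketched as follows: the coarse prediction is upsampled and stacked onto the higher-resolution image as an extra input channel for the next FCN. A simplified NumPy illustration under those assumptions (nearest-neighbour upsampling; random arrays stand in for the image and the coarse prediction):

```python
import numpy as np

def upsample_nn(pred, factor=2):
    """Nearest-neighbour upsampling of a prediction volume."""
    for axis in range(pred.ndim):
        pred = np.repeat(pred, factor, axis=axis)
    return pred

def auto_context_input(hires_volume, lowres_pred, factor=2):
    """Stack an upsampled coarse prediction onto the high-res image.

    The next FCN in the pyramid sees both the raw intensities and the
    previous scale's prediction as input channels (auto-context).
    """
    up = upsample_nn(lowres_pred, factor)
    return np.stack([hires_volume, up], axis=0)

hires = np.random.rand(32, 32, 32)
coarse = np.random.rand(16, 16, 16)   # prediction from the half-resolution FCN
x = auto_context_input(hires, coarse)
print(x.shape)  # (2, 32, 32, 32)
```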


arXiv: Computer Vision and Pattern Recognition | 2018

Unsupervised segmentation of 3D medical images based on clustering and deep representation learning

Takayasu Moriya; Holger R. Roth; Shota Nakamura; Hirohisa Oda; Kai Nagara; Masahiro Oda; Kensaku Mori

This paper presents a novel unsupervised segmentation method for 3D medical images. Convolutional neural networks (CNNs) have brought significant advances in image segmentation. However, most of the recent methods rely on supervised learning, which requires large amounts of manually annotated data. Thus, it is challenging for these methods to cope with the growing amount of medical images. This paper proposes a unified approach to unsupervised deep representation learning and clustering for segmentation. Our proposed method consists of two phases. In the first phase, we learn deep feature representations of training patches from a target image using joint unsupervised learning (JULE) that alternately clusters representations generated by a CNN and updates the CNN parameters using cluster labels as supervisory signals. We extend JULE to 3D medical images by utilizing 3D convolutions throughout the CNN architecture. In the second phase, we apply k-means to the deep representations from the trained CNN and then project cluster labels to the target image in order to obtain the fully segmented image. We evaluated our methods on three images of lung cancer specimens scanned with micro-computed tomography (micro-CT). The automatic segmentation of pathological regions in micro-CT could further contribute to the pathological examination process. Hence, we aim to automatically divide each image into the regions of invasive carcinoma, noninvasive carcinoma, and normal tissue. Our experiments show the potential abilities of unsupervised deep representation learning for medical image segmentation.
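The second phase (cluster the learned representations, then project the labels back) can be sketched with a minimal Lloyd's k-means. This is an illustrative NumPy stand-in, not the paper's pipeline: the toy feature vectors below play the role of the deep representations produced by the trained CNN, and a deterministic farthest-point initialisation replaces random seeding.

```python
import numpy as np

def kmeans(features, k, iters=20):
    """Minimal Lloyd's k-means with deterministic farthest-point init."""
    centroids = [features[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centroids], axis=0)
        centroids.append(features[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        dist = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels

# Toy "deep representations": two well-separated groups of patch features.
feats = np.vstack([
    np.random.default_rng(1).normal(0.0, 0.1, (50, 8)),
    np.random.default_rng(2).normal(5.0, 0.1, (50, 8)),
])
labels = kmeans(feats, k=2)
# Patches of one group all land in one cluster; projecting these labels
# back to patch locations yields the segmented image.
print(labels[:50].min() == labels[:50].max(), labels[0] != labels[-1])  # True True
```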


arXiv: Computer Vision and Pattern Recognition | 2018

Unsupervised pathology image segmentation using representation learning with spherical k-means

Takayasu Moriya; Holger R. Roth; Shota Nakamura; Hirohisa Oda; Kai Nagara; Masahiro Oda; Kensaku Mori

This paper presents a novel method for unsupervised segmentation of pathology images. Staging of lung cancer is a major prognostic factor. Measuring the maximum dimensions of the invasive component in pathology images is an essential task. Therefore, image segmentation methods for visualizing the extent of invasive and noninvasive components on pathology images could support pathological examination. However, it is challenging for most of the recent segmentation methods that rely on supervised learning to cope with unlabeled pathology images. In this paper, we propose a unified approach to unsupervised representation learning and clustering for pathology image segmentation. Our method consists of two phases. In the first phase, we learn feature representations of training patches from a target image using spherical k-means. The purpose of this phase is to obtain cluster centroids which can be used as filters for feature extraction. In the second phase, we apply conventional k-means to the representations extracted by the centroids and then project cluster labels to the target images. We evaluated our methods on pathology images of lung cancer specimens. Our experiments showed that the proposed method outperforms traditional k-means segmentation and the multithreshold Otsu method both quantitatively and qualitatively with an improved normalized mutual information (NMI) score of 0.626 compared to 0.168 and 0.167, respectively. Furthermore, we found that the centroids can be applied to the segmentation of other slices from the same sample.
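Spherical k-means differs from conventional k-means in that features and centroids live on the unit sphere and assignment uses cosine similarity. A minimal sketch under that definition (NumPy only, deterministic farthest-point initialisation; the toy data below are not pathology features):

```python
import numpy as np

def spherical_kmeans(X, k, iters=20):
    """Cluster unit-norm features by cosine similarity.

    The learned centroids can later serve as filters for feature
    extraction, as in the paper's first phase.
    """
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    centroids = [X[0]]                                  # deterministic init for this sketch
    for _ in range(k - 1):
        sim = np.max(np.stack([X @ c for c in centroids]), axis=0)
        centroids.append(X[sim.argmin()])               # least-similar point so far
    C = np.stack(centroids)
    for _ in range(iters):
        labels = (X @ C.T).argmax(axis=1)               # assign by cosine similarity
        for j in range(k):
            if np.any(labels == j):
                v = X[labels == j].sum(axis=0)
                C[j] = v / np.linalg.norm(v)            # project centroid back to the sphere
    return C, labels

rng = np.random.default_rng(0)
a = rng.normal([1.0, 0, 0, 0], 0.05, (40, 4))  # features near one direction
b = rng.normal([0, 1.0, 0, 0], 0.05, (40, 4))  # features near another
C, labels = spherical_kmeans(np.vstack([a, b]), k=2)
print(labels[:40].min() == labels[:40].max(), labels[0] != labels[-1])  # True True
```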


Medical imaging technology | 2018

Deep Learning and Its Application to Medical Image Segmentation

Holger R. Roth; Chen Shen; Hirohisa Oda; Masahiro Oda; Yuichiro Hayashi; Kazunari Misawa; Kensaku Mori

One of the most common tasks in medical imaging is semantic segmentation. Achieving this segmentation automatically has been an active area of research, but the task has proven very challenging due to the large variation of anatomy across different patients. However, recent advances in deep learning have made it possible to significantly improve the performance of image recognition and semantic segmentation methods in the field of computer vision. Due to the data-driven approaches of hierarchical feature learning in deep learning frameworks, these advances can be translated to medical images without much difficulty. Several variations of deep convolutional neural networks have been successfully applied to medical images. Fully convolutional architectures in particular have proven efficient for segmentation of 3D medical images. In this article, we describe how to build a 3D fully convolutional network (FCN) that can process 3D images in order to produce automatic semantic segmentations. The model is trained and evaluated on a clinical computed tomography (CT) dataset and shows state-of-the-art performance in multi-organ segmentation.


Medical Imaging 2018: Computer-Aided Diagnosis | 2018

Dense volumetric detection and segmentation of mediastinal lymph nodes in chest CT images

Hirohisa Oda; Holger R. Roth; Kanwal K. Bhatia; Masahiro Oda; Takayuki Kitasaka; Shingo Iwano; Hirotoshi Homma; Hirotsugu Takabatake; Masaki Mori; Hiroshi Natori; Julia A. Schnabel; Kensaku Mori

We propose a novel mediastinal lymph node detection and segmentation method for chest CT volumes based on fully convolutional networks (FCNs). Most lymph node detection methods are based on filters for blob-like structures, which are not specific to lymph nodes. The 3D U-Net is a recent example of the state-of-the-art 3D FCNs. The 3D U-Net can be trained to learn appearances of lymph nodes in order to output lymph node likelihood maps on input CT volumes. However, it is prone to oversegmentation of each lymph node due to the strong data imbalance between lymph nodes and the remaining part of the CT volumes. To moderate the balance of sizes between the target classes, we train the 3D U-Net using not only lymph node annotations but also other anatomical structures (lungs, airways, aortic arches, and pulmonary arteries) that can be extracted robustly in an automated fashion. We applied the proposed method to 45 cases of contrast-enhanced chest CT volumes. Experimental results showed that 95.5% of lymph nodes were detected with 16.3 false positives per CT volume. The segmentation results showed that the proposed method can prevent oversegmentation, achieving an average Dice score of 52.3 ± 23.1%, compared to 49.2 ± 23.8% for the baseline method.
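The label-construction step (adding auxiliary organ classes to dilute the background/lymph-node imbalance) can be sketched as follows. A NumPy illustration with toy masks, not the paper's data; the overwrite order is an assumption made for this sketch:

```python
import numpy as np

def build_multiclass_labels(lymph_nodes, auxiliary_masks):
    """Combine lymph-node annotations with auxiliary organ masks.

    Training with extra classes (lungs, airways, ...) reduces the imbalance
    between tiny lymph nodes and the huge background class.
    Label 0 = background, 1 = lymph node, 2.. = auxiliary structures.
    Lymph nodes are written last so they are never overwritten.
    """
    labels = np.zeros(lymph_nodes.shape, dtype=np.uint8)
    for i, mask in enumerate(auxiliary_masks, start=2):
        labels[mask > 0] = i
    labels[lymph_nodes > 0] = 1
    return labels

shape = (16, 16, 16)
ln = np.zeros(shape);      ln[8, 8, 8] = 1            # a tiny lymph node
lungs = np.zeros(shape);   lungs[2:14, 2:8, :] = 1    # toy lung mask
airways = np.zeros(shape); airways[2:14, 8:10, :] = 1 # toy airway mask
y = build_multiclass_labels(ln, [lungs, airways])
print(np.unique(y))  # [0 1 2 3]
```

Compared with a binary node/background problem, the background class now competes with several well-represented organ classes during training.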


Medical Image Computing and Computer-Assisted Intervention | 2017

TBS: Tensor-based supervoxels for unfolding the heart

Hirohisa Oda; Holger R. Roth; Kanwal K. Bhatia; Masahiro Oda; Takayuki Kitasaka; Toshiaki Akita; Julia A. Schnabel; Kensaku Mori

Investigation of the myofiber structure of the heart is desired for studies of anatomy and diseases. However, it is difficult to understand the left ventricle structure intuitively because it consists of three layers with different myofiber orientations. In this work, we propose an unfolding method for micro-focus X-ray CT (μCT) volumes of the heart. First, we explore a novel supervoxel over-segmentation technique, Tensor-Based Supervoxels (TBS), which allows us to divide the left ventricle into three layers. We utilize TBS and B-spline curves for extraction of the layers. Finally, we project μCT intensities in each layer to an unfolded view. Experiments are performed using three μCT images of the left ventricle acquired from canine heart specimens. In all cases, the myofiber structure could be observed clearly in the unfolded views. This is promising for supporting cardiac studies.
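A core building block of tensor-based analysis of fibre orientation is the structure tensor: gradients vary least along a fibre, so the eigenvector of the smallest eigenvalue of the averaged gradient outer-product tensor approximates the fibre direction. A minimal sketch of that idea (not the paper's TBS algorithm; here the tensor is averaged over a whole toy volume, whereas a real method averages locally):

```python
import numpy as np

def dominant_fibre_direction(vol):
    """Estimate fibre orientation from the structure tensor.

    Returns the eigenvector of the smallest eigenvalue of the mean
    gradient outer-product tensor: the direction of least intensity
    variation.
    """
    gz, gy, gx = np.gradient(vol.astype(float))
    g = np.stack([gz, gy, gx], axis=-1)          # per-voxel gradient vectors
    T = g[..., :, None] * g[..., None, :]        # per-voxel outer products (..., 3, 3)
    T_mean = T.reshape(-1, 3, 3).mean(axis=0)    # global average (local in practice)
    w, v = np.linalg.eigh(T_mean)                # eigenvalues in ascending order
    return v[:, 0]

z, y, x = np.meshgrid(np.arange(8), np.arange(8), np.arange(8), indexing="ij")
vol = z**2 + y          # intensity constant along x: the "fibre" runs along x
d = dominant_fibre_direction(vol)
print(np.round(np.abs(d), 3))  # [0. 0. 1.]
```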


Proceedings of SPIE | 2017

Hessian-assisted supervoxel: Structure-oriented voxel clustering and application to mediastinal lymph node detection from CT volumes

Hirohisa Oda; Kanwal K. Bhatia; Masahiro Oda; Takayuki Kitasaka; Shingo Iwano; Hirotoshi Homma; Hirotsugu Takabatake; Masaki Mori; Hiroshi Natori; Julia A. Schnabel; Kensaku Mori

In this paper, we propose a novel supervoxel segmentation method designed for mediastinal lymph nodes by embedding Hessian-based feature extraction. Starting from a popular supervoxel segmentation method, SLIC, which computes supervoxels by minimising a combined intensity and spatial distance, we overcome this method's limitation of merging neighboring regions with similar intensity by introducing Hessian-based feature analysis into the supervoxel formation. We call this structure-oriented voxel clustering, which allows more accurate division into distinct regions having blob-, line- or sheet-like structures. This way, different tissue types in chest CT volumes can be segmented individually, even if neighboring tissues have similar intensity or are of non-spherical extent. We demonstrate the performance of the Hessian-assisted supervoxel technique by applying it to mediastinal lymph node detection in 47 chest CT volumes, reducing false positives among lymph node candidate regions. 89% of lymph nodes whose short axis is at least 10 mm could be detected with 5.9 false positives per case using our method, compared to a detection rate of 83% with 6.4 false positives per case for our previous method.
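The blob/line/sheet distinction comes from the relative magnitudes of the three Hessian eigenvalues. A crude illustrative classifier under that convention (the threshold and ratios are assumptions for this sketch, not the paper's formulation):

```python
def classify_structure(hessian_eigvals, tol=0.25):
    """Rough blob/line/sheet classification from Hessian eigenvalues.

    For a bright structure on a dark background, with eigenvalues sorted
    by magnitude |l1| <= |l2| <= |l3|:
      sheet: |l1|, |l2| ~ 0, l3 << 0
      line:  |l1| ~ 0, l2 ~ l3 << 0
      blob:  l1 ~ l2 ~ l3 << 0
    """
    l = sorted(hessian_eigvals, key=abs)
    a1, a2, a3 = (abs(v) for v in l)
    if a3 == 0:
        return "flat"
    if a2 / a3 < tol:
        return "sheet"
    if a1 / a3 < tol:
        return "line"
    return "blob"

print(classify_structure([-0.01, -0.02, -2.0]))  # sheet
print(classify_structure([-0.05, -1.8, -2.0]))   # line
print(classify_structure([-1.5, -1.8, -2.0]))    # blob
```

In a structure-oriented clustering scheme, such per-voxel structure labels keep voxels of different geometric types (e.g. a blob-like node next to a line-like vessel) from being merged into one supervoxel even when their intensities match.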

Collaboration


Hirohisa Oda's top collaborators.

Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar

Takayuki Kitasaka

Aichi Institute of Technology


Hiroshi Natori

Sapporo Medical University
