
Publication


Featured research published by Yoshihisa Shinagawa.


IEEE Transactions on Medical Imaging | 2016

Multi-Instance Deep Learning: Discover Discriminative Local Anatomies for Bodypart Recognition

Zhennan Yan; Yiqiang Zhan; Zhigang Peng; Shu Liao; Yoshihisa Shinagawa; Shaoting Zhang; Dimitris N. Metaxas; Xiang Sean Zhou

In general image recognition problems, discriminative information often lies in local image patches. For example, most human identity information exists in the image patches containing human faces. The same situation holds in medical images as well. The “bodypart identity” of a transversal slice, i.e., which body part the slice comes from, is often indicated by local image information; e.g., a cardiac slice and an aortic arch slice are only differentiated by the mediastinum region. In this work, we design a multi-stage deep learning framework for image classification and apply it to bodypart recognition. Specifically, the proposed framework aims to: 1) discover the local regions that are discriminative or non-informative for the image classification problem, and 2) learn an image-level classifier based on these local regions. We achieve these two tasks in the two stages of the learning scheme, respectively. In the pre-train stage, a convolutional neural network (CNN) is learned in a multi-instance learning fashion to extract the most discriminative and non-informative local patches from the training slices. In the boosting stage, the pre-learned CNN is further boosted by these local patches for image classification. The CNN learned by exploiting the discriminative local appearances becomes more accurate than one learned from the global image context. The key hallmark of our method is that it automatically discovers the discriminative and non-informative local patches through multi-instance deep learning; thus, no manual annotation is required. Our method is validated on a synthetic dataset and a large-scale CT dataset. It achieves better performance than state-of-the-art approaches, including the standard deep CNN.
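
A minimal sketch of the multi-instance pre-training step described above, written in PyTorch; the patch size, network shape, and max-over-patches pooling rule are illustrative assumptions, not the authors' exact configuration.

```python
# Multi-instance learning (MIL) sketch: each slice (bag) is cut into local
# patches (instances); only the slice-level label is known, so the slice
# score is taken as the max over its patch scores, letting the most
# discriminative patch drive the gradient.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchCNN(nn.Module):
    """Scores a single local patch; weights are shared across all patches."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, patches):            # (N, 1, 28, 28)
        h = self.features(patches).flatten(1)
        return self.classifier(h)          # (N, n_classes) patch logits

def mil_slice_loss(model, patches, slice_label):
    """Max-pool patch logits into one slice-level prediction (MIL)."""
    patch_logits = model(patches)                  # (N, n_classes)
    slice_logits = patch_logits.max(dim=0).values  # best patch per class
    return F.cross_entropy(slice_logits.unsqueeze(0), slice_label.unsqueeze(0))

# Usage: 36 candidate 28x28 patches from one CT slice, slice label = class 2.
model = PatchCNN(n_classes=5)
patches = torch.randn(36, 1, 28, 28)
loss = mil_slice_loss(model, patches, torch.tensor(2))
loss.backward()
```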


IEEE Transactions on Image Processing | 2008

Wavelet-Based Joint Estimation and Encoding of Depth-Image-Based Representations for Free-Viewpoint Rendering

Matthieu Maitre; Yoshihisa Shinagawa; Minh N. Do

We propose a wavelet-based codec for the static depth-image-based representation, which allows viewers to freely choose the viewpoint. The proposed codec jointly estimates and encodes the unknown depth map from multiple views using a novel rate-distortion (RD) optimization scheme. The rate constraint reduces the ambiguity of depth estimation by favoring piecewise-smooth depth maps. The optimization is efficiently solved by a novel dynamic programming algorithm over trees of integer wavelet coefficients. The codec encodes the image and the depth map jointly to decrease their redundancy and to provide an RD-optimized bitrate allocation between the two. The codec also offers scalability both in resolution and in quality. Experiments on real data show the effectiveness of the proposed codec.
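
A toy illustration of the rate-distortion tradeoff driving such a codec, assuming a Lagrangian cost J = D + λR and a small binary coefficient tree; the unit-rate model and keep-or-prune rule are illustrative stand-ins, not the paper's actual codec.

```python
# Toy rate-distortion DP on a binary tree of wavelet coefficients.
# At each node we either code the coefficient (paying rate) or zero the
# whole subtree (paying squared-error distortion), minimizing
# J = D + lambda * R, exactly the kind of tradeoff a tree-structured
# dynamic program can solve bottom-up.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    coeff: float
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def subtree_energy(node):
    """Sum of squared coefficients: distortion if the subtree is zeroed."""
    if node is None:
        return 0.0
    return node.coeff**2 + subtree_energy(node.left) + subtree_energy(node.right)

def best_cost(node, lam):
    """Minimum Lagrangian cost J = D + lam*R over keep/prune decisions."""
    if node is None:
        return 0.0
    prune = subtree_energy(node)   # distortion only, zero rate
    keep = lam * 1.0 + best_cost(node.left, lam) + best_cost(node.right, lam)
    return min(prune, keep)

# Usage: a large lambda favors pruning (low rate), a small lambda favors
# keeping coefficients (low distortion).
root = Node(4.0, Node(0.5, Node(0.1)), Node(2.0))
print(best_cost(root, lam=0.2), best_cost(root, lam=10.0))
```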


Computer Vision and Pattern Recognition | 2010

Stratified learning of local anatomical context for lung nodules in CT images

Dijia Wu; Le Lu; Jinbo Bi; Yoshihisa Shinagawa; Kim L. Boyer; Arun Krishnan; Marcos Salganicoff

The automatic detection of lung nodules attached to other pulmonary structures is a useful yet challenging task in lung CAD systems. In this paper, we propose a stratified statistical learning approach to recognize whether a candidate nodule detected in CT images connects to any of three other major lung anatomies, namely vessel, fissure and lung wall, or is solitary with background parenchyma. First, we develop a fully automated voxel-by-voxel labeling/segmentation method for nodule, vessel, fissure, lung wall and parenchyma given a 3D lung image, via a unified feature set and classifier under a conditional random field. Second, the Class Probability Response Maps (PRM) generated by the voxel-level classifiers are used to form pairwise Probability Co-occurrence Maps (PCM), which encode the spatial contextual correlations of the candidate nodule in relation to other anatomical landmarks. Based on the PCMs, higher-level classifiers are trained to recognize whether the nodule touches other pulmonary structures, as a multi-label problem. We also present a new iterative fissure structure enhancement filter with superior performance. For experimental validation, we create an annotated database of 784 subvolumes, with nodules of various sizes, shapes, densities and contextual anatomies, from 239 patients. A high multi-class voxel labeling accuracy of 89.3% to 91.2% is achieved. The Area under the ROC Curve (AUC) of vessel, fissure and lung wall connectivity classification reaches 0.8676, 0.8692 and 0.9275, respectively.
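
The paper's exact PCM construction is not reproduced here; the sketch below shows one plausible reading of the idea, assuming co-occurring probability mass from two class maps is accumulated into spatial distance bins. Bin count, threshold, and the product weighting are illustrative choices.

```python
# Sketch of a pairwise Probability Co-occurrence Map (PCM): for two
# per-voxel class probability maps (e.g., nodule vs. vessel), accumulate
# the product of probabilities into distance bins, capturing how strongly
# the two classes co-occur at a given spatial separation.
import numpy as np

def probability_cooccurrence(p_a, p_b, n_bins=8, max_dist=8.0):
    """p_a, p_b: (D, H, W) class probability maps from voxel classifiers."""
    a_idx = np.argwhere(p_a > 0.5)                 # voxels likely of class A
    b_idx = np.argwhere(p_b > 0.5)                 # voxels likely of class B
    pcm = np.zeros(n_bins)
    for va in a_idx:
        d = np.linalg.norm(b_idx - va, axis=1)
        keep = d < max_dist
        bins = (d[keep] / max_dist * n_bins).astype(int)
        weights = p_a[tuple(va)] * p_b[tuple(b_idx[keep].T)]
        np.add.at(pcm, bins, weights)              # accumulate per distance bin
    return pcm / max(pcm.sum(), 1e-8)              # normalized profile

# Usage: two toy 3D probability maps.
rng = np.random.default_rng(0)
p_nodule, p_vessel = rng.random((2, 8, 8, 8))
print(probability_cooccurrence(p_nodule, p_vessel))
```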


Computer Vision and Pattern Recognition | 2008

Symmetric multi-view stereo reconstruction from planar camera arrays

Matthieu Maitre; Yoshihisa Shinagawa; Minh N. Do

We present a novel stereo algorithm which performs surface reconstruction from planar camera arrays. It incorporates the merits of both generic camera arrays and rectified binocular setups, recovering large surfaces like the former and performing efficient computations like the latter. First, we introduce a rectification algorithm which gives freedom in the design of camera arrays and simplifies photometric and geometric computations. We then define a novel set of data-fusion functions over 4-neighborhoods of cameras, which treat all cameras symmetrically and enable standard binocular stereo algorithms to handle arrays with an arbitrary number of cameras. In particular, we introduce a photometric fusion function which handles partial visibility and extracts depth information along both horizontal and vertical baselines. Finally, we show that layered depth images and sprites with depth can be efficiently extracted from the rectified 3D space. Experimental results on real images confirm the effectiveness of the proposed method, which reconstructs dense surfaces 20% larger on the Tsukuba dataset.
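
A minimal sketch of a symmetric photometric fusion over a 4-neighborhood of rectified cameras; the best-half selection is an illustrative stand-in for the paper's visibility handling, and the disparity sign conventions depend on the rectification.

```python
# For a hypothesized disparity d at reference pixel (y, x), compare against
# the four neighboring cameras (left/right along the horizontal baseline,
# up/down along the vertical one) and keep the best half of the matches to
# tolerate partial occlusion. Signs chosen for illustration only.
import numpy as np

def fused_cost(ref, left, right, up, down, y, x, d):
    """Symmetric photometric cost of disparity d at (y, x)."""
    h, w = ref.shape
    candidates = []
    if 0 <= x + d < w: candidates.append(abs(ref[y, x] - left[y, x + d]))
    if 0 <= x - d < w: candidates.append(abs(ref[y, x] - right[y, x - d]))
    if 0 <= y + d < h: candidates.append(abs(ref[y, x] - up[y + d, x]))
    if 0 <= y - d < h: candidates.append(abs(ref[y, x] - down[y - d, x]))
    best = sorted(candidates)[: max(1, len(candidates) // 2)]  # occlusion-robust
    return float(np.mean(best))

# Usage: five toy grayscale views (reference plus its 4-neighborhood).
rng = np.random.default_rng(1)
views = rng.random((5, 32, 32))
print(fused_cost(*views, y=16, x=16, d=3))
```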


Proceedings of SPIE | 2014

Multi-view learning based robust collimation detection in digital radiographs

Hongda Mao; Zhigang Peng; Yoshihisa Shinagawa; Yiqiang Zhan; Xiang Sean Zhou

In X-ray examinations, it is essential that radiographers carefully collimate to the appropriate anatomy of interest to minimize the overall integral dose to the patient. The shadow regions are not diagnostically meaningful and can impair the overall image quality. Thus, it is desirable to detect the collimation and exclude the shadow regions to optimize image display. However, due to the large variability of collimated images, collimation detection remains a challenging task. In this paper, we observe that a region of interest (ROI) in an image, such as the collimated region, can be described by two distinct views: a cluster of pixels within the ROI, and the corners of the ROI. Based on this observation, we propose a robust multi-view learning based strategy for collimation detection in digital radiography. Specifically, one view comes from a random-forest-based region detector, which provides pixel-wise image classification in which each pixel is labeled as either in-collimation or out-of-collimation. The other view comes from a discriminative, learning-based landmark detector, which detects the corners and localizes the collimation within the image. Given the large variability of collimated images, however, the detection from either view alone may be imperfect. Therefore, we adopt an adaptive view-fusion step to obtain the final detection by combining the region and corner detections. We evaluate our algorithm on a database of 665 X-ray images of widely varying types and dosages and obtain a high detection accuracy (95%), compared with the region detector alone (87%) and the landmark detector alone (83%).
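
A minimal sketch of fusing the two views, assuming a pixel-wise probability map from the region detector and an axis-aligned rectangle from the corner detector; the fixed confidence weighting is an illustrative simplification of the paper's adaptive fusion rule.

```python
# Fuse a pixel-wise collimation probability map with the rectangle implied
# by detected corners: take a confidence-weighted average of the two views
# and threshold at 0.5 to obtain the final collimation mask.
import numpy as np

def fuse_views(pixel_prob, corners, w_region=0.5):
    """pixel_prob: (H, W) in [0, 1]; corners: axis-aligned (y0, x0, y1, x1)."""
    corner_mask = np.zeros_like(pixel_prob)
    y0, x0, y1, x1 = corners
    corner_mask[y0:y1, x0:x1] = 1.0                  # rectangle from corner view
    fused = w_region * pixel_prob + (1.0 - w_region) * corner_mask
    return fused >= 0.5                              # final binary mask

# Usage: toy 64x64 probability map and a detected collimation rectangle.
rng = np.random.default_rng(2)
prob = rng.random((64, 64))
mask = fuse_views(prob, corners=(8, 8, 56, 56))
print(mask.mean())
```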


International Symposium on Biomedical Imaging | 2014

Saliency-based rotation invariant descriptor for wrist detection in whole body CT images

Mingchen Gao; Yiqiang Zhan; Gerardo Hermosillo; Yoshihisa Shinagawa; Dimitris N. Metaxas; Xiang Sean Zhou

In this paper, we propose a saliency-based rotation invariant descriptor and apply it to detect wrists in CT images. The descriptor is motivated by the observation that salient landmarks around the wrist usually form a characteristic spatial configuration (Fig. 1). In our framework, a set of interest points is first computed via scale-space analysis. For each interest point, we compute a pyramid of scale-distance 2D histograms constructed from neighboring interest points. The descriptor represents the spatial configuration of neighboring interest points in a rotation-invariant fashion. A cascade of random forests is trained to distinguish the wrist from other anatomies using this descriptor. Our algorithm shows robust and accurate performance on 41 whole body CT scans with diverse context, orientations and articulation configurations.
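
A minimal sketch of one level of such a scale-distance histogram, assuming interest points carry a position and a detection scale; the bin counts and ranges are illustrative choices.

```python
# Rotation-invariant "scale-distance" descriptor sketch: for one interest
# point, histogram its neighbors by (log distance, log scale ratio).
# Because only distances and scales enter, never angles, the descriptor
# is invariant to rotation of the point configuration.
import numpy as np

def scale_distance_descriptor(center, neighbors, n_dist=6, n_scale=4):
    """center, neighbors: rows of (x, y, z, scale) interest points."""
    d = np.linalg.norm(neighbors[:, :3] - center[:3], axis=1)
    s = neighbors[:, 3] / center[3]                 # scale ratio
    hist, _, _ = np.histogram2d(
        np.log1p(d), np.log(s),
        bins=(n_dist, n_scale),
        range=((0.0, 5.0), (-2.0, 2.0)),
    )
    return (hist / max(hist.sum(), 1)).ravel()      # normalized 2D histogram

# Usage: a center interest point and 20 toy neighbors.
rng = np.random.default_rng(3)
pts = np.column_stack([rng.random((21, 3)) * 50, rng.uniform(1, 4, 21)])
print(scale_distance_descriptor(pts[0], pts[1:]))
```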


Proceedings of SPIE | 2009

Physical priors in virtual colonoscopy

Hassan Rivaz; Yoshihisa Shinagawa; Jianming Liang

Electronic colon cleansing (ECC) aims to remove the contrast agent from CT abdominal images so that a virtual model of the colon can be constructed. Virtual colonoscopy requires either liquid or solid preparation of the colon before CT imaging. This paper addresses ECC for both preparation methods, in two parts. In the first part, meniscus removal in the liquid preparation is studied. The meniscus is the curve seen at the top of a liquid in response to its container. Left on the colon wall, the meniscus can decrease the sensitivity and specificity of virtual colonoscopy. We state the differential equation that governs the profile of the meniscus and propose an algorithm for calculating the boundary of the contrast agent. We compute the surface tension of the liquid-colon wall contact using in-vivo CT data. Our results show that the surface tension can be estimated with an acceptable degree of uncertainty. Such an estimate, along with the meniscus profile differential equation, will be used as a priori knowledge to aid meniscus segmentation. In the second part, we study ECC in solid preparation of the colon. Since the colon is pressurized with air before acquisition of the CT images, a prior on the shape of the colon wall can be obtained. We present such a prior and investigate it using patient data. We show that the shape prior holds in certain parts of the colon and propose a method that uses this prior to ease pseudoenhancement correction.
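
The abstract does not reproduce the governing equation; for reference, the classical two-dimensional meniscus profile against a wall follows the Young-Laplace balance below, a standard form that the paper's equation presumably resembles rather than its exact statement.

```latex
% Classical 2D meniscus profile h(x) against a vertical wall:
% curvature balances hydrostatic pressure (Young--Laplace), with the
% capillary length a set by surface tension gamma, density rho, gravity g,
% and the wall contact angle theta fixing the boundary slope.
\frac{h''(x)}{\bigl(1 + h'(x)^2\bigr)^{3/2}} = \frac{h(x)}{a^2},
\qquad a = \sqrt{\frac{\gamma}{\rho g}},
\qquad h'(0) = -\cot\theta, \quad h(x) \to 0 \ \text{as}\ x \to \infty
```

Estimating the surface tension γ from in-vivo data, as the paper does, pins down the capillary length a and hence the expected meniscus shape.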


Proceedings of SPIE | 2014

Sparse appearance model-based algorithm for automatic segmentation and identification of articulated hand bones

Fitsum A. Reda; Zhigang Peng; Shu Liao; Yoshihisa Shinagawa; Yiqiang Zhan; Gerardo Hermosillo; Xiang Sean Zhou

Automatic and precise segmentation of hand bones is important for many medical imaging applications. Although several previous studies address bone segmentation, automatically segmenting articulated hand bones remains a challenging task. The highly articulated nature of hand bones limits the effectiveness of atlas-based segmentation methods. Low-level information derived from the image of interest alone is insufficient for detecting bones and distinguishing the boundaries of different bones that are in close proximity to each other. In this study, we propose a method that combines an articulated statistical shape model and a local exemplar-based appearance model for automatically segmenting hand bones in CT. Our approach performs a hierarchical articulated shape deformation driven by a set of local exemplar-based appearance models. Specifically, for each point in the shape model, the local appearance model is described by a set of profiles of low-level image features along the normal of the shape. During segmentation, each point in the shape model is deformed to a new point whose image features are closest to the appearance model. The shape model is also constrained by an articulation model described by a set of pre-determined landmarks on the finger joints. In this way, the deformation is robust to sporadic false bony edges and is able to fit fingers with large articulations. We validated our method on 23 CT scans and achieved a segmentation success rate of approximately 89.7%. This result indicates that our method is viable for automatic segmentation of articulated hand bones in conventional CT.
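
A minimal sketch of one profile-driven deformation step, assuming raw intensity as the (single) low-level feature and a small integer search range; the real method uses richer feature profiles and a hierarchical, articulation-constrained deformation.

```python
# One deformation step: a shape-model point searches along its surface
# normal for the position whose sampled intensity profile best matches its
# learned exemplar profile (sum of squared differences).
import numpy as np

def deform_point(volume, point, normal, exemplar, search=5, half=3):
    """Move `point` along `normal` to the best-matching profile position."""
    def profile(p):
        # Sample intensities at integer offsets along the normal.
        idx = [np.round(p + t * normal).astype(int) for t in range(-half, half + 1)]
        return np.array([volume[tuple(i)] for i in idx])

    offsets = range(-search, search + 1)
    costs = [np.sum((profile(point + o * normal) - exemplar) ** 2) for o in offsets]
    return point + offsets[int(np.argmin(costs))] * normal

# Usage: toy 3D volume; the exemplar is the profile at the true position,
# so the point stays put.
rng = np.random.default_rng(4)
vol = rng.random((32, 32, 32))
p, n = np.array([16.0, 16.0, 16.0]), np.array([0.0, 0.0, 1.0])
exemplar = np.array([vol[16, 16, 13 + t] for t in range(7)])
print(deform_point(vol, p, n, exemplar))
```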


International Conference on Computer Vision | 2009

Untangling fibers by quotient appearance manifold mapping for grayscale shape classification

Yoshihisa Shinagawa; Yuping Lin

Appearance manifolds have been one of the most powerful methods for object recognition. However, they have not been applicable to grayscale shape classification, particularly in three dimensions, such as classifying medical lesion volumes or galaxy images. The main cause of the difficulty is that the appearance manifolds of shape classes have entangled fibers in the Euclidean space in which they are embedded. This paper proposes a novel appearance-based method, called quotient appearance manifold mapping, to untangle the fibers of the appearance manifolds. First, the quotient manifold is constructed to untangle the fiber bundles of the appearance manifolds. Each point of the manifold is then mapped to the quotient submanifold to classify grayscale shapes. We demonstrate the method's effectiveness in grayscale 3D shape recognition using medical images.
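
To make the quotient idea concrete, the sketch below classifies on equivalence classes of appearance vectors rather than raw vectors; the nuisance group here (intensity gain and offset) and the canonical-representative construction are deliberately simple stand-ins for the paper's quotient appearance manifold mapping, not its actual formulation.

```python
# Quotient-space classification sketch: map each appearance vector to a
# canonical representative of its orbit under intensity gain/offset
# (zero mean, unit norm), then classify by nearest neighbor there, so
# grayscale changes cannot entangle different shape classes.
import numpy as np

def quotient_representative(x):
    """Canonical point of the orbit {a*x + b : a > 0, b in R}."""
    x = x - x.mean()
    return x / max(np.linalg.norm(x), 1e-12)

def classify(query, gallery, labels):
    """Nearest neighbor (max cosine similarity) in the quotient space."""
    q = quotient_representative(query)
    reps = np.array([quotient_representative(g) for g in gallery])
    return labels[int(np.argmax(reps @ q))]

# Usage: the query is an intensity-rescaled copy of gallery item 1, so it
# classifies correctly despite the grayscale change.
rng = np.random.default_rng(5)
gallery = rng.random((3, 100))
labels = np.array([0, 1, 2])
print(classify(3.0 * gallery[1] + 7.0, gallery, labels))  # -> 1
```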


Archive | 2010

Multi-level contextual learning of data

Dijia Wu; Le Lu; Jinbo Bi; Yoshihisa Shinagawa; Marcos Salganicoff

