
Publication


Featured research published by Chun-Nan Chou.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2015

Transfer representation learning for medical image analysis

Chuen-Kai Shie; Chung-Hisang Chuang; Chun-Nan Chou; Meng-Hsi Wu; Edward Y. Chang

There are two major challenges to overcome when developing a classifier for automatic disease diagnosis. First, the amount of labeled medical data is typically very limited, so a classifier cannot be effectively trained to attain high disease-detection accuracy. Second, medical domain knowledge is required to identify representative features in the data for detecting a target disease, and most computer scientists and statisticians lack such domain knowledge. In this work, we show that employing transfer learning can remedy both problems. We use Otitis Media (OM) as our case study. Instead of using domain knowledge to extract features from labeled OM images, we construct features from a dataset that is entirely irrelevant to OM. More specifically, we first learn a codebook in an unsupervised way from 15 million images collected from ImageNet. The codebook captures what the encoders consider to be the fundamental elements of those 15 million images. We then encode OM images using the codebook, obtaining a weighting vector for each OM image. Using these weighting vectors as the feature vectors of the OM images, we employ a traditional supervised learning algorithm to train an OM classifier. The achieved detection accuracy is 88.5% (89.63% sensitivity and 86.9% specificity), markedly higher than all previous attempts, which relied on domain experts to extract features.
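The pipeline the abstract describes — encode images against a codebook learned on unrelated data, then train a conventional classifier on the resulting weighting vectors — can be sketched as follows. Everything here is a toy stand-in: random codewords replace the paper's ImageNet-learned codebook, soft-assignment pooling is an assumed encoder, synthetic two-class arrays replace OM images, and a nearest-centroid rule replaces the paper's supervised learner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the codebook: the paper learns it unsupervised from
# 15 million ImageNet images; random codewords are used here instead.
n_codewords, dim = 32, 64
codebook = rng.normal(size=(n_codewords, dim))

def encode(patch_features):
    """Encode an image (a set of patch descriptors) into one weighting
    vector over the codebook via soft assignment plus mean pooling
    (an assumed encoder; the paper's may differ)."""
    sims = patch_features @ codebook.T                  # (patches, codewords)
    w = np.exp(sims - sims.max(axis=1, keepdims=True))  # stable softmax
    w /= w.sum(axis=1, keepdims=True)
    return w.mean(axis=0)                               # pooled weighting vector

def make_image(label):
    """Synthetic two-class 'image': patch statistics shift with the label."""
    shift = 0.5 if label == 1 else -0.5
    return rng.normal(loc=shift, size=(20, dim))

labels = np.array([0, 1] * 50)
X = np.stack([encode(make_image(y)) for y in labels])

# Traditional supervised learner on the weighting vectors; a
# nearest-centroid classifier stands in for the paper's choice.
c0 = X[labels == 0].mean(axis=0)
c1 = X[labels == 1].mean(axis=0)
pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
accuracy = (pred == labels).mean()
```

The point of the sketch is the division of labor: the encoder is fixed and label-free, and only the final, cheap classifier ever sees the small labeled set.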


Computer Vision and Pattern Recognition | 2017

CLKN: Cascaded Lucas-Kanade Networks for Image Alignment

Che-Han Chang; Chun-Nan Chou; Edward Y. Chang

This paper proposes a data-driven approach for image alignment. Our main contribution is a novel network architecture that combines the strengths of convolutional neural networks (CNNs) and the Lucas-Kanade algorithm. The main component of this architecture is a Lucas-Kanade layer that performs the inverse compositional algorithm on convolutional feature maps. To train our network, we develop a cascaded feature learning method that incorporates the coarse-to-fine strategy into the training process. This method learns a pyramid representation of convolutional features in a cascaded manner and yields a cascaded network that performs coarse-to-fine alignment on the feature pyramids. We apply our model to the task of homography estimation, and perform training and evaluation on a large labeled dataset generated from the MS-COCO dataset. Experimental results show that the proposed approach significantly outperforms the other methods.
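The core computation inside a Lucas-Kanade layer — a Gauss-Newton solve built from template gradients — can be illustrated on the simplest possible case. The sketch below recovers a small 2-D translation between two single-channel arrays; the paper's layer instead runs the inverse compositional algorithm with a homography warp on multi-channel convolutional feature maps, so treat this as a didactic reduction, not the authors' method.

```python
import numpy as np

def lk_translation(template, image):
    """One Lucas-Kanade least-squares step recovering a small translation
    t = (tx, ty) such that image(x) ~= template(x - t).
    First-order model: image - template ~= -grad(template) . t."""
    gy, gx = np.gradient(template)                  # row (y) and column (x) gradients
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)  # Jacobian w.r.t. (tx, ty)
    e = (image - template).ravel()                  # residual
    return -np.linalg.solve(J.T @ J, J.T @ e)       # Gauss-Newton solve

# Demo: a Gaussian bump shifted by (0.3, -0.2) pixels.
xx, yy = np.meshgrid(np.arange(32.0), np.arange(32.0))
bump = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 3.0 ** 2))
t = lk_translation(bump(16, 16), bump(16.3, 15.8))
```

Cascading such steps coarse-to-fine, with learned feature maps in place of raw pixels, is the paper's central idea.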


Advanced Data Mining and Applications | 2017

Distributed Training Large-Scale Deep Architectures

Shang-Xuan Zou; Chun-Yen Chen; Jui-Lin Wu; Chun-Nan Chou; Chia-Chin Tsao; Kuan-Chieh Tung; Ting-Wei Lin; Cheng-lung Sung; Edward Y. Chang

The scale of data and the scale of computation infrastructures together enable the current deep learning renaissance. However, training large-scale deep architectures demands both algorithmic improvement and careful system configuration. In this paper, we focus on a system approach to speeding up large-scale training. Via lessons learned from our routine benchmarking effort, we first identify the bottlenecks and overheads that hinder data parallelism. We then devise guidelines that help practitioners configure an effective system and fine-tune parameters to achieve the desired speedup. Specifically, we develop a procedure for setting the minibatch size and choosing computation algorithms. We also derive lemmas for determining the quantity of key components, such as the number of GPUs and parameter servers. Experiments and examples show that these guidelines effectively speed up large-scale deep learning training.
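The flavor of such configuration guidelines can be shown with a toy cost model. This is entirely an assumption for illustration, not the paper's lemmas: per-step time is compute work divided across GPUs plus a communication term that grows with GPU count, and the guideline picks the count that minimizes it.

```python
def step_time(n_gpus, t_compute=1.0, t_comm_per_gpu=0.04):
    """Toy per-iteration cost model for data parallelism: compute splits
    across GPUs while gradient-exchange cost grows with GPU count.
    The constants and the linear communication model are assumptions."""
    return t_compute / n_gpus + t_comm_per_gpu * n_gpus

def best_gpu_count(max_gpus=64, **kwargs):
    """Smallest GPU count minimizing the modeled step time."""
    return min(range(1, max_gpus + 1), key=lambda n: step_time(n, **kwargs))

n = best_gpu_count()                   # past this point, adding GPUs
speedup = step_time(1) / step_time(n)  # slows each step down
```

The qualitative lesson matches the paper's framing: beyond some GPU count the communication overhead dominates, so the optimal configuration is finite and worth computing rather than guessing.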


Proceedings of the 2nd International Workshop on Multimedia for Personal Health and Health Care | 2017

Artificial Intelligence in XPRIZE DeepQ Tricorder

Edward Y. Chang; Meng-Hsi Wu; Kai-Fu Tang; Hao-Cheng Kao; Chun-Nan Chou

The DeepQ tricorder device, developed by HTC from 2013 to 2016, was entered in the Qualcomm Tricorder XPRIZE competition and awarded the second prize in April 2017. This paper presents DeepQ's three modules powered by artificial intelligence: symptom checker, optical sense, and vital sense. We describe both their initial design and ongoing enhancements.


arXiv: Learning | 2018

Backward Reduction of CNN Models with Information Flow Analysis.

Yu-Hsun Lin; Chun-Nan Chou; Edward Y. Chang


arXiv: Learning | 2018

MBS: Macroblock Scaling for CNN Model Reduction.

Yu-Hsun Lin; Chun-Nan Chou; Edward Y. Chang


arXiv: Learning | 2018

BRIEF: Backward Reduction of CNNs with Information Flow Analysis.

Yu-Hsun Lin; Chun-Nan Chou; Edward Y. Chang


arXiv: Learning | 2018

BDA-PCH: Block-Diagonal Approximation of Positive-Curvature Hessian for Training Neural Networks.

Sheng-Wei Chen; Chun-Nan Chou; Edward Y. Chang


Archive | 2018

MEDICAL SYSTEM AND METHOD FOR PROVIDING MEDICAL PREDICTION

Kai-Fu Tang; Hao-Cheng Kao; Chun-Nan Chou; Edward Y. Chang; Chih-wei Cheng; Ting-jung Chang; Shan-yi Yu; Tsung-hsiang Liu; Cheng-lung Sung; Chieh-hsin Yeh


arXiv: Computer Vision and Pattern Recognition | 2017

Representation Learning on Large and Small Data.

Chun-Nan Chou; Chuen-Kai Shie; Fu-Chieh Chang; Jocelyn Chang; Edward Y. Chang

