Chuan-Yu Chang
National Cheng Kung University
Publications
Featured research published by Chuan-Yu Chang.
Image and Vision Computing | 2001
Chuan-Yu Chang; Pau-Choo Chung
Neural-network-based image techniques such as Hopfield neural networks have been proposed as an alternative approach for image segmentation and have demonstrated benefits over traditional algorithms. However, due to its architectural limitations, image segmentation using a traditional Hopfield neural network amounts to the same operation as thresholding the image histogram. With this technique, high-level contextual information cannot be incorporated into the segmentation procedure. As a result, although the traditional Hopfield neural network is capable of segmenting noiseless images, it lacks noise robustness. In this paper, an innovative Hopfield neural network, called the contextual-constraint-based Hopfield neural cube (CCBHNC), is proposed for image segmentation. The CCBHNC uses a three-dimensional architecture with pixel classification implemented along its third dimension. With the three-dimensional architecture, the network is capable of taking into account each pixel's features and its surrounding contextual information. Besides the network architecture, the CCBHNC also differs from the original Hopfield neural network in that a competitive winner-take-all mechanism is imposed on the evolution of the network. The winner-take-all mechanism adeptly precludes the need to determine values for the weighting factors of the hard constraints in the energy function while maintaining feasible results. The proposed CCBHNC approach for image segmentation has been compared with two existing methods. The simulation results indicate that the CCBHNC can produce more continuous and smoother images in comparison with the other methods.
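The contextual winner-take-all classification described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's exact energy-minimization formulation: the initial class centers, the blending weight `alpha`, and the competitive center update are all assumptions introduced for the example.

```python
import numpy as np

def wta_segment(image, centers, neighborhood=1, n_iter=10, alpha=0.5):
    """Winner-take-all pixel classification in the spirit of the CCBHNC:
    each pixel is assigned the class whose center best matches a blend of
    its own intensity and its neighborhood mean, so exactly one class
    "neuron" fires per pixel (the hard constraint is satisfied by
    construction, with no weighting factors to tune)."""
    img = image.astype(float)
    h, w = img.shape
    labels = np.zeros((h, w), dtype=int)
    k = 2 * neighborhood + 1
    for _ in range(n_iter):
        # Contextual term: mean intensity of the surrounding k-by-k window.
        padded = np.pad(img, neighborhood, mode="edge")
        context = np.zeros_like(img)
        for dy in range(k):
            for dx in range(k):
                context += padded[dy:dy + h, dx:dx + w]
        context /= k * k
        blended = alpha * img + (1 - alpha) * context
        # Winner-take-all: pick the closest class center per pixel.
        dists = np.stack([(blended - c) ** 2 for c in centers])
        labels = np.argmin(dists, axis=0)
        # Competitive update: move each center toward its members.
        for c_idx in range(len(centers)):
            mask = labels == c_idx
            if mask.any():
                centers[c_idx] = blended[mask].mean()
    return labels
```

Because the neighborhood mean enters each pixel's class decision, isolated noisy pixels tend to be pulled toward the label of their surroundings, which is the intuition behind the smoother segmentations reported.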
Broadband and Wireless Computing, Communication and Applications | 2010
Chuan-Yu Chang; Shang-Cheng Li; Pau-Choo Chung; Jui-Yi Kuo; Yung-Chin Tu
Skin analysis is one of the most important procedures before medical cosmetology. Most conventional skin analysis systems are semi-automatic and often require human intervention. In this study, an automatic facial skin defect detection approach is proposed. The system first detects the human face in the facial image. Based on the detected face, facial features are extracted to locate regions of interest. A pattern recognition approach is then applied to detect facial skin defects, such as spots and wrinkles, in the regions of interest. For each specific kind of defect, a classifier is designed to provide higher recognition performance. Using a few features extracted from the region of interest, the proposed approach can successfully detect the skin defects. Experimental results demonstrate the effectiveness of the proposed approach.
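The defect-detection step on a region of interest can be illustrated with a toy local-contrast rule. This is only a sketch: the paper trains a dedicated classifier per defect type, whereas the window size and threshold below are illustrative assumptions.

```python
import numpy as np

def detect_spots(roi, window=5, contrast_thresh=30):
    """Toy spot detector for a facial ROI: flag a pixel as a spot
    candidate when it is darker than its local mean by more than
    `contrast_thresh` grey levels."""
    img = roi.astype(float)
    h, w = img.shape
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")
    # Local mean over a window-by-window box around each pixel.
    local_mean = np.zeros_like(img)
    for dy in range(window):
        for dx in range(window):
            local_mean += padded[dy:dy + h, dx:dx + w]
    local_mean /= window * window
    # Spots are small dark blemishes relative to surrounding skin.
    return (local_mean - img) > contrast_thresh
```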
IEEE Transactions on Nuclear Science | 2002
Chuan-Yu Chang; Pau-Choo Chung; Ping-Hong Lai
Gadolinium (Gd)-enhanced magnetic resonance imaging (MRI) is widely used in the detection of recurrent nasal tumors. We have developed a spatiotemporal neural network (STNN) for identifying tumor and fibrosis in the nasal regions. A more accurate signal-time curve, called relative intensity change (RIC), is proposed as a representation of the temporal information in dynamic Gd-enhanced MR images. The RIC curves of different diseases are embedded into the STNN and stored in the synaptic weights of the input layer through learning. In addition, to enhance the STNN's capability to discriminate temporal information between tumor and fibrosis, the synaptic weights of its tap delays were obtained through a creative learning scheme that reinforces the most distinguishable features between tumor and fibrosis while inhibiting the indistinguishable ones. The outputs of the proposed STNN were indexed on a colormap in which red represents tumor and green represents fibrosis. The color-coded tumor/fibrosis areas are fused with the original MR image to facilitate visual interpretation. The experimental results show that the proposed method is able to detect abnormal tissues precisely.
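The RIC signal-time curve can be sketched as follows. The exact formula is an assumption for illustration: a common way to express relative intensity change is RIC(t) = (I(t) − I_pre) / I_pre, with I_pre the mean pre-contrast intensity over the first few frames of the dynamic sequence.

```python
import numpy as np

def relative_intensity_change(series, baseline_frames=1):
    """Relative intensity change (RIC) curve for one voxel or ROI across
    a dynamic Gd-enhanced MR sequence.  `series` holds the intensity at
    each acquisition time; the first `baseline_frames` frames are taken
    as the pre-contrast baseline (an assumed convention)."""
    s = np.asarray(series, dtype=float)
    i_pre = s[:baseline_frames].mean()
    # Normalizing by the baseline makes curves comparable across voxels
    # with different absolute signal levels.
    return (s - i_pre) / i_pre
```

Curves normalized this way could then serve as the temporal input patterns stored in the STNN's input-layer weights.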
Ambient Intelligence | 2012
Pau-Choo Chung; Bernadette Bouchon-Meunier; Chuan-Yu Chang
In psychology, emotion, which reflects the genuine inner feeling of a person experiencing encounters, is shown to be a major index in the evaluation of cognition, behavior, and social skills. Emotion is also one factor controlling learning efficiency in education, and one of the essential elements in realizing smart interaction for a computer. Thus emotional intelligence, defined as the capability to perceive, assess, and manage the emotions of one's self or of others, is attracting attention in psychological cognition, education, and machine intelligence for achieving ambient intelligence. Understanding a person's emotion can be achieved from several observations, including facial expression, voice expression, and physiological signals. How emotion is revealed is affected by culture and personal characteristics; detecting and understanding human emotion is therefore highly challenging. On the other hand, how and why emotion affects human mental states and reactions is still a mystery. All of these are major issues in realizing an emotionally intelligent smart environment. The goal of this special issue is to provide a forum that brings together experts from across disciplines to address the emerging topic of emotional intelligence. After rigorous review and careful revision, seven papers were selected for publication. Tan et al. applied two bipolar facial electromyography (EMG) channels over the corrugator supercilii and zygomaticus to differentiate the emotional states elicited by visual stimuli in the valence-arousal dimensions. Experimental results show that corrugator and zygomaticus EMG efficiently differentiated negative and positive emotions. Lee et al. developed a regularized discriminant analysis (RDA)-based boosting algorithm and applied it to facial emotion recognition. The small-sample-size and ill-posed problems suffered by QDA and LDA were resolved in the paper through a regularization technique. They also used a particle swarm optimization (PSO) algorithm to estimate optimal parameters for RDA. Lin et al. constructed an extensible lexicon and used semantic clues to analyze the emotions of sentences posted on the Plurk website; a support vector machine is applied to classify the emotions. Cerezo et al. proposed a facial affect recognizer that senses emotions from users' facial images. Five classifiers were integrated to identify emotions. In addition, a Kalman filtering technique was applied to ensure temporal consistency and increase robustness. Chi et al. investigate a clean-train/noisy-test scenario to simulate practical conditions with unknown noise sources. They extracted statistics of joint spectro-temporal modulation features from an auditory perceptual model to detect the emotional status of speech samples corrupted with white and babble noise at various SNR levels. Ana et al. built a virtual pet by using the Ortony, Clore and Collins (OCC) theory to implement a cognitive structure. The methodology starts from a Behavior Cognitive Task Analysis (BCTA) to elucidate the components necessary to simulate the behaviors and mental models of virtual pets. In particular, Fuzzy C-Means (FCM) is also proposed to map the interactions between elements in the emotion model. Fu et al. proposed a TV
Systems, Man, and Cybernetics | 2003
Pau-Choo Chung; Chuan-Yu Chang; Woei-Chyn Chu; Hsiu-Chen Lin
Compared to object-based registration, feature-based registration is much less complex. However, for feature-based registration to work, the two image stacks under consideration must have the same acquisition tilt angle and the same anatomical location, two requirements that are not always fulfilled. In this paper, we propose a technique that reconstructs two sets of medical images acquired with different acquisition angles and anatomical cross sections into one set of images of identical scanning orientation and positions. The spatial correlation information between the two image stacks is first extracted and is used to correct the tilt-angle and anatomical-position differences in the image stacks. Satisfactory reconstruction results are presented to demonstrate the effectiveness of the proposed method.
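The slice-position half of the reconstruction, resampling one stack onto another stack's slice positions, can be sketched as follows. This is an assumption-laden simplification: it handles only interpolation along the slice axis, while the paper's method also corrects tilt-angle differences, which would additionally require an oblique in-plane rotation.

```python
import numpy as np

def resample_stack(stack, src_positions, dst_positions):
    """Resample an image stack acquired at slice positions
    `src_positions` onto target positions `dst_positions` by per-pixel
    linear interpolation along the slice axis.  Positions must be
    increasing along the scan direction."""
    stack = np.asarray(stack, dtype=float)
    n, h, w = stack.shape
    flat = stack.reshape(n, -1)
    out = np.empty((len(dst_positions), h * w))
    # Interpolate each pixel's intensity profile across slices.
    for j in range(h * w):
        out[:, j] = np.interp(dst_positions, src_positions, flat[:, j])
    return out.reshape(len(dst_positions), h, w)
```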
Multimedia Technology for Asia Pacific Information Infrastructure | 1999
Chuan-Yu Chang; Pau-Choo Chung
Proposes a 3-D Hopfield neural network, the Contextual-Constraint-Based Hopfield Neural Cube (CCBHNC), which takes both each pixel's features and its surrounding contextual information into account for image segmentation, mimicking a high-level vision system. Unlike other neural networks, the CCBHNC extends the two-dimensional Hopfield neural network into a three-dimensional Hopfield neural cube so that it can easily take each pixel's surrounding contextual information into its network operation. Because the CCBHNC uses a high-level image segmentation model, disconnected fragments arising from tiny details or noise are effectively removed. Furthermore, the CCBHNC follows the competitive learning rule to update the neuron states, precluding the need to determine the values of the hard constraints in the energy function, as is usually required in a Hopfield neural network, and helping the energy function converge quickly. The simulation results indicate that the CCBHNC can produce more continuous, more intact, and smoother images in comparison with the other methods.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2005
I. Manousakas; L.R. Wan; Yong-Ren Pu; Chuan-Yu Chang; Shen-Min Liang
In extracorporeal shock wave lithotripsy (ESWL) and radiotherapy, real-time tracking of the position of renal stones or tumors is of great importance. When the treatment system incorporates many delay factors, the treated position and the expected position may differ significantly. In this study, linear prediction is used to examine whether future values of real-time tracking trajectories can be predicted accurately. The results presented here show that predicted values can be used for treatment targeting, compensating for the system's delays. Using up to the third future predicted value introduces less than 5% average error from the actual future positions.
International Conference of the IEEE Engineering in Medicine and Biology Society | 2000
Chuan-Yu Chang; Pau-Choo Chung; E-Liang Chen; Wen-Chen Huang; Ping-Hong Lai
The purpose of this paper is to develop an automatic diagnosis system for distinguishing between tumor and fibrosis in the nasal region. The proposed system is composed of a new model, relative intensity change (RIC), for point matching across the consecutive MR image sequence, and a spatiotemporal neural network (STNN) for distinguishing between tumor and fibrosis. A knowledge-based refinement process is then applied to extract the tumor/fibrosis regions. A color-coded representation of the different abnormal regions is displayed. The experimental results show that the proposed method is able to detect the abnormal tissues precisely.
Signal Processing Systems | 1999
Pau-Choo Chung; Chuan-Yu Chang; Woei-Chyn Chu; Hsiu-Chen Liu
In this paper, we propose a technique that reconstructs two sets of medical images acquired with different acquisition angles and anatomical cross sections into one set of images of identical scanning orientation and positions. The spatial correlation information between the two image stacks is first extracted and is used to correct the tilt-angle and anatomical-position differences found in the image stacks. Satisfactory reconstruction results are presented to demonstrate the effectiveness of the proposed method.
Journal of The Formosan Medical Association | 2001
Ping-Hong Lai; Jieh-Yuan Li; Chuan-Yu Chang; Ming-Ting Wu; Yuk-Keung Lo; Pau-Choo Chung