Ou-Yang Mang
National Chiao Tung University
Publications
Featured research published by Ou-Yang Mang.
Proceedings of SPIE | 2011
Wei-De Jeng; Ou-Yang Mang; Yu-Ta Chen; Ying-Yi Wu
This paper studies the illumination system of a ring-field capsule endoscope. It is difficult to obtain uniform illumination on the observed object because the luminous intensity of an LED changes with angular displacement, as described by its luminous intensity distribution curve. We therefore use the optical design software Advanced Systems Analysis Program (ASAP) to build a photometric model for the optimal design of the LED illumination system in the ring-field capsule endoscope. With the optimized design, the illumination uniformity of the ring-field capsule endoscope improves from 0.128 to 0.603, which greatly improves the image quality of the device.
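The abstract does not state which uniformity metric is used; a common choice is the ratio of minimum to mean illuminance over the observed field. The Python sketch below evaluates that ratio from a sampled illuminance map (for example, one exported from an ASAP detector plane); the metric definition and the toy patterns are assumptions, not values from the paper.

```python
import numpy as np

def illumination_uniformity(illuminance):
    """Uniformity of an illuminance map, defined here as min / mean.

    `illuminance` is a 2-D array of illuminance samples on the observed
    object. Other definitions (min/max, 1 - std/mean) are also common;
    the paper does not state which one it uses.
    """
    e = np.asarray(illuminance, dtype=float)
    return e.min() / e.mean()

# Toy example: a strongly peaked LED pattern vs. a flattened one.
x = np.linspace(-1.0, 1.0, 101)
xx, yy = np.meshgrid(x, x)
peaked = np.cos(np.clip(np.hypot(xx, yy), 0, 1) * np.pi / 2) ** 4 + 0.05
flattened = 0.6 + 0.4 * np.cos(np.clip(np.hypot(xx, yy), 0, 1) * np.pi / 2)

print(illumination_uniformity(peaked))     # low uniformity
print(illumination_uniformity(flattened))  # higher uniformity
```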
Proceedings of SPIE | 2011
Yao Fang Hsieh; Ou-Yang Mang; Jin Chern Chiou; Yung Jiun Lin; Ming Hsui Tsai; Da Tian Bau; Chang Fang Chiu; Guan Chin Teseng; Nai Wen Chang; Wen Chung Kao; Shun De Wu
Until now, cancer has been examined by diagnosing the pathological changes of a tumor. If cancer screening could identify a tumor before the cells undergo pathological changes, the cure rate of cancer would increase. This research develops a human-machine interface for a hyperspectral microscope. The hyperspectral microscope can scan a specific area of a cell and record spectral and intensity data, which are helpful for diagnosing tumors. This research aims to develop a new system and a human-machine interface to control the hyperspectral microscope. The interface can control the motor speed, the hyperspectral exposure time, and real-time focusing; capture fluorescence images; and record spectral intensity and position data.
Proceedings of SPIE | 2016
Ou-Yang Mang; Mei-Lan Ko; Yi-Chun Tsai; Jin-Chern Chiou; Ting-Wei Huang
The pupil response to light can reflect various diseases related to physiological health. Pupillary abnormalities may be caused by autonomic neuropathy, glaucoma, diabetes, genetic diseases, and high myopia. In the early stage of neuropathy, the condition is often asymptomatic and difficult for ophthalmologists to detect. In addition, the position of the injured nerve can lead to an unsynchronized pupil response between the two eyes. In this study, we design a pupillometer that measures the binocular pupil response simultaneously. It uses LEDs of different wavelengths, such as white, red, green, and blue, to stimulate the pupil and records the process. The pupillometer therefore mainly contains two systems. One is the image acquisition system, which uses two camera modules with the same external trigger signal to capture images of both pupils simultaneously. The other is the illumination system, which uses boost converter ICs and LED driver ICs to supply constant current to the LEDs so that the luminance is consistent across experiments and the experimental error is reduced. Furthermore, four infrared LEDs are arranged near the stimulating LEDs to illuminate the eyes and increase the image contrast for image processing. In our design, we successfully implement synchronized image acquisition at a sampling rate of 30 fps and a stable illumination system for precise measurement.
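The paper's synchronization relies on a shared hardware trigger and dedicated LED driver ICs, which cannot be reproduced in a short snippet; as a software-only sketch, the OpenCV code below (assuming device indices 0 and 1 and cameras that accept a 30 fps setting) latches both sensors with grab() before decoding either frame, which is the closest a purely software loop gets to simultaneous binocular capture.

```python
import cv2

# Illustrative sketch only: the paper's system triggers both camera modules
# with a shared hardware signal; here we approximate that in software by
# calling grab() on both devices back-to-back before decoding either frame.
LEFT_ID, RIGHT_ID = 0, 1            # assumed device indices
left = cv2.VideoCapture(LEFT_ID)
right = cv2.VideoCapture(RIGHT_ID)
for cap in (left, right):
    cap.set(cv2.CAP_PROP_FPS, 30)   # target 30 fps, as in the paper

frames = []
for _ in range(300):                # roughly 10 s of binocular recording
    ok_l = left.grab()              # latch both sensors first...
    ok_r = right.grab()
    if not (ok_l and ok_r):
        break
    _, img_l = left.retrieve()      # ...then decode the latched frames
    _, img_r = right.retrieve()
    frames.append((img_l, img_r))

left.release()
right.release()
```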
Proceedings of SPIE | 2016
Ting Wei Huang; Nai Lun Cheng; Ming Hsui Tsai; Jin Chern Chiou; Ou-Yang Mang
Oral cancer is a serious and growing problem in many developing and developed countries. Simple oral visual screening by clinicians can prevent 37,000 oral cancer deaths annually worldwide. However, conventional oral examination, with visual inspection and palpation of oral lesions, is not an objective and reliable approach for oral cancer diagnosis; it may delay hospital treatment for oral cancer patients or allow the cancer to progress out of control into a late stage. Therefore, a device for oral cancer detection is developed for early diagnosis and treatment. A portable LED-induced autofluorescence (LIAF) imager was developed by our group. It contains multiple wavelengths of LED excitation light and an eight-channel rotary filter ring to capture ex-vivo oral tissue autofluorescence images. The advantages of the LIAF imager over other oral cancer diagnostic devices are its L-shaped probe, which fixes the object distance, blocks the effect of ambient light, and allows observation of blind spots deep in the mouth between the gingiva and the lining of the mouth. In addition, the multiple LED excitation wavelengths can induce multiple autofluorescence signals, and the eight-channel rotary filter ring lets the imager detect spectral images in multiple narrow bands. The prototype of the portable LIAF imager has been applied in clinical trials for several cases in Taiwan, and the clinical-trial images under specific excitation show significant differences between normal tissue and oral lesion tissue in these cases.
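The abstract does not describe a driver API for the LED bank or the rotary filter ring, so the sketch below uses hypothetical stand-in classes (FilterWheel, LedBank, Camera) purely to illustrate the acquisition order implied by the text: for each excitation wavelength and each of the eight filter channels, one autofluorescence image is captured.

```python
from itertools import product

class FilterWheel:
    """Stand-in for the eight-channel rotary filter ring driver."""
    def move_to(self, channel: int):
        print(f"filter wheel -> channel {channel}")

class LedBank:
    """Stand-in for the multi-wavelength excitation LED driver."""
    def excite(self, wavelength_nm: int):
        print(f"LED excitation -> {wavelength_nm} nm")

class Camera:
    """Stand-in for the imager; returns a placeholder frame."""
    def capture(self):
        return "frame"

def acquire_liaf_stack(wavelengths, channels=range(8)):
    """Capture one autofluorescence image per (excitation, filter) pair."""
    wheel, leds, cam = FilterWheel(), LedBank(), Camera()
    stack = {}
    for wl, ch in product(wavelengths, channels):
        leds.excite(wl)
        wheel.move_to(ch)
        stack[(wl, ch)] = cam.capture()
    return stack

# Example excitation wavelengths (illustrative only; the paper does not
# list its exact LED wavelengths in this abstract).
stack = acquire_liaf_stack(wavelengths=[365, 405, 450])
```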
Proceedings of SPIE | 2015
Ting-Wei Huang; Yu-Cheng Lee; Nai-Lun Cheng; Yung-Jhe Yan; Hou-Chi Chiang; Jin-Chern Chiou; Ou-Yang Mang
The difference in spectral distribution between lesions of epithelial cells and normal cells under excited fluorescence is one method for cancer diagnosis. In our previous work, we developed a portable LED-induced autofluorescence (LIAF) imager containing multiple wavelengths of LED excitation light and multiple filters to capture ex-vivo oral tissue autofluorescence images. Our portable system for oral cancer detection has a probe in front of the lens to fix the object distance. The original probe is cone shaped, which makes it inconvenient for doctors to capture oral images at an appropriate viewing angle in front of the probe. Therefore, an L-shaped probe containing a mirror is proposed so that doctors can capture images at the right angle without requiring subjects to open their mouths forcibly. In addition, a glass plate is placed in the probe to prevent liquid from entering the device, but light reflected directly from the glass plate causes light spots in the images. We set the glass plate in front of the LEDs to avoid these light spots: when the distance between the glass plate and the LED module plane is less than a critical value, the light spots caused by the glass plate can be prevented. Experiments show that images captured with the new probe, in which the glass plate is placed at the back end of the probe, contain no light spots.
Proceedings of SPIE | 2015
Hou-Chi Chiang; Yu-Hsiang Tsai; Yung-Jhe Yan; Ting-Wei Huang; Ou-Yang Mang
In recent years, the transparent display has become an emerging topic in display technology, with applications in mobile devices, shopping and advertising windows, and other fields. The electrowetting display (EWD) is one kind of potential transparent display technology, with the advantages of high transmittance, fast response time, high contrast, and rich color from a pigment-based oil system. In the mass production of electrowetting displays, oil defects should be found by an automated optical inspection (AOI) detection system, which is useful for determining panel defects for quality control. In this research, we propose a mechanism for an AOI detection system that detects different kinds of oil defects, whether caused by oil overflow or by material deterioration after oil coating or driving. We tested the mechanism on a 6-inch electrowetting display panel from ITRI, using an Epson V750 scanner at 1200 dpi resolution. Two AOI algorithms were developed: a high-speed method and a high-precision method. The high-precision method can successfully detect oil jumping or non-recovered oil. This AOI detection mechanism can be used to evaluate oil uniformity in the EWD panel process and, in the future, for quality control in mass production of panels.
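The high-speed and high-precision algorithms themselves are not detailed in the abstract; purely as a generic illustration, the sketch below flags tiles of a scanned panel image whose mean intensity deviates from the panel median, which is one simple way an AOI step could localize oil-defect candidates. The block size and tolerance are assumptions, not parameters from the paper.

```python
import numpy as np

def flag_oil_defects(gray, block=32, rel_tol=0.15):
    """Illustrative oil-defect flagging for a scanned EWD panel image.

    `gray` is a 2-D array of pixel intensities (e.g. from a 1200 dpi scan).
    Each `block` x `block` tile is compared against the median tile level;
    tiles deviating by more than `rel_tol` are flagged as possible oil
    defects (overflow, non-recovered oil, ...). This is a generic sketch,
    not the high-speed / high-precision algorithms from the paper.
    """
    g = np.asarray(gray, dtype=float)
    h, w = (g.shape[0] // block) * block, (g.shape[1] // block) * block
    tiles = g[:h, :w].reshape(h // block, block, w // block, block)
    means = tiles.mean(axis=(1, 3))
    ref = np.median(means)
    return np.abs(means - ref) > rel_tol * ref   # boolean defect map per tile

# Toy panel with one darker (defective) region.
panel = np.full((256, 256), 200.0)
panel[64:96, 128:192] = 120.0
print(flag_oil_defects(panel).sum(), "suspicious tiles")
```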
Proceedings of SPIE | 2012
Yao Fang Hsieh; Chih Hsien Chen; Ou-Yang Mang; Jeng Ren Duann; Jin Chern Chiou; Shun De Wu; Yong Jiun Lin; Ming Hsui Tsai; Nai Wen Chang
Currently, cancer is examined by diagnosing the pathological changes of a tumor. If cancer screening could identify a tumor before the cells undergo pathological changes, the cure rate of cancer would increase. This research develops a human-machine interface for a hyperspectral microscope. The hyperspectral microscope can scan a specific area of a cell and record spectral and intensity data, which are helpful for diagnosing tumors. This study finds that the hyperspectral images have two higher-intensity points, at 550 nm and 700 nm, and one lower point at 640 nm between them. To analyze the hyperspectral images, the intensity at the 550 nm peak is divided by the intensity at the 700 nm peak. Finally, we determine the detection accuracy using a Gaussian distribution. The accuracy of detecting normal cells reaches 89%, and the accuracy for cancer cells reaches 81%.
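A minimal sketch of the described decision rule, assuming it reduces to a likelihood comparison: form the ratio of the intensities at the 550 nm and 700 nm peaks and assign the class whose Gaussian model of that ratio gives the higher likelihood. The class means and standard deviations below are placeholders, not values from the paper.

```python
import numpy as np

def peak_ratio(wavelengths_nm, spectrum):
    """Intensity at the 550 nm peak divided by intensity at the 700 nm peak."""
    wl = np.asarray(wavelengths_nm, dtype=float)
    s = np.asarray(spectrum, dtype=float)
    i550 = s[np.argmin(np.abs(wl - 550.0))]
    i700 = s[np.argmin(np.abs(wl - 700.0))]
    return i550 / i700

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def classify(ratio, normal_stats, cancer_stats):
    """Pick the class whose Gaussian ratio model gives the higher likelihood.

    `normal_stats` and `cancer_stats` are (mean, std) of the ratio feature
    estimated from training cells; the values used below are placeholders,
    not the statistics reported in the paper.
    """
    p_normal = gaussian_pdf(ratio, *normal_stats)
    p_cancer = gaussian_pdf(ratio, *cancer_stats)
    return "normal" if p_normal >= p_cancer else "cancer"

wl = np.arange(400, 801, 10)
spectrum = np.interp(wl, [400, 550, 640, 700, 800], [0.2, 1.0, 0.4, 0.8, 0.3])
r = peak_ratio(wl, spectrum)
print(r, classify(r, normal_stats=(1.3, 0.15), cancer_stats=(1.0, 0.15)))
```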
Proceedings of SPIE | 2011
Wei-De Jeng; Ou-Yang Mang; Chien-Cheng Lai; Hsien-Ming Wu
This article mainly focuses on image processing for the radial imaging capsule endoscope (RICE). First, the RICE was used to capture images; in the experiment, intestines taken from a pig were imaged. The images captured by the RICE were blurred because the RICE has aberration problems at the image center, and low illumination uniformity further degrades the image quality. Image processing can be used to mitigate these problems. Images captured at different times are connected using the Pearson correlation coefficient algorithm, and color temperature mapping is used to improve the discontinuity in the connection region.
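As an illustration of the correlation step, the sketch below scores candidate column overlaps between two consecutive frames with the Pearson correlation coefficient and keeps the best one; the paper's full stitching pipeline and color temperature mapping are not reproduced here.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two equally sized patches."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_overlap(prev, curr, max_shift=40):
    """Find the horizontal overlap (in columns) between consecutive frames
    that maximizes the Pearson correlation. Illustrative sketch only; the
    paper's stitching and color-temperature correction are not shown.
    """
    scores = {}
    for k in range(5, max_shift + 1):
        scores[k] = pearson(prev[:, -k:], curr[:, :k])
    return max(scores, key=scores.get)

# Toy frames: `curr` repeats the last 20 columns of `prev` on its left edge.
rng = np.random.default_rng(0)
prev = rng.random((64, 100))
curr = np.hstack([prev[:, -20:], rng.random((64, 80))])
print(best_overlap(prev, curr))   # expected: 20
```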
Proceedings of SPIE | 2010
Yu-Ta Chen; Shau-Wei Hsu; Bao-Jen Pong; Ou-Yang Mang
Several estimative factors of image quality have been developed to approach human perception objectively [1-3]. We propose taking systematically distorted videos into these estimative factors and analyzing the relationships between them. Several types of noise at several noise weights were applied to the COSME standard video, and the image quality estimative factors MSE (Mean Square Error), SSIM (Structural SIMilarity), CW-SSIM (Complex Wavelet SSIM), PQR (Picture Quality Ratings), and DVQ (Digital Video Quality) were verified. The noise types include white noise, blur, luminance change, and others. In the results, the CW-SSIM index has higher sensitivity to image structure and can rank distorted videos that share the same noise type at different levels. PQR behaves similarly to CW-SSIM, but its rating distribution is banded together; the SSIM index divides the noise types into two groups, and DVQ has a linear relationship with MSE on a logarithmic scale.
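Of the five factors, MSE and SSIM are straightforward to evaluate per frame; the sketch below (assuming scikit-image for SSIM) scores a toy frame under two white-noise levels, and in practice such per-frame scores would be averaged over the whole video. CW-SSIM, PQR, and DVQ are not reproduced here.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(ref, dist):
    """Mean Square Error between a reference frame and a distorted frame."""
    ref = np.asarray(ref, dtype=float)
    dist = np.asarray(dist, dtype=float)
    return float(np.mean((ref - dist) ** 2))

# Toy frame plus two distortion levels of white noise.
rng = np.random.default_rng(1)
ref = rng.random((128, 128))
for sigma in (0.02, 0.10):
    dist = np.clip(ref + rng.normal(0, sigma, ref.shape), 0, 1)
    score_mse = mse(ref, dist)
    score_ssim = structural_similarity(ref, dist, data_range=1.0)
    print(f"sigma={sigma:.2f}  MSE={score_mse:.5f}  SSIM={score_ssim:.3f}")
```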
Proceedings of SPIE | 2010
Yao-Fang Hsieh; Ting-Wei Huang; Ou-Yang Mang; Yi-Ting Kuo
Generally, color measurement instruments can be divided into spectrophotometers and color meters. The former use a prism or grating to separate the light and can achieve high accuracy, but at a higher price; the latter use color filters and provide no spectral information. This article establishes a color measurement system and uses an eigen-spectrum method with two light sources to calibrate the spectrum. The measurement system includes tristimulus sensors made with color filters, and a tungsten lamp and a xenon lamp are used as light sources. The advantages of this measurement system are higher accuracy and lower cost. The eigen-spectrum method can calibrate the spectrum with few eigenvectors. The method uses singular value decomposition to obtain the basis functions of a spectrum set obtained by measurement. Because the spectrum set covers 380 nm to 780 nm, an eigenvector per nanometer from 380 nm to 780 nm can be obtained, and in general the color spectrum can be reconstructed with fewer eigenvectors. The color difference in L*a*b* color space is reduced from 31.2398 to 2.48841, and the spectral information is reconstructed.
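A minimal sketch of the SVD step, assuming the basis is built from a mean-subtracted set of measured spectra sampled at 1 nm from 380 nm to 780 nm: keep a few eigenvectors and reconstruct an unseen spectrum from its projection coefficients. The training spectra below are synthetic stand-ins for measured color samples.

```python
import numpy as np

wl = np.arange(380, 781)                      # 380-780 nm, 1 nm steps

# Synthetic "measured" spectrum set (rows = samples); in the paper this
# would come from spectrophotometer measurements of real color samples.
rng = np.random.default_rng(2)
centers = rng.uniform(400, 750, size=(200, 1))
widths = rng.uniform(30, 120, size=(200, 1))
training = np.exp(-0.5 * ((wl - centers) / widths) ** 2)

# Eigen-spectrum basis from the singular value decomposition.
mean_spec = training.mean(axis=0)
_, _, vt = np.linalg.svd(training - mean_spec, full_matrices=False)
k = 6                                         # keep only a few eigenvectors
basis = vt[:k]                                # shape (k, 401)

# Reconstruct an unseen spectrum from its k projection coefficients.
target = np.exp(-0.5 * ((wl - 560.0) / 60.0) ** 2)
coeffs = basis @ (target - mean_spec)
reconstructed = mean_spec + coeffs @ basis
print("RMS error:", np.sqrt(np.mean((reconstructed - target) ** 2)))
```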