Che-Yen Wen
Central Police University
Publications
Featured research published by Che-Yen Wen.
Textile Research Journal | 2001
Che-Yen Wen; Shih-Hsuan Chiu; Wei-Sheng Hsu; Gea-Hau Hsu
Defect segmentation of complicated textures is a challenging problem in automatic inspection. In this paper, we use the wavelet transform (WT) and a co-occurrence matrix (CM) to extract features of texture images and then use those features to locate defects on textile fabrics. From the experimental results, we obtain a 92% accuracy rate in determining whether an inspected image contains defects and an 84% accuracy rate in locating the defect position within a defective image. We also find that the method's performance is invariant under geometric transformation. The method can also be applied to automatic surface defect inspection of other materials, such as wood and metal.
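The exact feature set is not specified beyond "wavelet transform and co-occurrence matrix", so the sketch below is only a minimal illustration of that combination, assuming PyWavelets and scikit-image with arbitrary parameter choices (Haar wavelet, 16 gray levels, unit pixel distance):

```python
# Illustrative sketch only: co-occurrence statistics computed on wavelet
# detail subbands of a grayscale texture patch. Library and parameter
# choices are assumptions, not the authors' implementation.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def wavelet_cm_features(patch, wavelet="haar", levels=16):
    """One-level 2-D DWT, then contrast/energy/homogeneity of the
    co-occurrence matrix of each detail subband."""
    cA, (cH, cV, cD) = pywt.dwt2(patch.astype(float), wavelet)
    feats = []
    for band in (cH, cV, cD):
        # Quantize the subband to a small number of gray levels for the CM.
        lo, hi = band.min(), band.max()
        q = ((band - lo) / (hi - lo + 1e-12) * (levels - 1)).astype(np.uint8)
        cm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                          levels=levels, symmetric=True, normed=True)
        for prop in ("contrast", "energy", "homogeneity"):
            feats.extend(graycoprops(cm, prop).ravel())
    return np.array(feats)
```

A detector built on such features could, for example, compare each inspected patch against feature statistics learned from defect-free fabric.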
Textile Research Journal | 2002
Shih-Hsuan Chiu; Shen Chou; Jiun-Jian Liaw; Che-Yen Wen
When automatically inspecting textured surface defects, the most important step is segmenting the defects from the background. For complicated textures, however, defect segmentation is still a challenging problem. In this paper, we use a Fourier-domain maximum likelihood estimator (FDMLE) based on the fractional Brownian motion (FBM) model to inspect surface defects of textile fabrics. The experiments show good defect segmentation results, and the method's performance is invariant under geometric transformation.
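The FDMLE itself is not reproduced here; the following is only a minimal windowed-segmentation scaffold in the same spirit (estimate a texture statistic per window and flag outliers), with a crude increment-variance roughness proxy standing in for the FBM-model estimate:

```python
# Minimal segmentation scaffold, not the FDMLE: flag windows whose
# roughness statistic deviates from the (assumed mostly defect-free)
# background distribution.
import numpy as np

def roughness(window):
    """Crude roughness proxy: variance of horizontal and vertical increments."""
    dx = np.diff(window, axis=1)
    dy = np.diff(window, axis=0)
    return dx.var() + dy.var()

def segment_defects(image, win=32, k=3.0):
    """Boolean map of windows whose roughness deviates by more than
    k standard deviations from the overall window statistics."""
    h, w = image.shape
    rows, cols = h // win, w // win
    stats = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            stats[i, j] = roughness(image[i*win:(i+1)*win, j*win:(j+1)*win])
    mu, sigma = stats.mean(), stats.std()
    return np.abs(stats - mu) > k * sigma
```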
The Journal of Allergy and Clinical Immunology | 2009
Chien-Han Chen; Yu-Tsan Lin; Che-Yen Wen; Li-Chieh Wang; Kuo-Hung Lin; Shih-Hsuan Chiu; Yao-Hsu Yang; Jyh-Hong Lee; Bor-Luen Chiang
BACKGROUND Knowledge of allergic shiners is extremely limited, and no established tool is available to quantify them. OBJECTIVES We sought to determine the significance and changeability of allergic shiners through our newly developed computerized method. METHODS We developed a novel computerized method to measure allergic shiners and enrolled a cohort of children with or without allergic rhinitis. Children with allergic rhinitis were prospectively assessed. A standardized digital photograph was taken during each visit, and a modified Pediatric Rhinoconjunctivitis Quality of Life Questionnaire was completed. Subject global assessment for nose symptoms and subject global assessment for eye symptoms (SGAE) were self-recorded daily. RESULTS We included 126 children with allergic rhinitis and 123 healthy control subjects. One hundred three (82%) participants with allergic rhinitis completed at least 4 prospective assessments. Shiners were darker (P < .001) and larger (P < .001) in children with allergic rhinitis. Darkness and sizes of allergic shiners were paradoxically inversely correlated (P = .02). Darkness of allergic shiners positively correlated with the duration of allergic rhinitis, practical problem scores, and SGAE values (P = .02, P = .004, and P = .002, respectively), but sizes of allergic shiners did not. Shiners were found to be darker in children with scores of eye symptoms of greater than 6, scores of practical problems of greater than 5, and SGAE values of greater than 0 (P = .02, P < .001, and P = .003, respectively), whereas shiners were larger in children with scores of other symptoms of greater than 9 and activity limitations of greater than 4 (P = .02 and P = .002, respectively). CONCLUSION Computer-analyzed allergic shiners correlate with the chronicity and severity of allergic rhinitis.
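For illustration only: the paper's computerized method is not described in this abstract, so the snippet below merely shows one hypothetical way to express a "darkness" index from a photograph, assuming manually supplied shiner and cheek regions (the ROIs, the luminance conversion, and the index itself are all assumptions):

```python
# Hypothetical darkness index: mean-luminance difference between a cheek
# reference patch and the infraorbital (shiner) patch. Not the authors'
# method; ROIs must be chosen by the user.
import numpy as np
from PIL import Image

def darkness_index(photo_path, shiner_box, cheek_box):
    """Boxes are (left, upper, right, lower) pixel coordinates.
    Larger values mean a darker shiner relative to the cheek."""
    gray = np.asarray(Image.open(photo_path).convert("L"), dtype=float)
    shiner = gray[shiner_box[1]:shiner_box[3], shiner_box[0]:shiner_box[2]]
    cheek = gray[cheek_box[1]:cheek_box[3], cheek_box[0]:cheek_box[2]]
    return cheek.mean() - shiner.mean()
```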
Textile Research Journal | 2001
Shih-Hsuan Chiu; Hung-Ming Chen; Jyh-Yeow Chen; Che-Yen Wen
Traditionally, the quality grades of false twist yarn (FTY) packages are classified by human inspection, but the results may be affected by personal, subjective factors. In this paper, we extract the defect features of FTY packages, such as size, discoloration, formation, and cross-over, with image processing technology and use neural networks to classify the quality grades of FTY packages. From the experimental results, we obtain a classification rate of about 90%.
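A minimal sketch of the pipeline's shape (feature vector in, grade out). The four features named in the abstract are taken as given; scikit-learn's MLPClassifier and random placeholder data stand in for the actual image-derived features and network used in the paper:

```python
# Placeholder pipeline: feature vector -> neural-network grade classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# X: one row per package, columns e.g. [size, discoloration, formation, cross_over]
# y: quality grade labels (e.g. 0 = A, 1 = B, 2 = C) -- random data for illustration
rng = np.random.default_rng(0)
X = rng.random((300, 4))
y = rng.integers(0, 3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("classification rate:", clf.score(X_te, y_te))
```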
Pattern Recognition Letters | 1998
Che-Yen Wen; Raj S. Acharya
A Maximum Likelihood Estimator (MLE) has been applied to estimating the Hurst parameter H of a self-similar texture image. Much of the work done so far has concentrated on the spatial domain. In this paper, we propose an approximate MLE method for estimating H in the Fourier domain. This method saves computation time and can estimate H directly from the Fourier-domain raw data collected by a Magnetic Resonance Imaging (MRI) scanner. We use synthetic fractal datasets and a human tibia image to study the performance of our method.
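For context, a common simpler alternative is to estimate H from the slope of the radially averaged log power spectrum, using the 2-D fBm relation P(f) ∝ f^-(2H+2). The sketch below implements that spectral-slope fit, not the approximate MLE proposed in the paper:

```python
# Spectral-slope estimate of the Hurst parameter H for a 2-D fractal image.
import numpy as np

def hurst_from_spectrum(image):
    """Fit the radially averaged log power spectrum and convert the slope
    to H via P(f) ~ f^-(2H+2)."""
    h, w = image.shape
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(f) ** 2
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    rmax = min(h, w) // 2
    sums = np.bincount(r.ravel(), weights=power.ravel(), minlength=rmax)
    counts = np.bincount(r.ravel(), minlength=rmax)
    radial = sums[1:rmax] / counts[1:rmax]      # radially averaged spectrum (skip DC)
    freqs = np.arange(1, rmax)
    slope, _ = np.polyfit(np.log(freqs), np.log(radial), 1)
    beta = -slope
    return (beta - 2.0) / 2.0                   # H from the 2-D fBm spectral law
```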
Intelligence and Security Informatics | 2008
Wen-Chao Yang; Che-Yen Wen; Chung-Hao Chen
The traditional method of verifying evidence in court is to check the integrity of its chain of custody. However, because digital images can be easily transferred over the Internet, maintaining the integrity of that chain of custody is difficult. In this article, we use PKI (Public-Key Infrastructure), public-key cryptography, and watermark techniques to design a novel method for testing and verifying digital images. The main strategy of the article is to embed encrypted watermarks in the least significant bits (LSBs) of digital images. With the designed method, we can check the integrity of digital images using the correct public key without side information, and the watermarks are protected against tampering or forgery even when the embedding method is public. The proposed method can therefore be applied in court to digital evidence testing and verification, and used to check the admissibility of digital images.
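The protocol details are not given in the abstract, so the following is only a conceptual sketch of the core idea (sign the image content with a private key, hide the signature in the LSB plane, verify later with the public key), assuming the Python cryptography library and a plain RSA/PKCS#1 v1.5 signature rather than whatever scheme the paper actually uses:

```python
# Conceptual sketch only: signature-in-LSB embedding and verification.
import numpy as np
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def embed_signature(gray, private_key):
    """Clear the LSB plane, sign the remaining content, and write the
    signature bits into the first LSBs (row-major order)."""
    img = gray.copy() & 0xFE                          # clear LSBs
    sig = private_key.sign(img.tobytes(), padding.PKCS1v15(), hashes.SHA256())
    bits = np.unpackbits(np.frombuffer(sig, dtype=np.uint8))
    flat = img.ravel()
    flat[:bits.size] |= bits                          # embed signature in LSBs
    return img, len(sig)

def verify_signature(gray, public_key, sig_len):
    """Raises cryptography.exceptions.InvalidSignature if the image or the
    embedded signature has been altered."""
    bits = gray.ravel()[:sig_len * 8] & 1
    sig = np.packbits(bits).tobytes()
    content = (gray & 0xFE).tobytes()                 # what was originally signed
    public_key.verify(sig, content, padding.PKCS1v15(), hashes.SHA256())
    return True

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
image = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
marked, n = embed_signature(image, key)
print(verify_signature(marked, key.public_key(), n))
```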
Journal of Forensic Sciences | 2006
Shih-Hsuan Chiu; Chuan-Pin Lu; Che-Yen Wen
Closed-circuit television (CCTV) security systems have been widely used in banks, convenience stores, and other facilities. They are useful to deter crime and to depict criminal activity. However, although CCTV cameras that provide an overview of a monitored region are useful for criminal investigation, the images they capture are sometimes inadequate for object identification (e.g., vehicle numbers, persons, etc.). In this paper, we propose a framework for improving the image quality of CCTV security systems. The framework is based upon motion detection technology and uses two cameras: one camera (camera A) is fixed focus with a zoom lens for moving-object detection, and the other (camera B) is variable focus with an auto-zoom lens to capture higher resolution images of objects of interest. When camera A detects a moving object in the monitored area, camera B, driven by an auto-zoom focus control algorithm, takes a higher resolution image of the object of interest. Experimental results show that the proposed framework can improve the likelihood that images obtained from stationary unattended CCTV cameras are sufficient to enable law enforcement officials to identify suspects and other objects of interest.
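A minimal sketch of the triggering side only (camera A's moving-object detection via frame differencing); the threshold, minimum changed-pixel count, and the idea of returning a bounding box for camera B to zoom onto are illustrative assumptions, not the paper's algorithm:

```python
# Simple frame-differencing motion detector for the fixed overview camera.
import numpy as np

def detect_motion(prev_frame, curr_frame, thresh=25, min_pixels=200):
    """Return the bounding box (top, left, bottom, right) of changed pixels,
    or None if too few pixels changed. Frames are grayscale uint8 arrays."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > thresh
    if diff.sum() < min_pixels:
        return None
    rows = np.where(diff.any(axis=1))[0]
    cols = np.where(diff.any(axis=0))[0]
    return rows[0], cols[0], rows[-1], cols[-1]
```

In a full system, the returned box would be translated into pan/zoom commands for the second, auto-zoom camera.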
Intelligence and Security Informatics | 2008
Chung-Hao Chen; TeYu Chien; Wen-Chao Yang; Che-Yen Wen
Linear motion and out-of-focus blur often coexist in a surveillance system, degrading the quality of acquired images and thus complicating object recognition and event detection. In this work, we present a point-spread-function-based (PSF-based) approach that models the fundamental characteristics of linear motion and out-of-focus blur using geometric optics to restore images degraded by both, without application-dependent parameter selection; a sharpness measure is employed as a cost function to automatically select the optimal parameter values for the PSF. To verify the effectiveness of the proposed approach, we compare it with existing de-blurring approaches. Experimental results show that the proposed method automatically selects optimal PSF parameter values and outperforms existing de-blurring approaches.
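A hedged illustration of the parameter-search idea: build candidate PSFs over a grid of motion lengths and defocus radii, deconvolve with each (here via scikit-image's Wiener filter), and keep the result with the highest sharpness, measured as the variance of the Laplacian. The paper's actual PSF model, deconvolution method, and sharpness measure may differ:

```python
# Sharpness-driven search over candidate motion + defocus PSFs.
import numpy as np
from scipy import ndimage
from skimage import restoration

def combined_psf(length, radius, size=31):
    """Horizontal motion blur of `length` pixels combined with a defocus
    disk of `radius` pixels on a size x size support."""
    psf = np.zeros((size, size))
    c = size // 2
    psf[c, c - length // 2:c - length // 2 + max(length, 1)] = 1.0  # motion line
    yy, xx = np.indices((size, size))
    disk = (np.hypot(yy - c, xx - c) <= radius).astype(float)
    psf = ndimage.convolve(psf, disk / max(disk.sum(), 1.0))
    return psf / psf.sum()

def sharpness(img):
    return ndimage.laplace(img).var()

def auto_deblur(blurred, lengths=range(1, 15, 2), radii=range(0, 6)):
    """blurred: grayscale float image scaled to [0, 1]."""
    best_est, best_score, best_params = None, -np.inf, None
    for L in lengths:
        for r in radii:
            est = restoration.wiener(blurred, combined_psf(L, r), balance=0.1)
            s = sharpness(est)
            if s > best_score:
                best_est, best_score, best_params = est, s, (L, r)
    return best_est, best_params
```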
Pattern Recognition Letters | 2008
Shih-Hsuan Chiu; Chuan-Pin Lu; Dien‐Chi Wu; Che-Yen Wen
This paper proposes a histogram-based data-reducing algorithm for improving the performance of fixed-point independent component analysis (FastICA). The data-reducing FastICA (DR-FastICA) relies on two statistical criteria to preserve the histogram contour of the processed data and uses two steps (a coarse step for data sampling and a fine one for data tuning) to improve the performance of FastICA. Experimental results show that the proposed algorithm can reduce the computation time and memory needed to execute FastICA, especially for large amounts of data (e.g., 1024 × 1024 images).
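DR-FastICA's histogram-guided two-step sampling is not reproduced here; the sketch below only illustrates the general speed-up workflow (fit FastICA on a reduced sample, then apply the learned unmixing to the full data), with plain random subsampling standing in for the histogram-based reduction:

```python
# Fit FastICA on a subsample, then separate the full data set.
import numpy as np
from sklearn.decomposition import FastICA

def reduced_fastica(X, n_components, sample_frac=0.1, seed=0):
    """X: (n_samples, n_features) mixed observations."""
    rng = np.random.default_rng(seed)
    n_sub = max(int(len(X) * sample_frac), n_components)
    idx = rng.choice(len(X), size=n_sub, replace=False)
    ica = FastICA(n_components=n_components, random_state=seed)
    ica.fit(X[idx])                 # learn the unmixing on the reduced set
    return ica.transform(X)         # apply it to all of the data
```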
Journal of Remote Sensing | 2011
Chin-Hsiang Luo; Kuo-Hung Lin; Che-Yen Wen; Shih-Hsuan Chiu; Chung-Shin Yuan; Shinhao Yang
This study describes preliminary results of a new approach that seeks to describe uneven atmospheric decay through the fluctuation of a degradation parameter, k, extracted from online recorded images. The proposed processor combines an empirical model for atmospheric non-homogeneity with an image degradation method. A second parameter, C_ave, derived from the k values, was estimated in an attempt to quantify the degree of blurring of atmospheric visibility from the full-scale image computation. The C_ave of code A–E images ranged from 0.437 to 0.831, and the corresponding visual range observed by investigators was from 14.1 to 3.0 km, respectively. The standard deviation of C_ave reveals that non-homogeneous degradation of the blurring atmosphere occurs. Low visibility, associated with a small visual range and a degraded image, is accompanied by a large C_ave and inherits high variation from the fluctuating k values. Because it captures fluctuation and a full-scale image representation, C_ave is a more meaningful and sensitive measure of atmospheric decay than the prevailing definition of visibility as the distance at which the farthest target can be recognized. Finally, a field test confirmed a good correlation between the observed visual range and the two parameters (k and C_ave).
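The definitions of k and C_ave are not given in the abstract and are not reproduced here; purely as an illustration of the kind of full-scale, block-wise image statistic such a processor might compute, the sketch below measures per-block contrast and its spread across the image (a larger spread loosely suggesting more non-homogeneous degradation):

```python
# Generic block-wise contrast statistic from a grayscale scene image.
import numpy as np

def blockwise_contrast(gray, block=64):
    """Michelson-style contrast per block, plus its mean and standard
    deviation over the whole image."""
    h, w = gray.shape
    vals = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = gray[i:i + block, j:j + block].astype(float)
            lo, hi = b.min(), b.max()
            vals.append((hi - lo) / (hi + lo + 1e-6))
    vals = np.array(vals)
    return vals, vals.mean(), vals.std()
```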