Ray-Shine Run
National United University
Publication
Featured research published by Ray-Shine Run.
Pattern Recognition | 2010
Mingxing He; Shi-Jinn Horng; Pingzhi Fan; Ray-Shine Run; Rong-Jian Chen; Jui-Lin Lai; Muhammad Khurram Khan; Kevin Octavius Sentosa
In a multimodal biometric system, an effective fusion method is necessary for combining information from the various single-modality systems. In this paper the performance of sum rule-based score level fusion and support vector machine (SVM)-based score level fusion is examined. Three biometric characteristics are considered in this study: fingerprint, face, and finger vein. We also propose a new robust normalization scheme (Reduction of High-scores Effect normalization), which is derived from the min-max normalization scheme. Experiments on four different multimodal databases suggest that integrating the proposed scheme in sum rule-based fusion and SVM-based fusion leads to consistently high accuracy. The performance of simple sum rule-based fusion preceded by our normalization scheme is comparable to another approach, likelihood ratio-based fusion [8] (Nandakumar et al., 2008), which is based on the estimation of matching score densities. Comparison between the experimental results on sum rule-based fusion and SVM-based fusion reveals that the latter can attain better performance than the former, provided that the kernel and its parameters are carefully selected.
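As a rough illustration of the score-level pipeline described above, the sketch below applies min-max normalization per modality and then the simple sum rule. The scores and modality labels are invented for the example, and the paper's Reduction of High-scores Effect normalization is not reproduced here.

```python
# Hypothetical sketch: min-max normalization per modality followed by
# sum rule-based score level fusion. All data here is illustrative.

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] via min-max normalization."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def sum_rule_fusion(score_lists):
    """Fuse per-modality normalized scores by summing across modalities."""
    normalized = [min_max_normalize(s) for s in score_lists]
    return [sum(col) for col in zip(*normalized)]

# Three modalities (e.g. fingerprint, face, finger vein), four candidates each;
# note the raw score ranges differ, which is why normalization is needed.
fingerprint = [10.0, 55.0, 30.0, 90.0]
face        = [0.2, 0.9, 0.4, 0.7]
finger_vein = [120.0, 300.0, 150.0, 280.0]

fused = sum_rule_fusion([fingerprint, face, finger_vein])
best = max(range(len(fused)), key=fused.__getitem__)  # highest fused score wins
```

The sum rule needs comparable score scales across modalities, which is exactly what the normalization step provides.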
IEEE Transactions on Industrial Informatics | 2010
Yuan-Hsin Chen; Shi-Jinn Horng; Ray-Shine Run; Jui-Lin Lai; Rong-Jian Chen; Wei-Chih Chen; Yi Pan; Terano Takao
Radio frequency identification (RFID) has been developed and used in many real-world applications. Because tags and the reader share a wireless channel during communication, tag collision arbitration is a significant issue for reducing communication overhead. This paper presents a novel anti-collision algorithm named the New Enhanced Anti-Collision Algorithm (NEAA), which uses counters and a stack to reduce the probability of collision efficiently and to make it possible to identify multiple passive tags in a timeslot. The upper bound on the total number of timeslots for identifying N passive tags is first derived in this paper; supposing the length of a tag ID is n, the upper bound for identifying N (N = 2^n) passive tags is derived to be 2^(n-1) - n + 4 when n > 2. This bound is quite tight. Compared to existing methods proposed by other researchers, the performance evaluation shows that the proposed scheme consumes fewer timeslots and has better performance for identifying tags.
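To make the timeslot-counting setting concrete, here is a minimal sketch of a generic stack-based binary tree splitting protocol, in which the reader queries ID prefixes and splits on collisions. This is a standard illustration of the problem, not the NEAA algorithm itself.

```python
# Generic query-tree anti-collision sketch (illustrative; not NEAA).
# The reader broadcasts a prefix; tags whose IDs match respond. A collision
# (more than one responder) splits the prefix into two longer prefixes on a
# stack; each query/response round costs one timeslot.

def identify_tags(tag_ids, id_len):
    """Identify all tags by prefix queries; return (identified, timeslots)."""
    stack = [""]          # prefixes still to be queried
    identified = []
    timeslots = 0
    while stack:
        prefix = stack.pop()
        matching = [t for t in tag_ids if t.startswith(prefix)]
        timeslots += 1    # one timeslot per query round
        if len(matching) == 1:
            identified.append(matching[0])          # singleton: tag identified
        elif len(matching) > 1 and len(prefix) < id_len:
            stack.append(prefix + "1")              # collision: split prefix
            stack.append(prefix + "0")
    return identified, timeslots

tags = ["0010", "0111", "1010", "1100"]
found, slots = identify_tags(tags, 4)
```

Schemes like NEAA improve on this baseline by identifying multiple tags per timeslot, which is what drives the total below the number of tags.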
Expert Systems With Applications | 2011
Mingxing He; Shi-Jinn Horng; Pingzhi Fan; Muhammad Khurram Khan; Ray-Shine Run; Jui-Lin Lai; Rong-Jian Chen; Adi Sutanto
Phishing attacks grow significantly each year and are considered one of the most dangerous threats on the Internet, as they may cause people to lose confidence in e-commerce. In this paper, we present a heuristic method to determine whether a webpage is a legitimate or a phishing page. This scheme can detect new phishing pages that blacklist-based anti-phishing tools cannot. We first convert a web page into 12 features, which are well selected based on existing normal and phishing pages. A training set of web pages including normal and phishing pages is then input to a support vector machine for training. A testing set is finally fed into the trained model for testing. Compared to existing methods, the experimental results show that the proposed phishing detector can achieve a high accuracy rate with relatively low false positive and false negative rates.
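The feature-extraction step can be sketched as follows: a URL is turned into a small numeric vector of the kind an SVM consumes. The features below are common phishing heuristics chosen purely for illustration; they are not the 12 features selected in the paper.

```python
# Hedged sketch: URL -> feature vector for a phishing classifier.
# These five heuristic features are illustrative, not the paper's feature set.

import re

def url_features(url):
    """Extract simple phishing-related numeric features from a URL string."""
    host = re.sub(r"^https?://", "", url).split("/")[0]
    return [
        len(url),                                   # very long URLs are suspicious
        url.count("."),                             # many dots: deep subdomains
        1 if "@" in url else 0,                     # '@' can hide the real host
        1 if re.fullmatch(r"[\d.]+", host) else 0,  # raw-IP host
        1 if url.startswith("https://") else 0,     # TLS present
    ]

legit = url_features("https://www.example.com/login")
phish = url_features("http://192.168.0.1/secure@update.example.com/login")
```

Vectors like these, labeled normal or phishing, would then form the training set fed to the SVM.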
Expert Systems With Applications | 2010
Ling-Yuan Hsu; Shi-Jinn Horng; Tzong-Wann Kao; Yuan-Hsin Chen; Ray-Shine Run; Rong-Jian Chen; Jui-Lin Lai; I-Hong Kuo
In this paper, we propose a modified turbulent particle swarm optimization method (named MTPSO) for temperature prediction and Taiwan Futures Exchange (TAIFEX) forecasting, based on two-factor fuzzy time series and particle swarm optimization. The MTPSO model handles the two main factors, the lengths of intervals and the content of the forecast rules, easily and accurately. The experimental results on temperature prediction and TAIFEX forecasting show that the proposed model outperforms existing models and obtains better-quality solutions based on high-order fuzzy time series.
Expert Systems With Applications | 2010
Tsung-Lieh Lin; Shi-Jinn Horng; Kai-Hui Lee; Pei-Ling Chiu; Tzong-Wann Kao; Yuan-Hsin Chen; Ray-Shine Run; Jui-Lin Lai; Rong-Jian Chen
The main concept of the original visual secret sharing (VSS) scheme is to encrypt a secret image into n meaningless share images. No combination of fewer than all n share images can leak any information about the shared secret. The shared secret image can be revealed by printing the share images on transparencies and stacking the transparencies directly, so that the human visual system can recognize the shared secret image without using any device. The visual secret sharing scheme for multiple secrets (called the VSSM scheme) is intended to encrypt more than one secret image into the same quantity of share images, increasing the encryption capacity compared with the original VSS scheme. However, all existing VSSM schemes utilize a pre-defined pattern book with pixel expansion to encrypt secret images into share images. In general, this leads to at least 2x pixel expansion on the share images by any of the VSSM schemes. Thus, the pixel expansion problem becomes more serious when sharing multiple secrets. This is neither a practical nor the best solution for increasing the number of secret sharing images. In this paper, we propose a novel VSSM scheme that can share two binary secret images on two rectangular share images with no pixel expansion. The experimental results show that the proposed approach not only has no pixel expansion, but also has excellent recovery quality for the secret images. To the best of our knowledge, this is the first approach that can share multiple visual secret images without pixel expansion.
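The pixel expansion that this paper eliminates can be seen in the classic (2, 2) VSS construction, sketched below: each secret pixel becomes a pair of subpixels on each share, doubling the share size, and stacking the transparencies acts as a bitwise OR.

```python
# Sketch of the classic (2, 2) visual secret sharing scheme with 2x pixel
# expansion, to illustrate the expansion problem the paper removes.
# 0 = white subpixel, 1 = black subpixel.

import random

PATTERNS = [(0, 1), (1, 0)]  # the two complementary 2-subpixel blocks

def share_pixel(secret_bit, rng):
    """Encode one secret pixel into a 2-subpixel block for each share."""
    a = rng.choice(PATTERNS)
    if secret_bit == 0:
        b = a                        # white: identical blocks, stack is half black
    else:
        b = tuple(1 - x for x in a)  # black: complementary blocks, stack is all black
    return a, b

def stack(block1, block2):
    """Overlay two transparency blocks: black wins (bitwise OR)."""
    return tuple(x | y for x, y in zip(block1, block2))

rng = random.Random(7)
white_a, white_b = share_pixel(0, rng)  # secret pixel is white
black_a, black_b = share_pixel(1, rng)  # secret pixel is black
```

Each share block alone is a random pattern, so a single share reveals nothing; only the stacked result distinguishes white (half black) from black (all black), at the cost of doubling every pixel.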
Expert Systems With Applications | 2012
Mahmoud E. Farfoura; Shi-Jinn Horng; Jui-Lin Lai; Ray-Shine Run; Rong-Jian Chen; Muhammad Khurram Khan
Highlights: an authentication protocol is designed for reversible watermarking using a time-stamp protocol; the prediction-error expansion on integers technique is used to achieve reversibility; the watermark is detected successfully even when most of the watermarked relation's tuples are deleted. Digital watermarking technology has recently been adopted as an effective solution for protecting the copyright of digital assets from illicit copying. A reversible watermark, also called an invertible or erasable watermark, helps to recover the original data after the content has been authenticated. Such reversibility is highly desired in some sensitive database applications, e.g. for military and medical data. Permanent distortion is one of the main drawbacks of all irreversible relational-database watermarking schemes. In this paper, we design an authentication protocol based on an efficient time-stamp protocol, and we propose a blind reversible watermarking method that ensures ownership protection in the field of relational database watermarking. Whereas previous techniques have mainly introduced permanent errors into the original data, our approach ensures one hundred percent recovery of the original database relation after the owner-specific watermark has been detected and authenticated. In the proposed watermarking method, we utilize a reversible data-embedding technique called prediction-error expansion on integers to achieve reversibility. The watermark can be detected successfully even when 95% of the watermarked relation's tuples are deleted. Our extensive analysis shows that the proposed scheme is robust against various forms of database attacks, including adding, deleting, shuffling, or modifying tuples or attributes.
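The reversible primitive the paper relies on, prediction-error expansion on integers, can be sketched in a few lines: the prediction error e is expanded to 2e + bit, and the extractor recovers both the bit and the exact original value. The fixed-value predictor below is a simplification for illustration.

```python
# Minimal sketch of prediction-error expansion (PEE) on integers.
# A real scheme predicts each value from its context; here the predictor is
# just a fixed reference value, chosen to keep the illustration short.

def pee_embed(value, predictor, bit):
    """Embed one bit by expanding the prediction error: e -> 2e + bit."""
    e = value - predictor
    return predictor + 2 * e + bit

def pee_extract(marked, predictor):
    """Recover the embedded bit and the exact original value."""
    e2 = marked - predictor
    bit = e2 & 1                       # the bit sits in the parity of e2
    original = predictor + (e2 - bit) // 2
    return original, bit

pred = 100
marked = pee_embed(107, pred, 1)       # e = 7, expanded to 2*7 + 1 = 15
restored, bit = pee_extract(marked, pred)
```

Because extraction inverts the expansion exactly, the original value comes back bit-for-bit, which is what makes the watermark erasable after authentication.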
Expert Systems With Applications | 2011
Ray-Shine Run; Shi-Jinn Horng; Wei-Hung Lin; Tzong-Wann Kao; Pingzhi Fan; Muhammad Khurram Khan
This paper proposes a blind watermarking scheme based on wavelet tree quantization for copyright protection. In this quantization scheme, there is a large, significant difference between embedding a watermark bit 1 and a watermark bit 0, so the scheme does not require the original image or watermark during the watermark extraction process. As a result, the watermarked images look visually lossless in comparison with the original ones, and the proposed method can effectively resist common image processing attacks, especially JPEG compression and low-pass filtering. Moreover, by designing an adaptive threshold value for the extraction process, our method is more robust in resisting common attacks such as median filtering, average filtering, and Gaussian noise. Experimental results show that the watermarked image looks visually identical to the original, and the watermark can be effectively extracted.
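The blind, quantization-based idea can be illustrated with a simplified quantization-index-modulation sketch: a coefficient is snapped to one of two interleaved lattices depending on the bit, and the extractor needs only the quantization step, not the original image. This is an illustration of the general principle, not the paper's wavelet-tree scheme.

```python
# Simplified quantization-index-modulation (QIM) sketch; illustrative only.
# Bit 0 uses the lattice {k * STEP}, bit 1 the shifted lattice {k * STEP + STEP/2}.

STEP = 8.0

def embed_bit(coefficient, bit):
    """Quantize the coefficient onto the lattice associated with the bit."""
    offset = 0.0 if bit == 0 else STEP / 2
    return round((coefficient - offset) / STEP) * STEP + offset

def extract_bit(coefficient):
    """Blind extraction: decide which lattice the coefficient is closer to."""
    d0 = abs(coefficient - embed_bit(coefficient, 0))
    d1 = abs(coefficient - embed_bit(coefficient, 1))
    return 0 if d0 <= d1 else 1

marked = embed_bit(37.3, 1)   # snaps to the nearest value of the form 8k + 4
noisy = marked + 1.5          # mild distortion, e.g. from filtering
```

As long as an attack perturbs the coefficient by less than STEP/4, the marked value stays closer to its own lattice and the bit survives, which is the intuition behind robustness to compression and filtering.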
The Astrophysical Journal | 2011
Mark S. Bandstra; Eric C. Bellm; S. E. Boggs; Daniel Perez-Becker; Andreas Zoglauer; Hsiang-Kuang Chang; Jeng-Lun Chiu; Jau-Shian Liang; Y. H. Chang; Zong-Kai Liu; Wei-Che Hung; M.-H. A. Huang; S. J. Chiang; Ray-Shine Run; Chih-Hsun Lin; Mark Amman; Paul N. Luke; P. Jean; P. von Ballmoos; Cornelia B. Wunderer
The Nuclear Compton Telescope (NCT) is a balloon-borne Compton telescope designed for the study of astrophysical sources in the soft gamma-ray regime (200 keV–20 MeV). NCT's 10 high-purity germanium crossed-strip detectors measure the deposited energies and three-dimensional positions of gamma-ray interactions in the sensitive volume, and this information is used to restrict the initial photon to a circle on the sky using the Compton scatter technique. Thus NCT is able to perform spectroscopy, imaging, and polarization analysis on soft gamma-ray sources. NCT is one of the next generation of Compton telescopes, the so-called compact Compton telescopes (CCTs), which can achieve effective areas comparable to the Imaging Compton Telescope's with an instrument that is a fraction of the size. The Crab Nebula was the primary target for the second flight of the NCT instrument, which occurred on 2009 May 17 and 18 in Fort Sumner, New Mexico. Analysis of 29.3 ks of data from the flight reveals an image of the Crab at a significance of 4σ. This is the first reported detection of an astrophysical source by a CCT.
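The event-circle reconstruction mentioned above rests on the standard Compton scattering formula: given the total photon energy and the scattered photon's energy, the scatter angle is fixed, confining the source direction to a circle on the sky. The sketch below computes that angle; the example energies are invented.

```python
# Sketch of the Compton scatter angle computation underlying event circles.
# From the Compton formula: 1/E' - 1/E = (1 - cos(theta)) / (m_e c^2).

import math

ME_C2 = 511.0  # electron rest energy in keV

def compton_angle_deg(e_total_kev, e_scattered_kev):
    """Scatter angle (degrees) from total and scattered photon energies."""
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered_kev - 1.0 / e_total_kev)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.degrees(math.acos(cos_theta))

# An illustrative 1000 keV photon that deposits 400 keV in its first
# interaction, leaving a 600 keV scattered photon.
angle = compton_angle_deg(1000.0, 600.0)
```

The measured interaction positions give the axis of the cone, and this angle gives its opening; the cone's intersection with the sky is the event circle used for imaging.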
Expert Systems With Applications | 2011
Ling-Yuan Hsu; Shi-Jinn Horng; Pingzhi Fan; Muhammad Khurram Khan; Yuh-Rau Wang; Ray-Shine Run; Jui-Lin Lai; Rong-Jian Chen
Research highlights: a modified turbulent particle swarm optimization (MTPSO) model is proposed to solve the planar graph coloring problem; MTPSO combines the walking one strategy, assessment strategy, and turbulent strategy; MTPSO can solve the four-color problem efficiently and accurately. In this paper, we propose a modified turbulent particle swarm optimization model (named MTPSO) for solving the planar graph coloring problem, based on particle swarm optimization. The proposed model consists of the walking one strategy, the assessment strategy, and the turbulent strategy. The MTPSO model can solve the planar graph coloring problem using four colors efficiently and accurately. Compared to the results shown in Cui et al. (2008), the proposed model not only needs fewer average iterations but also achieves a higher correct coloring rate when the number of nodes is greater than 30.
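Any swarm-based coloring search needs a fitness function to assess candidate colorings; the usual choice is the number of conflicting edges, with zero conflicts meaning a valid four-coloring. The sketch below shows that assessment step on a tiny example graph; the graph and the function name are illustrative, not taken from the paper.

```python
# Illustrative fitness function for a PSO-style four-coloring search:
# count edges whose endpoints received the same color. A coloring is a
# valid four-coloring exactly when the conflict count is zero.

def conflicts(edges, coloring):
    """Number of edges with identically colored endpoints."""
    return sum(1 for u, v in edges if coloring[u] == coloring[v])

# K4 (the complete graph on 4 vertices) is planar and needs all four colors.
k4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
valid = [0, 1, 2, 3]     # each vertex a distinct color: no conflicts
invalid = [0, 1, 2, 2]   # vertices 2 and 3 share a color: one conflict
```

A particle encodes one coloring; the swarm's strategies perturb colorings while this conflict count drives the search toward zero.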
parallel and distributed computing: applications and technologies | 2009
Shi-Jinn Horng; Yuan-Hsin Chen; Ray-Shine Run; Rong-Jian Chen; Jui-Lin Lai; Kevin Octavius Sentosa
In a multimodal biometric system, an effective fusion method is necessary for combining information from the various single-modality systems. In this paper we examined the performance of sum rule-based score level fusion and Support Vector Machine (SVM)-based score level fusion. Three biometric characteristics were considered in this study: fingerprint, face, and finger vein. We also proposed a new robust normalization scheme (Reduction of High-scores Effect normalization), which is derived from the min-max normalization scheme. Experiments on four different multimodal databases suggest that integrating the proposed scheme in sum rule-based fusion and SVM-based fusion leads to consistently high accuracy.