Shen-Chuan Tai
National Cheng Kung University
Publications
Featured research published by Shen-Chuan Tai.
IEEE Transactions on Biomedical Engineering | 2005
Shen-Chuan Tai; Chia-Chun Sun; Wen-Chien Yan
A two-dimensional (2-D) wavelet-based electrocardiogram (ECG) data compression method is presented that employs a modified set partitioning in hierarchical trees (SPIHT) algorithm. The modified SPIHT algorithm further exploits the redundancy among the medium- and high-frequency subbands of the wavelet coefficients, while the 2-D approach exploits the fact that ECG signals generally show redundancy between adjacent beats and between adjacent samples. An ECG signal is cut and aligned to form a 2-D data array, to which the 2-D wavelet transform and the modified SPIHT algorithm are then applied. Records selected from the MIT-BIH arrhythmia database were tested. The experimental results show that the proposed method achieves a high compression ratio with relatively low distortion and is effective for various ECG morphologies.
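A minimal sketch of the 2-D arrangement step, assuming beat boundaries are already known (in practice an R-peak detector would supply them) and using PyWavelets for the 2-D transform; the wavelet choice, beat length, and synthetic signal below are illustrative assumptions, and the modified SPIHT coder itself is not reproduced.

```python
import numpy as np
import pywt

def ecg_to_2d(signal, beat_starts, beat_len):
    """Cut the 1-D ECG at beat boundaries and stack the aligned beats
    as rows of a 2-D array, so adjacent rows are adjacent beats."""
    rows = []
    for s in beat_starts:
        beat = signal[s:s + beat_len]
        if len(beat) < beat_len:                     # zero-pad a short last beat
            beat = np.pad(beat, (0, beat_len - len(beat)))
        rows.append(beat)
    return np.vstack(rows)

# Synthetic stand-in for an ECG record: 32 repeated beats plus noise.
beat_len = 256
one_beat = np.sin(np.linspace(0, 2 * np.pi, beat_len)) ** 3
ecg = np.tile(one_beat, 32) + 0.01 * np.random.randn(32 * beat_len)

arr = ecg_to_2d(ecg, range(0, len(ecg), beat_len), beat_len)
coeffs = pywt.wavedec2(arr, "bior4.4", level=2)      # 2-D wavelet transform
# A SPIHT-style coder (not shown) would then encode these subbands,
# exploiting redundancy between rows (beats) and columns (samples).
```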
International Symposium on Communications, Control and Signal Processing | 2008
Shen-Chuan Tai; Shih Ming Yang
We present a simple and fast algorithm for image noise estimation. The input image is assumed to be corrupted by additive zero-mean Gaussian noise. To keep structures and details from contributing to the noise variance estimate, a simple edge detection algorithm using first-order gradients is applied first. A Laplacian operator followed by averaging over the whole image then provides a very accurate noise variance estimate. The method has only one parameter, which is self-determined and adaptive to the image content. Simulation results show that the proposed algorithm performs well for different types of images over a large range of noise variances. Performance comparisons against other approaches are also provided.
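A sketch of the two-stage idea under stated assumptions: Sobel gradients stand in for the first-order edge detector, a standard 3x3 Laplacian-difference kernel is used for the second stage, and the percentile threshold is an assumed placeholder for the paper's self-determined parameter. The scaling constant follows from the kernel's response to zero-mean Gaussian noise.

```python
import numpy as np
from scipy import ndimage

def estimate_noise(img):
    img = img.astype(np.float64)
    # Stage 1: first-order-gradient edge detection; only low-activity pixels
    # are kept so image structure does not inflate the estimate.
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    grad = np.abs(gx) + np.abs(gy)
    mask = grad < np.percentile(grad, 90)   # assumed adaptive threshold
    # Stage 2: Laplacian-type operator whose response on noise-free smooth
    # regions is near zero; its mean absolute response yields sigma.
    lap = np.array([[ 1, -2,  1],
                    [-2,  4, -2],
                    [ 1, -2,  1]], dtype=np.float64)
    resp = ndimage.convolve(img, lap)
    # For zero-mean Gaussian noise, the kernel's output std is 6*sigma and
    # E|resp| = 6*sigma*sqrt(2/pi), giving the constant below.
    return np.sqrt(np.pi / 2.0) * np.abs(resp[mask]).mean() / 6.0

noisy = np.random.normal(0, 10, (256, 256))   # pure-noise test image
print(estimate_noise(noisy))                   # should print roughly 10
```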
IEEE Transactions on Communications | 1996
Shen-Chuan Tai; C. C. Lai; Yih-Chuan Lin
In this paper, two efficient codebook search algorithms for vector quantization (VQ) are presented. The first fast search algorithm exploits the energy compaction property of the transform domain and the geometric relations between the input vector and each codevector to eliminate codevectors that have no chance of being the closest codeword to the input vector; it achieves full-search-equivalent performance. Compared with other fast methods of the same kind, this algorithm requires the fewest multiplications and the fewest distortion measurements. A suboptimal search method is then presented that sacrifices some reconstructed signal quality to speed up the nearest-neighbor search: it searches predefined small subcodebooks instead of the whole codebook for the closest codevector. Experimental results show that this method not only needs less CPU time to encode an image but also incurs less loss of reconstructed signal quality than tree-structured VQ.
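The paper's exact transform-domain criterion is not reproduced here; as a hedged sketch of the same elimination idea, the following uses the standard norm-difference bound ||x - c||^2 >= (||x|| - ||c||)^2, which likewise rejects codevectors without computing a full distortion and remains full-search equivalent.

```python
import numpy as np

def fast_vq_search(x, codebook):
    """Full-search-equivalent nearest-codevector search with norm elimination."""
    norms = np.linalg.norm(codebook, axis=1)
    xn = np.linalg.norm(x)
    order = np.argsort(np.abs(norms - xn))           # most promising first
    best_i = order[0]
    best_d = np.sum((x - codebook[best_i]) ** 2)
    for i in order[1:]:
        # Once the norm gap alone exceeds the current best distortion, this
        # codevector and every later one in `order` can be rejected.
        if (norms[i] - xn) ** 2 >= best_d:
            break
        d = np.sum((x - codebook[i]) ** 2)
        if d < best_d:
            best_i, best_d = i, d
    return best_i

codebook = np.random.rand(256, 16)                   # 256 codevectors, dim 16
x = np.random.rand(16)
print(fast_vq_search(x, codebook))
```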
IEEE Transactions on Circuits and Systems for Video Technology | 2005
Shen-Chuan Tai; Yen-Yu Chen; Shin-Feng Sheu
Increasing the bandwidth or bit rate of real-time video applications to improve image quality is typically impossible or too expensive. Postprocessing appears to be the most feasible solution because it does not require any existing standards to be changed. Markedly reducing blocking effects can increase the compression ratio for a given image quality, or improve the quality at a given bit rate. This paper proposes a novel deblocking algorithm based on three filtering modes selected according to the activity across block boundaries. By properly accounting for the masking effect of the human visual system, an adaptive filtering decision is integrated into the deblocking process. With three deblocking modes matched to local regions of different characteristics, the perceptual and objective quality are improved without over-smoothing image details or insufficiently reducing the strong blocking effect in flat regions. Simulation results show that the proposed method outperforms other MPEG-4 deblocking methods in both peak signal-to-noise ratio and computational complexity.
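A minimal sketch of activity-based mode selection at one vertical block boundary. The paper's actual filters and HVS masking thresholds are more elaborate; the kernels and the two thresholds below are illustrative assumptions.

```python
import numpy as np

def deblock_columns(left, right, t_flat=2.0, t_edge=40.0):
    """left, right: pixel columns on either side of an 8x8 block boundary."""
    activity = np.mean(np.abs(left - right))     # activity across the boundary
    if activity < t_flat:
        # Flat region: blocking is most visible here, so smooth strongly.
        avg = (left + right) / 2.0
        return 0.25 * left + 0.75 * avg, 0.25 * right + 0.75 * avg
    elif activity < t_edge:
        # Textured region: mild filtering to avoid blurring detail.
        return (3 * left + right) / 4.0, (left + 3 * right) / 4.0
    else:
        # High activity suggests a real edge; HVS masking hides any
        # blockiness, so leave the pixels untouched.
        return left, right
```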
International Conference of the IEEE Engineering in Medicine and Biology Society | 2000
Shen-Chuan Tai; Yung-Gi Wu; Chang-Wei Lin
A new three-dimensional (3-D) discrete cosine transform (DCT) coder for medical images is presented. In the proposed method, a segmentation technique based on local energy magnitude divides subblocks of the image into different energy levels. Subblocks with the same energy level are then gathered to form a 3-D cuboid, and the 3-D DCT is applied to compress each cuboid individually. Simulation results show that the reconstructed images achieve a bit rate lower than 0.25 bit per pixel even at compression ratios above 35. Compared with JPEG and other strategies, the proposed method achieves better decoded image quality.
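A sketch of the gather-then-transform idea: subblocks of similar local energy are stacked into a cuboid and transformed together. Block variance as the energy measure, quantile level boundaries, and the block size are simplifying assumptions, and quantization/entropy coding are omitted.

```python
import numpy as np
from scipy.fft import dctn

def gather_and_transform(image, bs=8, n_levels=4):
    h, w = image.shape
    blocks, energies = [], []
    for r in range(0, h - bs + 1, bs):
        for c in range(0, w - bs + 1, bs):
            b = image[r:r + bs, c:c + bs].astype(np.float64)
            blocks.append(b)
            energies.append(b.var())          # local energy magnitude
    energies = np.array(energies)
    # Segment blocks into energy levels at quantile boundaries.
    edges = np.quantile(energies, np.linspace(0, 1, n_levels + 1))[1:-1]
    level = np.digitize(energies, edges)
    cuboids = []
    for lv in range(n_levels):
        sel = [b for b, l in zip(blocks, level) if l == lv]
        if sel:
            # Stack same-level blocks into a cuboid and apply the 3-D DCT.
            cuboids.append(dctn(np.stack(sel), norm="ortho"))
    return cuboids

img = np.random.randint(0, 256, (128, 128)).astype(np.float64)
cuboids = gather_and_transform(img)
```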
IEEE Journal of Biomedical and Health Informatics | 2014
Shen-Chuan Tai; Zih Siou Chen; Wei Ting Tsai
It is difficult for radiologists to identify masses on a mammogram because they are surrounded by complicated tissue. In current breast cancer screening, radiologists miss approximately 10-30% of tumors because of the ambiguous margins of lesions and the visual fatigue that results from long reading sessions. For these reasons, many computer-aided detection (CADe) systems have been developed to help radiologists detect mammographic lesions that may indicate breast cancer. This study presents an automatic CADe system that uses local and discrete texture features for mammographic mass detection. The system segments adaptive square regions of interest (ROIs) around suspicious areas. It also proposes two feature extraction methods, based on the co-occurrence matrix and an optical density transformation, to describe the local texture characteristics and the discrete photometric distribution of each ROI. Finally, stepwise linear discriminant analysis is used to classify abnormal regions by selecting and rating the individual performance of each feature. Results show that the proposed system achieves satisfactory detection performance.
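As a hedged illustration of the co-occurrence-matrix side only, the following computes a few standard GLCM texture statistics for an ROI using scikit-image; the paper's full feature set, its optical-density transform, and the stepwise classifier are not reproduced, and the distances/angles chosen are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def roi_texture_features(roi):
    """roi: 2-D uint8 region of interest cropped around a suspicious area."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # Standard GLCM statistics as local texture descriptors, averaged
    # over the sampled directions.
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")}

roi = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # dummy ROI
print(roi_texture_features(roi))
```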
IEEE/OSA Journal of Display Technology | 2008
Shen-Chuan Tai; Ying-Ru Chen; Zheng-Bin Huang; Chuen-Ching Wang
This paper presents a robust true-motion estimation algorithm, designated MPMVP (Multi-Pass and Motion Vector Propagation), to enhance the accuracy of motion vector fields in frame rate up-conversion applications. MPMVP uses a multi-pass scheme that progressively refines approximate motion vectors toward true motion vectors using the motion information acquired in previous passes. The multi-pass motion estimation process uses a large block size to detect motion vectors within objects and small block sizes to detect motion vectors along object boundaries; the block size is progressively reduced during the search. When the motion vector of a block is considered sufficiently accurate, the block is said to be converged and its local search terminates. A technique referred to as motion vector propagation is then applied to propagate the motion vector of the converged block to its neighboring blocks. This not only ensures that neighboring motion vectors within the same object have a high degree of spatial correlation, but also accelerates the convergence of the motion vectors in neighboring blocks, reducing the overall computational cost of the multi-pass search. A novel distortion criterion is also proposed to improve the tolerance of the traditional sum-of-absolute-differences (SAD) measure to noise and shadow effects. Experimental results demonstrate that the proposed algorithm outperforms traditional full search, 3DRS, and TCSBP in both the smoothness of the generated motion vector fields and the visual quality of the up-converted frames.
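A condensed sketch of the coarse-to-fine part of the scheme, written recursively: a block keeps its vector once the match is good enough, and is otherwise split and searched again at a smaller size. The convergence test, thresholds, and search range are assumptions, and the motion-vector-propagation and noise-tolerant distortion stages are omitted.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def block_search(cur, ref, r, c, bs, rng=8):
    """Full search over a +/-rng window; returns (dy, dx, best SAD)."""
    h, w = ref.shape
    blk = cur[r:r + bs, c:c + bs]
    best = (0, 0, sad(blk, ref[r:r + bs, c:c + bs]))
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            y, x = r + dy, c + dx
            if 0 <= y <= h - bs and 0 <= x <= w - bs:
                d = sad(blk, ref[y:y + bs, x:x + bs])
                if d < best[2]:
                    best = (dy, dx, d)
    return best

def refine(cur, ref, r, c, bs, conv_thr=2.0, min_bs=8):
    """Large blocks capture object interiors; non-converged blocks are split
    so small blocks can follow object boundaries."""
    dy, dx, d = block_search(cur, ref, r, c, bs)
    if d / (bs * bs) < conv_thr or bs <= min_bs:     # assumed convergence test
        return {(r, c, bs): (dy, dx)}
    mvs, half = {}, bs // 2
    for rr in (r, r + half):
        for cc in (c, c + half):
            mvs.update(refine(cur, ref, rr, cc, half, conv_thr, min_bs))
    return mvs
```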
International Conference of the IEEE Engineering in Medicine and Biology Society | 2001
Yung-Gi Wu; Shen-Chuan Tai
Due to bandwidth and storage limitations, medical images must be compressed before transmission and storage. Compression reduces image fidelity, however, especially at low bit rates: reconstructed images suffer from blocking artifacts, and image quality is severely degraded at high compression ratios. In this paper, we present a strategy that increases the compression ratio with low computational burden and excellent decoded quality. We regard the discrete cosine transform as a bandpass filter bank that decomposes a subblock into equal-sized bands. After a band-gathering operation, a strong similarity among the bands emerges; exploiting it greatly reduces the bit rate without sacrificing the characteristics of the original image, thereby avoiding misdiagnosis. Simulations on different kinds of medical images demonstrate that the proposed method outperforms existing transform coding schemes such as JPEG in both bit rate and quality. For angiogram images, the peak signal-to-noise ratio gain over JPEG is 13.5 dB at the same bit rate of 0.15 bits per pixel. For other kinds of medical images the benefit is less pronounced, but the gain is still 4-8 dB at high compression ratios. Two doctors were invited to verify the decoded image quality; the diagnoses of all test images were correct at compression ratios below 20.
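A minimal sketch of the band-gathering view of the block DCT: coefficient (u, v) of every 8x8 subblock is collected into band (u, v), producing 64 subband images whose mutual similarity a coder could then exploit. The block size and layout are assumptions; the similarity coding itself is not shown.

```python
import numpy as np
from scipy.fft import dctn

def gather_bands(image, bs=8):
    """Gather DCT coefficient (u, v) of every subblock into band (u, v)."""
    h, w = image.shape
    bands = np.empty((bs, bs, h // bs, w // bs))
    for r in range(0, h - bs + 1, bs):
        for c in range(0, w - bs + 1, bs):
            coef = dctn(image[r:r + bs, c:c + bs].astype(np.float64),
                        norm="ortho")
            bands[:, :, r // bs, c // bs] = coef
    return bands

img = np.random.randint(0, 256, (128, 128)).astype(np.float64)
bands = gather_bands(img)        # bands[u, v] is the (u, v) subband image
```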
IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing | 1998
Yih-Chuan Lin; Shen-Chuan Tai
This paper presents a novel algorithm for speeding up codebook design in image vector quantization. It exploits the correlation among the pixels in an image block to reduce the cost of computing the squared Euclidean distortion measures, and it uses the similarity between the codevectors of consecutive codebooks during the iterative clustering process to reduce the number of codevectors that must be checked in each codebook search. Test results show that the proposed algorithm reduces execution time by almost 98% compared to the conventional Linde-Buzo-Gray (LBG) algorithm.
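A hedged sketch of one accelerated LBG iteration exploiting consecutive-codebook similarity: since codewords move little between iterations, each training vector first re-checks its previous winner, which gives a tight initial bound for a norm-based elimination test. The pixel-correlation shortcut for the distortion computation is not reproduced here.

```python
import numpy as np

def lbg_iteration(train, codebook, prev_assign):
    """One clustering pass; prev_assign holds last iteration's winners."""
    norms = np.linalg.norm(codebook, axis=1)
    assign = np.empty(len(train), dtype=int)
    for n, x in enumerate(train):
        xn = np.linalg.norm(x)
        best_i = prev_assign[n]                      # warm start
        best_d = np.sum((x - codebook[best_i]) ** 2)
        for i in range(len(codebook)):
            if (norms[i] - xn) ** 2 >= best_d:       # cannot be closer: skip
                continue
            d = np.sum((x - codebook[i]) ** 2)
            if d < best_d:
                best_i, best_d = i, d
        assign[n] = best_i
    # Update step: each codeword becomes the centroid of its cluster.
    new_cb = np.array([train[assign == i].mean(axis=0) if np.any(assign == i)
                       else codebook[i] for i in range(len(codebook))])
    return new_cb, assign
```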
Medical & Biological Engineering & Computing | 1991
Shen-Chuan Tai
An ECG sampled at a rate of 250 samples per second or more produces a large amount of redundant data that is difficult to store and transmit. In this paper, a real-time ECG data compressor, SLOPE, is presented. SLOPE treats adjacent samples as a vector; the vector is extended if the next sample falls within a fan spanned by the vector and a threshold angle, and is otherwise delimited as a linear segment. In this way SLOPE repeatedly delimits linear segments of different lengths and slopes, and Huffman codes for the parameters describing each linear segment are transmitted. SLOPEa, a slightly modified version of SLOPE, is used to compress ambulatory ECG data. All operations used by SLOPE and SLOPEa are simple integer operations, so both are real-time compressors. Experimental results show that SLOPE achieves an average of 192 bits per channel per second (bpcs) per ECG signal and SLOPEa an average of 148 bpcs.
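A minimal sketch of the fan test alone, assuming a fixed threshold angle (the value below is illustrative) and floating-point angles for clarity, even though SLOPE itself uses only integer operations; the per-segment Huffman coding stage is omitted.

```python
import numpy as np

def slope_segments(samples, theta=0.02):
    """Delimit linear segments: extend the current vector while each new
    sample stays inside the fan it spans with the threshold angle theta."""
    segs, start = [], 0
    while start < len(samples) - 1:
        end = start + 1
        # The initial vector's slope plus/minus theta defines the fan.
        base = np.arctan2(samples[end] - samples[start], end - start)
        lo, hi = base - theta, base + theta
        while end + 1 < len(samples):
            ang = np.arctan2(samples[end + 1] - samples[start],
                             end + 1 - start)
            if lo <= ang <= hi:
                end += 1                 # sample inside the fan: extend
            else:
                break                    # outside: delimit the segment
        segs.append((start, end))
        start = end
    return segs

sig = np.concatenate([np.linspace(0, 1, 50), np.ones(50)])  # ramp then flat
print(slope_segments(sig))              # roughly two segments
```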