Ming-Chao Chiang
National Sun Yat-sen University
Publications
Featured research published by Ming-Chao Chiang.
Computer Vision and Pattern Recognition | 1997
Ming-Chao Chiang; Terrance E. Boult
Until now, super-resolution algorithms have presumed that all images in the sequence were taken under the same illumination conditions, an assumption that breaks down when lighting varies between frames. This paper introduces a new approach to super-resolution, based on edge models and a local blur estimate, which circumvents this difficulty. The paper presents the theory behind the new approach and experimental results.
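To make the local blur estimate concrete, here is a minimal 1-D sketch of one standard way to measure blur at a step edge, assuming the edge can be modeled as an ideal step convolved with a Gaussian (so the derivative of the intensity profile is approximately a Gaussian whose standard deviation is the blur); this particular estimator and the function name are illustrative, not necessarily the method used in the paper:

```python
import numpy as np

def local_blur_sigma(profile):
    """Estimate local blur from a 1-D intensity profile across a step
    edge. Under the step-convolved-with-Gaussian edge model, the
    gradient magnitude of the profile is roughly a Gaussian whose
    standard deviation equals the blur sigma, so we return the second
    moment of the gradient magnitude treated as a distribution."""
    g = np.abs(np.gradient(profile.astype(float)))
    w = g / g.sum()                        # normalize to a distribution
    x = np.arange(len(g))
    mu = (w * x).sum()                     # gradient centroid (edge location)
    return np.sqrt((w * (x - mu) ** 2).sum())
```

Applied to the same edge imaged under two blur levels, `local_blur_sigma` returns a larger value for the blurrier frame, which is the kind of per-edge signal a blur-aware super-resolution algorithm can exploit.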
Information Sciences | 2011
Ming-Chao Chiang; Chun-Wei Tsai; Chu-Sing Yang
This paper presents an efficient algorithm, called pattern reduction (PR), for reducing the computation time of k-means and k-means-based clustering algorithms. The proposed algorithm works by compressing and removing, at each iteration, patterns that are unlikely to change their membership thereafter. Not only is the proposed algorithm simple and easy to implement, but it can also be applied to many other iterative clustering algorithms, such as kernel-based and population-based clustering algorithms. Our experiments (from 2 to 1,000 dimensions and 150 to 10,000,000 patterns) indicate that, with a small loss of quality, the proposed algorithm can significantly reduce the computation time of all state-of-the-art clustering algorithms evaluated in this paper, especially for large and high-dimensional data sets.
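As a minimal sketch of the pattern-reduction idea applied to standard k-means, the following freezes a pattern once its cluster assignment has been stable for a few consecutive iterations and folds it into a per-cluster running sum, so later iterations no longer recompute its distances. The stability threshold and all names are illustrative; the paper's actual detection and compression operators may differ:

```python
import numpy as np

def kmeans_pr(X, k, stable_iters=3, max_iters=100, seed=0):
    """k-means with a pattern-reduction step: patterns whose assignment
    has not changed for `stable_iters` consecutive iterations are
    removed from the active set and folded into per-cluster sums that
    keep the centroid updates exact for the frozen patterns."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
    active = np.arange(len(X))              # indices still being reassigned
    streak = np.zeros(len(X), dtype=int)    # consecutive same-assignment count
    last = np.full(len(X), -1)              # last assignment of every pattern
    rep_sum = np.zeros((k, X.shape[1]))     # sum of frozen patterns per cluster
    rep_cnt = np.zeros(k)                   # count of frozen patterns per cluster

    for _ in range(max_iters):
        # assign only the active (non-frozen) patterns
        d = np.linalg.norm(X[active, None] - centroids[None], axis=2)
        lab = d.argmin(axis=1)
        streak[active] = np.where(lab == last[active], streak[active] + 1, 1)
        last[active] = lab
        # update centroids from active patterns plus frozen sums
        for j in range(k):
            pts = X[active[lab == j]]
            total = rep_cnt[j] + len(pts)
            if total:
                centroids[j] = (rep_sum[j] + pts.sum(axis=0)) / total
        # pattern reduction: freeze patterns with stable membership
        frozen = streak[active] >= stable_iters
        for j in range(k):
            sel = X[active[frozen & (lab == j)]]
            rep_sum[j] += sel.sum(axis=0)
            rep_cnt[j] += len(sel)
        active = active[~frozen]
        if len(active) == 0:
            break
    return centroids, last
```

Freezing is a one-way decision, which is exactly where the small loss of quality comes from: a frozen pattern that would have migrated to another cluster in a later iteration can no longer do so.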
Image and Vision Computing | 2000
Ming-Chao Chiang; Terrance E. Boult
This paper introduces a new algorithm for enhancing image resolution from an image sequence. The approach we propose herein uses the integrating resampler for warping. The method is a direct computation, which is fundamentally different from the iterative back-projection approaches proposed in previous work. This paper shows that image-warping techniques may have a strong impact on the quality of image resolution enhancement. By coupling the degradation model of the imaging system directly into the integrating resampler, we can better approximate the warping characteristics of real sensors, which also significantly improves the quality of super-resolution images. Examples of super-resolution results are given for gray-scale images. Evaluations are made visually, by comparing the resulting images with those produced by bilinear resampling and back-projection, and quantitatively, using OCR accuracy as the measure. The paper shows that even when the images are qualitatively similar, quantitative differences appear in machine processing.
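A minimal sketch of the direct (non-iterative) flavor of this computation follows, assuming known per-frame translations and using a simple splat-and-normalize accumulation onto the high-resolution grid in place of the paper's integrating resampler (which additionally integrates the sensor's degradation model over each warped pixel footprint); all names and the nearest-neighbor splat are illustrative simplifications:

```python
import numpy as np

def direct_superres(frames, shifts, scale):
    """Direct multi-frame resolution enhancement: map each low-res
    sample of each registered frame to its sub-pixel location on a
    high-res grid, accumulate, and normalize by the hit counts.
    `shifts` holds the known (dy, dx) offset of each frame in low-res
    pixels; `scale` is the integer magnification factor."""
    h, w = frames[0].shape
    H, W = h * scale, w * scale
    acc = np.zeros((H, W))
    hits = np.zeros((H, W))
    ys, xs = np.mgrid[0:h, 0:w]
    for img, (dy, dx) in zip(frames, shifts):
        Y = np.clip(np.rint((ys + dy) * scale).astype(int), 0, H - 1)
        X = np.clip(np.rint((xs + dx) * scale).astype(int), 0, W - 1)
        np.add.at(acc, (Y, X), img)      # splat each sample onto the grid
        np.add.at(hits, (Y, X), 1.0)
    # average where samples landed; untouched cells remain zero
    return np.divide(acc, hits, out=np.zeros_like(acc), where=hits > 0)
```

In contrast to iterative back-projection, there is no repeated simulate-compare-correct loop here: the output is produced in a single forward pass, which is what makes the direct formulation attractive when the warp and degradation models are folded into the resampler itself.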
IEEE International Conference on Cloud Computing Technology and Science | 2014
Chun-Wei Tsai; Wei-Cheng Huang; Meng-Hsiu Chiang; Ming-Chao Chiang; Chu-Sing Yang
Rule-based scheduling algorithms have been widely used on many cloud computing systems because they are simple and easy to implement. However, there is plenty of room to improve the performance of these algorithms, especially by using heuristic scheduling. As such, this paper presents a novel heuristic scheduling algorithm, called hyper-heuristic scheduling algorithm (HHSA), to find better scheduling solutions for cloud computing systems. The diversity detection and improvement detection operators are employed by the proposed algorithm to dynamically determine which low-level heuristic is to be used in finding better candidate solutions. To evaluate the performance of the proposed method, this study compares the proposed method with several state-of-the-art scheduling algorithms, by having all of them implemented on CloudSim (a simulator) and Hadoop (a real system). The results show that HHSA can significantly reduce the makespan of task scheduling compared with the other scheduling algorithms evaluated in this paper, on both CloudSim and Hadoop.
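A minimal sketch of the hyper-heuristic control loop follows, with simple stand-ins for the two detection operators: improvement detection is a stall counter on the best makespan, and diversification is a perturbation of the incumbent when the current low-level heuristic stops paying off. The heuristic set, thresholds, and perturbation are placeholders, not the operators defined in the paper:

```python
import random

def hyper_heuristic_schedule(initial, low_level_heuristics, makespan,
                             perturb, iters=1000, stall_limit=20):
    """Hyper-heuristic loop: keep applying the currently selected
    low-level heuristic while it still improves the best-known
    makespan; once improvement stalls, perturb the incumbent and
    switch to another randomly chosen low-level heuristic."""
    best = current = initial
    heuristic = random.choice(low_level_heuristics)
    stall = 0
    for _ in range(iters):
        candidate = heuristic(current)
        if makespan(candidate) < makespan(best):
            best, stall = candidate, 0          # improvement detected
        else:
            stall += 1                          # no improvement this round
        current = candidate
        if stall >= stall_limit:                # improvement has stalled:
            heuristic = random.choice(low_level_heuristics)
            current = perturb(best)             # re-diversify the search
            stall = 0
    return best
```

Here `low_level_heuristics` would be a list of functions, each mapping a schedule to a neighboring schedule (e.g., simulated-annealing-style or genetic-style moves), and `makespan` evaluates a schedule on the target platform or simulator.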
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2011
Ming-Chao Chiang; Tse-Chen Yeh; Guo-Fu Tseng
In this paper, we present a fast cycle-accurate instruction set simulator (CA-ISS) for system-on-chip development based on QEMU and SystemC. Even though most state-of-the-art commercial tools try hard to provide all levels of detail to satisfy the different requirements of the software designer, the hardware designer, and even the system architect, hardware/software co-simulation is dramatically slow when the hardware models are co-simulated at the register-transfer level (RTL) with a full-fledged operating system (OS). Our experimental results show that the combination of QEMU and SystemC makes co-simulation at the CA level much faster than conventional RTL simulation, even with a full-fledged operating system up and running. Furthermore, the statistics indicate that with every instruction execution and every memory access since power-on traced at the CA level, it takes 28m15.804s on average to boot up a full-fledged Linux kernel on a personal computer. Compared with the kernel boot times reported by Xilinx and SiCortex, the proposed CA-ISS is about 6.09 times faster than the “SystemC without trace” configuration of Xilinx and about 30.32 times faster than the “SystemC models converted from RTL” configuration of SiCortex. The main contributions of this paper are threefold: 1) a hardware/software co-simulation environment capable of running a full-fledged OS at the early stage of the electronic system level design flow at an acceptable simulation speed; 2) a virtual platform constructed using the proposed CA-ISS as the processor model, which can be used to estimate the performance of a target system from a system perspective, something previous works such as QEMU-SystemC do not provide; and 3) modeling capability from the transaction level down to the CA level, or the other way around.
Workshop on Applications of Computer Vision | 1996
Ming-Chao Chiang; Terrance E. Boult
This paper introduces a new algorithm for enhancing image resolution from an image sequence. The approach we propose herein uses the integrating resampler proposed by M. Chiang and T. Boult (1996) as the underlying resampling algorithm. Moreover, it is a direct method, which is fundamentally different from the iterative back-projection approaches proposed in previous work. We show that image warping techniques may have a strong impact on the quality of image resolution enhancement. By coupling the degradation model of the imaging system directly into the integrating resampler, we can better approximate the warping characteristics of real sensors, which also markedly improves the quality of super-resolution images. Examples of super-resolution results are given for gray-scale images. Evaluations are made by comparing the resulting images with those produced using bilinear resampling and back-projection. Results from our experiments show that the integrating resampler outperforms traditional bilinear resampling.
Mediterranean Electrotechnical Conference | 2010
Tse-Chen Yeh; Guo-Fu Tseng; Ming-Chao Chiang
This paper presents a fast cycle-accurate instruction set simulator (CA-ISS) based on QEMU and SystemC. The CA-ISS can be used for design space exploration and as the processor core for virtual platform construction at the cycle-accurate level. Even though most state-of-the-art commercial tools try to provide all levels of detail to satisfy the different requirements of the software designer, the hardware designer, or even the system architect, hardware/software co-simulation is dramatically slow when the hardware models are co-simulated at the register-transfer level with a full-fledged operating system. In this paper, we show that the combination of QEMU and SystemC makes co-simulation at the cycle-accurate level extremely fast, even with a full-fledged operating system up and running. Our experimental results indicate that with every instruction execution and every memory access since power-on traced at the cycle-accurate level, it takes less than 17 minutes on average to boot up a full-fledged Linux kernel, even on a laptop.
Applied Soft Computing | 2013
Chun-Wei Tsai; Shih-Pang Tseng; Chu-Sing Yang; Ming-Chao Chiang
This paper presents an effective and efficient method for speeding up ant colony optimization (ACO) in solving the codebook generation problem. The proposed method is inspired by the fact that many computations during the convergence process of ant-based algorithms are essentially redundant and can thus be eliminated to boost convergence speed, especially for large and complex problems. To evaluate the performance of the proposed method, we compare it with several state-of-the-art metaheuristic algorithms. Our simulation results indicate that the proposed method can significantly reduce the computation time of the ACO-based algorithms evaluated in this paper while providing results that match or outperform those that ACO by itself can provide.
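One plausible reading of the eliminated redundancy, mirroring the group's pattern-reduction strategy, is sketched below: during codebook refinement, a training vector whose nearest codeword has been stable for several passes is skipped, so its distance computations (the dominant cost) are not repeated. The freezing rule and all names are illustrative, not the paper's operators:

```python
import numpy as np

def assign_with_reduction(X, codebook, last, streak, stable_iters=3):
    """One assignment pass of codebook generation in which training
    vectors whose nearest-codeword assignment has been stable for
    `stable_iters` consecutive passes reuse their old assignment,
    skipping the redundant distance computations."""
    labels = last.copy()
    active = streak < stable_iters          # vectors still worth re-checking
    if active.any():
        d = np.linalg.norm(X[active, None] - codebook[None], axis=2)
        labels[active] = d.argmin(axis=1)
    streak = np.where(labels == last, streak + 1, 0)
    return labels, streak
```

Each ant's solution-refinement step would call this in place of a full nearest-codeword search; as the colony converges and most assignments stop changing, the active set shrinks and the per-iteration cost drops accordingly.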
Soft Computing | 2015
Chun-Wei Tsai; Ko-Wei Huang; Chu-Sing Yang; Ming-Chao Chiang
This paper presents a high-performance method for reducing the time complexity of particle swarm optimization (PSO) and its variants in solving the partitional clustering problem. The proposed method works by adding two operators to PSO-based algorithms. The pattern reduction operator aims to reduce the computation time by compressing, at each iteration, patterns that are unlikely to change the clusters to which they belong thereafter, while the multistart operator aims to improve the quality of the clustering result by enforcing diversity in the population to prevent the proposed method from getting stuck in local optima. To evaluate the performance of the proposed method, we compare it with several state-of-the-art PSO-based methods in solving data clustering, image clustering, and codebook generation problems. Our simulation results indicate that not only can the proposed method significantly reduce the computation time of PSO-based algorithms, but it can also provide a clustering result that matches or outperforms the result PSO-based algorithms by themselves can provide.
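The multistart operator is easy to sketch; the following minimal PSO for partitional clustering (each particle encodes k centroids) reinitializes the worst half of the swarm whenever diversity collapses, while pattern reduction is omitted here since it works as in the k-means sketch earlier. The diversity test, thresholds, and parameter values are illustrative, not the paper's:

```python
import numpy as np

def pso_cluster(X, k, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
                div_eps=1e-3, seed=0):
    """PSO for partitional clustering with a multistart operator:
    when swarm diversity collapses, the worst half of the particles
    are reinitialized to keep the search from stalling in a local
    optimum. Fitness is the within-cluster sum of squared errors."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pos = rng.uniform(X.min(0), X.max(0), (n_particles, k, d))
    vel = np.zeros_like(pos)

    def sse(c):  # within-cluster sum of squared errors for centroids c
        dist = np.linalg.norm(X[:, None] - c[None], axis=2)
        return (dist.min(axis=1) ** 2).sum()

    pbest = pos.copy()
    pbest_f = np.array([sse(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([sse(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
        # multistart operator: if the swarm has collapsed, reseed the
        # worst half of the particles to restore diversity
        if pos.std(axis=0).mean() < div_eps:
            worst = np.argsort(f)[n_particles // 2:]
            pos[worst] = rng.uniform(X.min(0), X.max(0),
                                     (len(worst), k, d))
            vel[worst] = 0
    return gbest
```

Adding pattern reduction on top of this would shrink the set of patterns over which `sse` is computed as assignments stabilize, which is where the reported reduction in computation time comes from.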
Systems, Man and Cybernetics | 2007
Chun-Wei Tsai; Chu-Sing Yang; Ming-Chao Chiang
In this paper, we present an efficient algorithm, called the pattern reduction (PR) algorithm, to reduce the time required for data clustering with iterative clustering algorithms. Conceptually similar to a lossy data compression scheme, the algorithm removes, at each iteration, data patterns that are close to the centroid of a cluster or have remained in the same cluster for a certain number of consecutive iterations, and are thus unlikely to move from one cluster to another at later iterations; the removed patterns are replaced by a single new pattern computed to represent them. Our simulation results (from 2 to 1,000 dimensions and 150 to 6,000,000 patterns) indicate that the proposed algorithm can reduce the computation time of k-means, the genetic k-means algorithm (GKA), and k-means with genetic algorithm (KGA) by 10% up to about 80%, and that for high-dimensional data sets, it can reduce the computation time by more than 70%.
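The two removal tests and the representative-pattern step translate directly into code; a minimal sketch follows, complementing the earlier k-means sketch (which used only the stability test). The distance threshold and streak length are illustrative:

```python
import numpy as np

def reduce_patterns(X, labels, streak, centroids,
                    dist_frac=0.1, stable_iters=3):
    """Return a removal mask and compressed representatives, per the
    two tests above: (1) the pattern is close to its cluster centroid,
    or (2) it has kept the same cluster for `stable_iters` consecutive
    iterations. Removed patterns in each cluster are compressed into a
    single representative (their mean) weighted by their count."""
    d_own = np.linalg.norm(X - centroids[labels], axis=1)
    close = d_own < dist_frac * d_own.mean()   # test 1: near its centroid
    stable = streak >= stable_iters            # test 2: stable membership
    remove = close | stable
    reps = []                                  # (representative, weight) pairs
    for j in range(len(centroids)):
        sel = X[remove & (labels == j)]
        if len(sel):
            reps.append((sel.mean(axis=0), len(sel)))
    return remove, reps
```

In subsequent iterations, a centroid update treats each representative as `weight` copies of its pattern, so the result is identical unless one of the removed patterns would later have migrated to another cluster.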