Chee-Kheong Siew
Nanyang Technological University
Publication
Featured research published by Chee-Kheong Siew.
international symposium on neural networks | 2004
Guang-Bin Huang; Qin-Yu Zhu; Chee-Kheong Siew
The learning speed of feedforward neural networks is in general far slower than required, and this has been a major bottleneck in their applications for the past decades. Two key reasons may be: 1) slow gradient-based learning algorithms are extensively used to train neural networks, and 2) all the parameters of the networks are tuned iteratively by such learning algorithms. Unlike these traditional implementations, this paper proposes a new learning algorithm called the extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs), which randomly chooses the input weights and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide the best generalization performance at extremely fast learning speed. Experimental results on real-world benchmark function-approximation and classification problems, including large complex applications, show that the new algorithm can produce the best generalization performance in some cases and can learn much faster than traditional popular learning algorithms for feedforward neural networks.
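The two steps named in the abstract — random input weights, then analytically computed output weights — can be sketched as follows. This is a minimal illustration, not the paper's reference implementation; the sigmoid activation, the use of the Moore-Penrose pseudoinverse, and all function and variable names are our assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Sketch of ELM training: random hidden layer, analytic output weights."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights (never tuned)
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# usage: fit y = x^2 on [0, 1] in a single analytic step
X = np.linspace(0, 1, 50).reshape(-1, 1)
T = X ** 2
W, b, beta = elm_train(X, T, n_hidden=20)
pred = elm_predict(X, W, b, beta)
```

Because the only "training" is one least-squares solve, there is no iterative tuning at all, which is the source of the speed advantage the abstract claims.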
international conference on networking | 2011
Pratyush Manjul; Vimaladhithan Balasubramanian; Yunzhi Li; Yuan Shi; Yuqing Liu; Jing Xu; Qixin Xie; Jia Deng; Heng Li; Chee-Kheong Siew
Since the advent of ad-hoc networking, it has been viewed as a potential multi-application technology. It holds promise for a number of fields like military, medical, logistics and disaster recovery applications. Ad-hoc networks have been an object of interest and research not just because of the prospects they offer but also because of the numerous issues that they face. One such issue is the distribution of high bandwidth real-time data via multicasting. This paper presents a comparative study of multicasting of video and video-like data using two different ad-hoc routing protocols, viz. OLSR and PUMA. Our NS2 simulations show that OLSR produces higher throughput and lower latency.
international conference on signal processing | 2007
Chee-Kheong Siew; Tony Cui; Moshe Zukerman
This paper proposes a multicast service model and a comprehensive solution to the coexistence of guaranteed video multicast traffic and best-effort traffic for the next generation Internet. Our multicast service model specifies the services offered to these two traffic types. Our solution uses the flow-state-dependent dynamic priority scheduling (FDPS) algorithm for service curve assurance together with congestion control and connection admission control functions to guarantee crucial video multicast layers. Our solution adapts efficiently to congested links by identifying and discarding as many less important packets as necessary. The solution facilitates the delivery of best-effort traffic by reserving a pre-specified fraction of the link capacity for such traffic. We demonstrate, by extensive ns-2 simulations that the base video-layer is protected and scheduling delay is controlled. Acceptable performance for best-effort traffic is also achieved.
international conference on control and automation | 2003
Qin-Yu Zhu; Guang-Bin Huang; Chee-Kheong Siew
In some practical applications, the requirement on time complexity is more stringent than that on space complexity. However, current neural networks seem far from the standard of real-time applications. In a previous paper, Huang [1] proved by a novel constructive method that two-hidden-layer feedforward networks (TLFNs) with 2√((m+2)N) (≪ N) hidden neurons can learn any N distinct samples (x_i, t_i) with arbitrarily small error, where m is the required number of output neurons. On the theoretical basis of those results [1], this paper introduces an improved constructive method for TLFNs with real-time learning capacity. The results show that both the training and generalization errors of the new TLFN can reach arbitrarily small values if sufficiently distinctive training samples are provided. Additionally, experimental results compare the learning time with that of traditional gradient-descent-based learning methods such as the back-propagation (BP) algorithm. The learning algorithm for two-hidden-layer feedforward neural networks is able to learn any set of observations in just one short iteration (one instead of a large number of learning epochs) with acceptable learning and testing accuracy.
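The bound 2√((m+2)N) ≪ N is easy to check numerically; the sketch below (names ours) shows how few hidden neurons the constructive result requires relative to the sample count.

```python
import math

def tlfn_hidden_neurons(m, N):
    """Hidden-neuron count from the constructive TLFN bound: 2*sqrt((m+2)*N)."""
    return 2 * math.sqrt((m + 2) * N)

# e.g. m = 1 output neuron, N = 10000 training samples
n = tlfn_hidden_neurons(1, 10000)  # about 346, far fewer than N = 10000
```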
pacific rim conference on multimedia | 2003
Chee-Kheong Siew; Jin Sheng
Admission control is one of the key components of QoS (quality of service) provisioning. A new flow can be admitted only if its admission does not cause QoS violation of the existing flows. In the past, the stability inequality Σ_i ρ_i ≤ C_j, where ρ_i is the rate of flow i and C_j is the capacity of link j, was used to admit new connections without delay guarantees. Recently, the rate-variance method, which utilizes the properties of aggregated flows, was proposed to admit flows based on statistical guarantees. In this paper, we propose a new admission control mechanism for deterministic and statistical guarantees in a network with a simple core using FCFS service and intelligent ingress nodes. In this model, we reduce the admission control calculations to two simple delay computations: a shaper delay and a network delay. If the ratio l/ρ, where l is the packet size and ρ is the service rate, is kept constant, the end-to-end delay of each flow is tightly bounded. This is verified by additional computer simulations. The idea involves converting all bursty flows to pseudo-CBR flows before injecting the traffic into the core network. This approach reduces the complexity of the core and makes the network scalable.
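The stability inequality Σ_i ρ_i ≤ C_j and the constant l/ρ ratio can be sketched directly. This is a simplified illustration of the two quantities the abstract names, not the paper's actual admission mechanism; all function names and parameter values are our assumptions.

```python
def admit(flow_rate, existing_rates, link_capacity):
    """Admit a new flow only if the aggregate rate stays within link capacity."""
    return sum(existing_rates) + flow_rate <= link_capacity

def pseudo_cbr_delay(packet_size_bits, service_rate_bps):
    """The per-flow delay term l/rho that the model keeps constant."""
    return packet_size_bits / service_rate_bps

# usage on a hypothetical 10 Mb/s link carrying 3 and 4 Mb/s flows
ok = admit(2e6, [3e6, 4e6], 10e6)        # 9 Mb/s <= 10 Mb/s -> admitted
rejected = admit(4e6, [3e6, 4e6], 10e6)  # 11 Mb/s > 10 Mb/s -> refused
d = pseudo_cbr_delay(1500 * 8, 1e6)      # 1500-byte packet at 1 Mb/s
```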
pacific rim conference on multimedia | 2003
Lei Chen; Guang-Bin Huang; Chee-Kheong Siew
A specific two-hidden-layer feedforward network (TLFN) proposed by G.B. Huang (2003) is considered in this paper. A method is introduced to simplify the structure of the TLFN by introducing a new type of quantizer that merges the two previous neurons A^(p) and B^(p) into a single neuron. These new quantizers use a special type of function as the activation function, so that the new TLFNs with 2√((m+1)N) hidden neurons can learn N distinct samples (x_i, t_i) with negligibly small error, where m is the number of output neurons, unlike Huang's TLFNs, which require 2√((m+2)N) hidden neurons. Moreover, it is not necessary to estimate the quantizer value U defined in Huang's TLFNs, which is fixed in our new model. This significantly reduces the complexity and computation of the neural networks.
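The saving from replacing the (m+2) factor with (m+1) can be quantified with a short calculation (function names and the example sizes are ours):

```python
import math

def hidden_huang(m, N):
    """Original TLFN bound: 2*sqrt((m+2)*N) hidden neurons."""
    return 2 * math.sqrt((m + 2) * N)

def hidden_merged(m, N):
    """Merged-quantizer bound from this paper: 2*sqrt((m+1)*N)."""
    return 2 * math.sqrt((m + 1) * N)

# e.g. m = 10 output neurons, N = 5000 samples
saved = hidden_huang(10, 5000) - hidden_merged(10, 5000)  # neurons saved
```

The absolute saving grows with √N, so the simplification matters most for large training sets.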
ieee international conference on intelligent processing systems | 1997
Chee-Kheong Siew; Zhihua Jiao
A modified p-persistence CSMA/CD (MP-CSMA/CD) is proposed to maximize the throughput performance of a bus local area network. Of the two algorithms applied to maximize throughput, the artificial neural network (ANN) produces better performance than the gradient access algorithm. Owing to noise in the throughput measurements, drift in the probability of transmission, p, may invalidate the gradient algorithm's performance. We investigated the problems inherent in the gradient access algorithm which resulted in this drift in p. Simulation results show that the problem is solved by the ANN algorithm.
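Why tuning p matters can be seen in a deliberately simplified slotted model of p-persistent access (this is our illustration, not the paper's MP-CSMA/CD protocol): with n ready stations each transmitting with probability p, a slot succeeds only when exactly one station transmits, so the success probability peaks at an intermediate p and any drift away from that point costs throughput.

```python
def success_prob(n, p):
    """P(exactly one of n stations transmits) in a simplified slotted model."""
    return n * p * (1 - p) ** (n - 1)

# sweep p for n = 10 stations; the optimum sits near p = 1/n
best_p = max(range(1, 100), key=lambda k: success_prob(10, k / 100)) / 100
```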
pacific rim conference on multimedia | 2003
Xiaohua Tang; Junhua Tang; Guang-Bin Huang; Chee-Kheong Siew
IEE Proceedings - Communications | 2005
Chee-Kheong Siew; Gang Feng; F. Long; M.-H. Er
IEE Proceedings - Communications | 2006
Chee-Kheong Siew; Meng-Hwa Er