Kai-Yeung Siu
University of California, Irvine
Publications
Featured research published by Kai-Yeung Siu.
SIAM Journal on Discrete Mathematics | 1991
Kai-Yeung Siu; Jehoshua Bruck
Linear threshold elements (LTEs) are the basic processing elements in artificial neural networks. An LTE computes a function that is the sign of a weighted sum of the input variables. The weights are arbitrary integers; in fact, they can be exponentially large in the number of input variables. In practice, however, large weights are difficult to implement, so a natural question is whether a network of LTEs with large weights can be efficiently simulated by a network of LTEs with small weights. The following results are proved: (1) every LTE with large weights can be simulated by a depth-3, polynomial-size network of LTEs with small weights; and (2) every depth-d, polynomial-size network of LTEs with large weights can be simulated by a depth-(2d + 1), polynomial-size network of LTEs with small weights. To prove these results, tools from harmonic analysis of Boolean functions are used. The technique is quite general and provides insights into other problems. For e...
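As a concrete illustration of the model, here is a minimal sketch of a single LTE in Python, with a majority gate as the usage example; the function name and encoding are illustrative, not taken from the paper.

```python
# Minimal sketch of a linear threshold element (LTE): it outputs the sign
# (here encoded as 0/1) of a weighted sum of its Boolean inputs.

def lte(weights, threshold, x):
    """Return 1 if the weighted sum of the inputs meets the threshold, else 0."""
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s >= threshold else 0

# Example: a 3-input majority gate (all weights 1, threshold 2).
assert lte([1, 1, 1], 2, [1, 0, 1]) == 1
assert lte([1, 1, 1], 2, [1, 0, 0]) == 0
```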
IEEE Transactions on Computers | 1991
Kai-Yeung Siu; Vwani P. Roychowdhury
The tradeoffs between the depth (i.e., the time for parallel computation) and the size (i.e., the number of threshold gates) in neural networks are studied. The authors focus on the neural computation of symmetric Boolean functions and some arithmetic functions. It is shown that a significant reduction in size is possible for symmetric functions and some arithmetic functions, at the expense of a small constant increase in depth. In the process, several neural networks that have the minimum size among all known constructions are developed. The results on implementing symmetric functions can be used to improve results on arbitrary Boolean functions. In particular, it is shown that any Boolean function can be computed in a depth-3 neural network with O(2^{n/2}) threshold gates; it is also proved that at least Ω(2^{n/3}) threshold gates are required.
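The sketch below shows a standard depth-2 threshold construction for an arbitrary symmetric function (count-threshold gates feeding one output gate with telescoping weights). It illustrates the setting of the paper, not its specific size-optimal constructions; all names are illustrative.

```python
# Depth-2 threshold network for a symmetric Boolean function f, given by its
# value profile v[k] = f(any input with exactly k ones).

def symmetric_depth2(v, x):
    n = len(x)
    # Layer 1: threshold gates h_k = [number of ones >= k], for k = 1..n.
    h = [1 if sum(x) >= k else 0 for k in range(1, n + 1)]
    # Layer 2: one threshold gate with telescoping weights v[k] - v[k-1];
    # the weighted sum telescopes to exactly v[#ones], which is 0 or 1.
    s = v[0] + sum((v[k] - v[k - 1]) * h[k - 1] for k in range(1, n + 1))
    return 1 if s >= 1 else 0

# Example: parity of 4 bits, v[k] = k mod 2.
v = [k % 2 for k in range(5)]
assert symmetric_depth2(v, [1, 0, 1, 0]) == 0  # two ones -> parity 0
assert symmetric_depth2(v, [1, 1, 1, 0]) == 1  # three ones -> parity 1
```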
IEEE/ACM Transactions on Networking | 2001
Paolo Narvaez; Kai-Yeung Siu; Hong-Yi Tzeng
A key functionality in today's widely used interior gateway routing protocols, such as OSPF and IS-IS, is the computation of a shortest path tree (SPT). In many existing commercial routers, the SPT is recomputed from scratch following changes in the link states of the network. Since multiple SPTs may coexist in a network with a given set of link states, such recomputation of an entire SPT is not only inefficient but also causes frequent unnecessary changes in the topology of an existing SPT, creating routing instability. This paper presents a new dynamic SPT algorithm that makes use of the structure of the previously computed SPT. Our algorithm is derived by recasting the SPT problem as an optimization problem in a dual linear programming framework, which can also be interpreted using a ball-and-string model. In this model, the increase (or decrease) of an edge weight in the tree corresponds to the lengthening (or shortening) of a string. By stretching the strings until each node is attached to a tight string, the resulting topology of the model defines one (or multiple) SPT(s). By emulating the dynamics of the ball-and-string model, we can derive an efficient algorithm that propagates changes in distances to all affected nodes in a natural order and in the most economical way. Compared with existing results, our algorithm has the best known performance in terms of computational complexity as well as minimum changes made to the topology of an SPT. Rigorous proofs of correctness and simulation results illustrating the complexity of our algorithm are also presented.
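For context, here is a minimal from-scratch SPT computation (Dijkstra's algorithm), i.e., the baseline recomputation that the paper's dynamic algorithm avoids repeating after each link-state change. This is not the ball-and-string algorithm itself; the graph encoding and names are illustrative.

```python
import heapq

def shortest_path_tree(graph, root):
    """graph: {node: {neighbor: weight}}. Returns {node: parent} for one SPT."""
    dist = {root: 0}
    parent = {root: None}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (d + w, v))
    return parent

g = {"a": {"b": 1, "c": 4}, "b": {"c": 2}, "c": {}}
assert shortest_path_tree(g, "a") == {"a": None, "b": "a", "c": "b"}
```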
IEEE Transactions on Information Theory | 1993
Kai-Yeung Siu; Jehoshua Bruck; Thomas Hofmeister
An artificial neural network (ANN) is commonly modeled by a threshold circuit, a network of interconnected processing units called linear threshold gates. It is shown that ANNs can be much more powerful than traditional logic circuits, assuming that each threshold gate can be built with a cost comparable to that of AND/OR logic gates. In particular, the main results indicate that powering and division can be computed by polynomial-size ANNs of depth 4, and multiple product can be computed by polynomial-size ANNs of depth 5. Moreover, using the techniques developed, a previous result is improved by showing that the sorting of n n-bit numbers can be carried out in a depth-3, polynomial-size ANN. Furthermore, it is shown that the sorting network is optimal in depth.
Proceedings of the IEEE | 1990
Kai-Yeung Siu; Jehoshua Bruck
A neuron is modeled as a linear threshold gate, and the network architecture considered is the layered feedforward network. It is shown how common arithmetic functions such as multiplication and sorting can be efficiently computed in a shallow neural network. Some known results are improved by showing that the product of two n-bit numbers and the sorting of n n-bit numbers can be computed by a polynomial-size neural network using only four and five unit delays, respectively. Moreover, the weights of each threshold element in the neural networks require only O(log n)-bit (instead of n-bit) accuracy. These results can be extended to more complicated functions such as multiple products, division, rational functions, and approximation of analytic functions.
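To see why the weight accuracy matters, the sketch below compares two n-bit numbers with a single threshold gate whose weights are powers of two, i.e., exponentially large in n; the O(log n)-bit result above shows that such precision is not actually necessary. This is an illustrative example, not the paper's construction.

```python
# A single threshold gate computing [a >= b] for two n-bit numbers:
# it outputs the sign of sum_i 2^i * (a_i - b_i). The weights 2^i grow
# exponentially with n, which is exactly the weight-size issue addressed
# by the O(log n)-bit accuracy results.

def compare_gate(a_bits, b_bits):
    """a_bits, b_bits: bit lists, least significant bit first. Returns [a >= b]."""
    s = sum((2 ** i) * (a - b) for i, (a, b) in enumerate(zip(a_bits, b_bits)))
    return 1 if s >= 0 else 0

assert compare_gate([1, 0, 1], [1, 1, 0]) == 1  # 5 >= 3
assert compare_gate([0, 1, 0], [1, 0, 1]) == 0  # 2 <  5
```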
SIAM Journal on Discrete Mathematics | 1994
Kai-Yeung Siu; Vwani P. Roychowdhury
Let \widehat{LT}_d denote the class of functions that can be computed by depth-...
Archive | 1994
Vwani P. Roychowdhury; Kai-Yeung Siu; Alon Orlitsky
ACM Special Interest Group on Data Communication | 1995
Kai-Yeung Siu; Raj Jain
IEEE Transactions on Information Theory | 1994
Kai-Yeung Siu; Vwani P. Roychowdhury; Thomas Kailath
IEEE Transactions on Neural Networks | 1995
Vwani P. Roychowdhury; Kai-Yeung Siu