Sy-Yen Kuo
National Taiwan University
Publications
Featured research published by Sy-Yen Kuo.
International World Wide Web Conference | 2004
Yao-Wen Huang; Fang Yu; Christian Hang; Chung-Hung Tsai; D. T. Lee; Sy-Yen Kuo
Security remains a major roadblock to universal acceptance of the Web for many kinds of transactions, especially since the recent sharp increase in remotely exploitable vulnerabilities has been attributed to Web application bugs. Many verification tools are discovering previously unknown vulnerabilities in legacy C programs, raising hopes that the same success can be achieved with Web applications. In this paper, we describe a sound and holistic approach to ensuring Web application security. Viewing Web application vulnerabilities as a secure information flow problem, we created a lattice-based static analysis algorithm derived from type systems and typestate, and addressed its soundness. During the analysis, sections of code considered vulnerable are instrumented with runtime guards, thus securing Web applications in the absence of user intervention. With sufficient annotations, runtime overhead can be reduced to zero. We also created a tool named WebSSARI (Web application Security by Static Analysis and Runtime Inspection) to test our algorithm, and used it to verify 230 open-source Web application projects on SourceForge.net, which were selected to represent projects of different maturity, popularity, and scale. Of these, 69 contained vulnerabilities. After notifying the developers, 38 acknowledged our findings and stated their plans to provide patches. Our statistics also show that static analysis reduced potential runtime overhead by 98.4%.
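The core of the approach is an information-flow analysis over a taint lattice, with runtime guards inserted only where the analysis cannot prove safety. The sketch below is a minimal illustration of that idea, not the WebSSARI implementation: the toy statement format, the two-level lattice, and the guard-insertion rule are assumptions made for the example.

```python
# Minimal sketch of lattice-based taint tracking with runtime-guard insertion
# (illustrative only; not the authors' WebSSARI implementation).
from dataclasses import dataclass

TAINTED, UNTAINTED = "tainted", "untainted"

def join(a, b):
    """Least upper bound on the lattice untainted < tainted."""
    return TAINTED if TAINTED in (a, b) else UNTAINTED

@dataclass
class Stmt:
    dst: str                  # variable being assigned
    srcs: list                # variables read
    is_source: bool = False   # e.g. reads user input
    is_sink: bool = False     # e.g. SQL query / echo

def analyze(program):
    env, guarded = {}, []
    for i, s in enumerate(program):
        taint = TAINTED if s.is_source else UNTAINTED
        for v in s.srcs:
            taint = join(taint, env.get(v, UNTAINTED))
        env[s.dst] = taint
        if s.is_sink and taint == TAINTED:
            guarded.append(i)  # instrument this statement with a runtime guard
    return guarded

prog = [
    Stmt("name", [], is_source=True),    # $name = $_GET['name']
    Stmt("msg", ["name"]),               # $msg = "Hello " . $name
    Stmt("out", ["msg"], is_sink=True),  # echo $msg  -> needs a guard
]
print(analyze(prog))                     # -> [2]
```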
IEEE Design & Test of Computers | 1987
Sy-Yen Kuo; W. Kent Fuchs
Yield degradation from physical failures in large memories and processor arrays is of significant concern to semiconductor manufacturers. One method of increasing the yield for iterated arrays of memory cells or processing elements is to incorporate spare rows and columns in the die or wafer. These spare rows and columns can then be programmed into the array. The authors discuss the use of CAD approaches to reconfigure such arrays. Optimal reconfiguration is shown to be NP-complete. The authors present two algorithms for spare allocation that are based on graph-theoretic analysis. The first uses a branch-and-bound approach with early screening based on bipartite graph matching. The second is an efficient polynomial-time approximation algorithm. In contrast to existing greedy and exhaustive search algorithms, these algorithms provide highly efficient and flexible reconfiguration analysis.
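As a rough illustration of the spare-allocation problem (not the paper's branch-and-bound or matching-based algorithms), the following greedy sketch repairs a defect map by repeatedly spending a spare row or column on the line covering the most remaining faulty cells; the defect map and spare budgets are made-up example values.

```python
# Greedy spare-row/column allocation sketch (heuristic stand-in only; the paper
# uses branch-and-bound with bipartite matching and an approximation algorithm).
def greedy_repair(faults, spare_rows, spare_cols):
    faults = set(faults)                 # {(row, col), ...} faulty cells
    used_rows, used_cols = [], []
    while faults:
        rows, cols = {}, {}
        for r, c in faults:
            rows[r] = rows.get(r, 0) + 1
            cols[c] = cols.get(c, 0) + 1
        best_r = max(rows, key=rows.get) if len(used_rows) < spare_rows else None
        best_c = max(cols, key=cols.get) if len(used_cols) < spare_cols else None
        if best_r is None and best_c is None:
            return None                  # out of spares: not repairable this way
        # pick whichever repair covers more remaining faults
        if best_c is None or (best_r is not None and rows[best_r] >= cols[best_c]):
            used_rows.append(best_r)
            faults = {(r, c) for r, c in faults if r != best_r}
        else:
            used_cols.append(best_c)
            faults = {(r, c) for r, c in faults if c != best_c}
    return used_rows, used_cols

# faulty cells in a memory array, with 1 spare row and 2 spare columns available
print(greedy_repair({(0, 1), (0, 3), (2, 1), (4, 3)}, 1, 2))
```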
IEEE Transactions on Software Engineering | 2003
Chin-Yu Huang; Michael R. Lyu; Sy-Yen Kuo
In this paper, we describe how several existing software reliability growth models based on nonhomogeneous Poisson processes (NHPPs) can be comprehensively derived by applying the concept of the weighted arithmetic, weighted geometric, or weighted harmonic mean. Furthermore, based on these three weighted means, we propose a more general NHPP model from the quasi-arithmetic mean viewpoint. In addition to the above three means, we formulate a more general transformation that includes a parametric family of power transformations. Under this general framework, we verify the existing NHPP models and derive several new NHPP models. We show that these approaches cover a number of well-known models under different conditions.
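For context, the block below restates the standard NHPP ingredients and the quasi-arithmetic mean that unifies the three weighted means; the notation is generic and is not necessarily the paper's exact derivation.

```latex
% Generic NHPP and mean-unification forms (standard notation; not necessarily
% the paper's exact derivation). An NHPP model is fully specified by its mean
% value function m(t); the quasi-arithmetic mean below unifies the three
% weighted means, and f(x) = x^p gives a power-transformation family.
\[
  \Pr\{N(t)=k\} \;=\; \frac{[m(t)]^{k}}{k!}\, e^{-m(t)}, \qquad
  \lambda(t) \;=\; \frac{dm(t)}{dt}
\]
\[
  M_{f}(x_{1},\dots,x_{n}) \;=\; f^{-1}\!\Big(\textstyle\sum_{i=1}^{n} w_{i}\, f(x_{i})\Big),
  \qquad \textstyle\sum_{i} w_{i} = 1
\]
% f(x) = x      -> weighted arithmetic mean
% f(x) = ln x   -> weighted geometric mean
% f(x) = 1/x    -> weighted harmonic mean
% f(x) = x^p    -> power family (p -> 0 recovers the geometric mean)
```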
IEEE Journal of Solid-state Circuits | 2007
Hong-Wei Huang; Ke-Horng Chen; Sy-Yen Kuo
This paper proposes a temperature-independent load sensor (LS), an optimum width controller (OWC), an optimum dead-time controller (ODC), and tri-mode operation to achieve high efficiency over an ultra-wide load range. Higher power efficiency and a wider load current range require rethinking the control method for DC-DC converters. A highly efficient tri-mode DC-DC converter is therefore presented for system-on-chip (SoC) applications; it switches to sleep mode at very light load and to high-speed mode at heavy load. Efficiency is further improved by inserting the proposed dithering skip modulation (DSM) between conventional pulse-width modulation (PWM) and pulse-frequency modulation (PFM). In other words, the DSM operation compensates for the efficiency drop caused by the transition from PWM to PFM. Importantly, DSM mode dynamically skips a number of gate-driving pulses that is inversely proportional to the load current. In short, the novel load sensor automatically selects the optimum modulation method and power MOSFET width to achieve high efficiency over a wide load range. Moreover, optimum power MOSFET turn-on and turn-off delays in the synchronous rectifier and reduced ground bounce, provided by the current-mode dead-time controller, save considerable switching loss. Experimental results show that tri-mode operation achieves efficiency of about 90% over a wide load current range from 3 to 500 mA. Owing to the effective mitigation of switching loss by the optimum power MOSFET width and the reduction of conduction loss by the optimum dead times, the width and dead-time controllers achieve about 95% efficiency at heavy load and maintain high efficiency down to a very light load current of about 0.1 mA.
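The mode selection can be pictured as a simple threshold rule on the sensed load current. The sketch below is purely illustrative: the two current thresholds and the pulse-skipping formula are assumptions chosen for the example, not the chip's measured operating points.

```python
# Illustrative tri-mode selection sketch (assumed thresholds, not measured
# operating points): PFM/sleep at very light load, DSM at moderate load with
# pulse skipping roughly inverse to the load current, PWM at heavy load.
PFM_LIMIT_MA = 1      # assumed boundary between sleep/PFM and DSM operation
PWM_LIMIT_MA = 100    # assumed boundary between DSM and full PWM operation

def select_mode(load_ma):
    if load_ma < PFM_LIMIT_MA:
        return "PFM (sleep)", None
    if load_ma < PWM_LIMIT_MA:
        # skip gate-drive pulses roughly in inverse proportion to load current
        skipped = max(1, round(PWM_LIMIT_MA / load_ma) - 1)
        return "DSM", skipped
    return "PWM", None

for i_load in (0.1, 3, 20, 300):
    print(i_load, "mA ->", select_mode(i_load))
```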
IEEE Transactions on Vehicular Technology | 2012
Yu-Shan Liang; Wei-Ho Chung; Guo-Kai Ni; Ing-Yi Chen; Hongke Zhang; Sy-Yen Kuo
Interference control and quality-of-service (QoS) awareness are the major challenges for resource management in orthogonal frequency-division multiple access femtocell networks. This paper investigates a self-organization strategy for physical resource block (PRB) allocation with QoS constraints to avoid co-channel and co-tier interference. Femtocell self-organization, comprising self-configuration and self-optimization, is proposed to manage large femtocell networks. We formulate the optimization problem for PRB assignment such that multiple QoS classes for different services can be supported and interference between femtocells can be completely avoided. The proposed formulation pursues the maximization of PRB efficiency. A greedy algorithm is developed to solve the resource allocation formulation. In the simulations, the proposed approach is observed to increase system throughput by over 13% without femtocell interference. Simulations also demonstrate that the rejection ratios of all QoS classes are low, mostly below 10%. Moreover, the proposed approach improves PRB efficiency by over 82% in the low-loading scenario and by 13% in the high-loading scenario.
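A greedy PRB assignment in this spirit can be sketched as follows; the demand figures, interference graph, and the cells-sorted-by-demand heuristic are illustrative assumptions, not the paper's exact formulation or QoS classes.

```python
# Greedy interference-free PRB assignment sketch (illustrative stand-in for the
# paper's QoS-aware formulation): a PRB may be reused only by femtocells that
# do not interfere with each other; unsatisfied cells are rejected.
def assign_prbs(demands, interferes, num_prbs):
    """demands: {cell: PRBs needed}; interferes: {cell: set of interfering cells}."""
    usage = {prb: set() for prb in range(num_prbs)}     # cells holding each PRB
    granted = {cell: [] for cell in demands}
    rejected = []
    for cell, need in sorted(demands.items(), key=lambda kv: -kv[1]):
        for prb in range(num_prbs):
            if len(granted[cell]) == need:
                break
            if usage[prb] & interferes.get(cell, set()):
                continue                                # would collide with a neighbor
            usage[prb].add(cell)
            granted[cell].append(prb)
        if len(granted[cell]) < need:
            rejected.append(cell)                       # demand cannot be met
    return granted, rejected

demands = {"F1": 2, "F2": 2, "F3": 1}
interferes = {"F1": {"F2"}, "F2": {"F1", "F3"}, "F3": {"F2"}}
print(assign_prbs(demands, interferes, 3))
```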
IEEE Transactions on Reliability | 2002
Chin-Yu Huang; Sy-Yen Kuo
This paper investigates an SRGM (software reliability growth model) based on the NHPP (nonhomogeneous Poisson process) which incorporates a logistic testing-effort function. SRGMs proposed in the literature consider the amount of testing effort spent on software testing, which can be depicted as an exponential curve, a Rayleigh curve, or a Weibull curve. However, it might not be appropriate to represent the consumption curve for testing effort by one of those curves in some software development environments. Therefore, this paper shows that a logistic testing-effort function can be expressed as a software-development/test-effort curve and that it gives a good predictive capability based on real failure data. Parameters are estimated, and experiments are performed on actual test/debug data sets. Results from applications to a real data set are analyzed and compared with other existing models to show that the proposed model gives better predictions. In addition, an optimal software release policy for this model, based on cost-reliability criteria, is proposed.
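For reference, the block below gives the standard logistic testing-effort function and the exponential-type NHPP mean value function built on it; this is the common parameterization in this line of work and may differ slightly from the paper's.

```latex
% Standard logistic testing-effort function and the exponential-type NHPP mean
% value function built on it (typical forms; the paper's exact parameterization
% may differ). N: total testing effort eventually consumed, alpha: consumption
% rate, A: shape constant, a: expected initial fault content, r: fault
% detection rate per unit testing effort.
\[
  W(t) \;=\; \frac{N}{1 + A\, e^{-\alpha t}}, \qquad
  w(t) \;=\; \frac{dW(t)}{dt} \;=\; \frac{N A \alpha\, e^{-\alpha t}}{\bigl(1 + A e^{-\alpha t}\bigr)^{2}}
\]
\[
  m(t) \;=\; a\Bigl[\,1 - e^{-r\,(W(t) - W(0))}\Bigr]
\]
```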
IEEE Transactions on Reliability | 1999
Sy-Yen Kuo; Shyue-Kung Lu; Fu-Min Yeh
For calculating terminal-pair reliability, most published algorithms are based on the sum of disjoint products. However, these tree-based partitions cannot avoid redundant computation arising from isomorphic sub-problems. To overcome these problems, an efficient methodology is presented for evaluating terminal-pair reliability based on edge expansion diagrams using an OBDD (ordered binary decision diagram). First, the success path function of a given network is constructed as an OBDD by traversing the network with diagram-based edge expansion. The reliability of the network is then obtained by evaluating this OBDD recursively. The effectiveness of this approach is demonstrated by experiments on benchmarks collected in previous works, including larger networks (from 4 to 2^99 paths). A dramatic improvement, demonstrated by the experimental results for a 2-by-n lattice network, is that the number of OBDD nodes is only linearly proportional to the number of stages, which is much better than previous algorithms based on the sum of disjoint products, whose complexity is exponential. The CPU time for calculating the reliability of a 100-stage lattice network is only about 2.5 seconds, with 595 nodes generated, on a SPARC 20 workstation with 128 MBytes of memory. Thus, with this approach, the terminal-pair reliability of large networks can be evaluated far more efficiently than previously thought possible.
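Once the success function is available as an OBDD, reliability follows from Shannon expansion at each node, evaluated bottom-up with memoization so the cost is linear in the number of OBDD nodes. The sketch below illustrates only this evaluation step on a hand-built two-node OBDD for two parallel s-t edges; the BDD encoding and edge probabilities are assumptions for the example, and the paper's edge-expansion construction is not shown.

```python
# Reliability evaluation on a toy OBDD via Shannon expansion:
#   R(v) = (1 - p_e) * R(low) + p_e * R(high)
# Memoization makes the cost linear in the number of OBDD nodes.
from functools import lru_cache

# OBDD for the success function of two parallel s-t edges: e1 OR e2.
# Each node: (edge tested, child if edge failed, child if edge works).
BDD = {
    "n1": ("e1", "n2", True),   # if e1 works, s and t are connected
    "n2": ("e2", False, True),  # otherwise success depends on e2
}
P = {"e1": 0.9, "e2": 0.8}      # edge working probabilities (example values)

@lru_cache(maxsize=None)
def reliability(node):
    if node is True:
        return 1.0
    if node is False:
        return 0.0
    edge, low, high = BDD[node]
    return (1 - P[edge]) * reliability(low) + P[edge] * reliability(high)

print(reliability("n1"))        # 0.9 + 0.1 * 0.8 = 0.98
```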
IEEE Transactions on Reliability | 2007
Chin-Yu Huang; Sy-Yen Kuo; Michael R. Lyu
Over the last several decades, many software reliability growth models (SRGMs) have been developed to help engineers and managers track and measure the growth of reliability as software is improved. However, some research indicates that the delayed S-shaped model may not fit software failure data well when the testing effort spent on fault detection is not constant. Thus, in this paper, we first review the logistic testing-effort function, which can be used to describe the amount of testing effort spent on software testing. We describe how to incorporate the logistic testing-effort function into both exponential-type and S-shaped software reliability models. The proposed models are also discussed under both ideal and imperfect debugging conditions. Results from applying the proposed models to two real data sets are discussed and compared with other traditional SRGMs to show that the proposed models give better predictions and that the logistic testing-effort function is suitable for incorporating directly into both exponential-type and S-shaped software reliability models.
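Plausible forms of the two resulting model families, with calendar time replaced by cumulative testing effort W*(t) = W(t) - W(0), are sketched below; these follow the standard structure of such models and may not match the paper's exact expressions.

```latex
% Plausible testing-effort-dependent mean value functions for the two families
% named above (standard structure only; the paper's expressions may differ).
\[
  \text{exponential-type:}\qquad
  m(t) \;=\; a\Bigl[\,1 - e^{-r\,W^{*}(t)}\Bigr],
  \qquad W^{*}(t) = W(t) - W(0)
\]
\[
  \text{delayed S-shaped:}\qquad
  m(t) \;=\; a\Bigl[\,1 - \bigl(1 + r\,W^{*}(t)\bigr)\, e^{-r\,W^{*}(t)}\Bigr]
\]
```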
IEEE Transactions on Reliability | 2001
Sy-Yen Kuo; Chin-Yu Huang; Michael R. Lyu
This paper proposes a new scheme for constructing software reliability growth models (SRGMs) based on a nonhomogeneous Poisson process (NHPP). The main focus is to provide an efficient parametric decomposition method for software reliability modeling, which considers both testing effort and the fault detection rate (FDR). In general, the software fault detection/removal mechanisms depend on previously detected/removed faults and on how testing efforts are used. From practical field studies, it is likely that we can estimate the testing-effort consumption pattern and predict the trend of the FDR. A set of time-variable, testing-effort-based FDR models was developed that has the inherent flexibility of capturing a wide range of possible fault detection trends: increasing, decreasing, and constant. This scheme has a flexible structure and can model a wide spectrum of software development environments, considering various testing efforts. The paper describes the FDR, which can be obtained from historical records of previous releases or other similar software projects, and incorporates the related testing activities into this new modeling approach. The applicability of our model and the related parametric decomposition methods are demonstrated through several real data sets from various software projects. The evaluation results show that the proposed framework for incorporating testing effort and the FDR into SRGMs has fairly accurate prediction capability and depicts the real-life situation more faithfully. This technique can be applied to a wide range of software systems.
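In generic notation, a scheme of this kind can be summarized by a single differential equation in which faults are detected in proportion to the current testing effort w(t), a time-varying FDR r(t), and the faults still remaining; this is a standard formulation and may be written differently in the paper.

```latex
% Generic NHPP with testing effort w(t) and time-varying fault detection rate
% r(t) (standard formulation; the paper's parametric decomposition may differ).
% Constant, increasing, or decreasing r(t) yields the three fault-detection
% trends listed above.
\[
  \frac{dm(t)}{dt} \;=\; r(t)\, w(t)\,\bigl[a - m(t)\bigr], \qquad m(0) = 0
\]
\[
  m(t) \;=\; a\left[\,1 - \exp\!\Bigl(-\!\int_{0}^{t} r(\tau)\, w(\tau)\, d\tau\Bigr)\right]
\]
```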
IEEE Transactions on Reliability | 2002
Fu-Min Yeh; Shyue-Kung Lu; Sy-Yen Kuo
An efficient approach to determining the reliability of an undirected k-terminal network based on 2-terminal reliability functions is presented. First, a feasible set of (k-1) terminal pairs is chosen, and the 2-terminal reliability functions of the (k-1) terminal pairs are generated based on the edge expansion diagram using an OBDD (ordered binary decision diagram). The k-terminal reliability function can then be efficiently constructed by combining these (k-1) reliability expressions with the Boolean AND operation. Because building 2-terminal reliability functions and reducing redundant computation by merging reliability functions can be done very efficiently, the proposed approach is much faster than those which directly expand the entire network or directly factor the k-terminal network. The effectiveness of this approach is demonstrated by experiments on several large benchmark networks. An example of appreciable improvement is that evaluating the reliability of a source-terminal 3×10 all-terminal network took only 2.4 seconds on a SPARC 20 workstation, which is much faster than previous factoring algorithms.
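The combination step can be illustrated with plain Boolean functions and brute-force state enumeration standing in for the OBDD machinery: k-terminal success is the AND of (k-1) terminal-pair success functions over a common edge set. The triangle network and edge probabilities below are assumptions made for the example, not the paper's benchmarks.

```python
# Brute-force sketch of the combination idea (the real method builds and merges
# OBDDs; here plain Boolean functions and state enumeration stand in).
from itertools import product

EDGES = ["ab", "bc", "ac"]          # triangle network on terminals a, b, c
P = {"ab": 0.9, "bc": 0.9, "ac": 0.9}

def pair_ab(s): return s["ab"] or (s["ac"] and s["bc"])   # a-b connected?
def pair_ac(s): return s["ac"] or (s["ab"] and s["bc"])   # a-c connected?

def k_terminal_reliability(pair_functions):
    total = 0.0
    for bits in product([False, True], repeat=len(EDGES)):
        state = dict(zip(EDGES, bits))
        if all(f(state) for f in pair_functions):         # AND of pair successes
            prob = 1.0
            for edge, up in state.items():
                prob *= P[edge] if up else 1 - P[edge]
            total += prob
    return total

# probability that all of {a, b, c} are connected: 3*0.9^2*0.1 + 0.9^3 = 0.972
print(k_terminal_reliability([pair_ab, pair_ac]))
```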