

Publications


Featured research published by Vijay S. Iyengar.


Knowledge Discovery and Data Mining | 2002

Transforming data to satisfy privacy constraints

Vijay S. Iyengar

Data on individuals and entities are being collected widely. These data can contain information that explicitly identifies the individual (e.g., social security number). Data can also contain other kinds of personal information (e.g., date of birth, zip code, gender) that are potentially identifying when linked with other available data sets. Data are often shared for business or legal reasons. This paper addresses the important issue of preserving the anonymity of the individuals or entities during the data dissemination process. We explore preserving the anonymity by the use of generalizations and suppressions on the potentially identifying portions of the data. We extend earlier works in this area along various dimensions. First, satisfying privacy constraints is considered in conjunction with the usage for the data being disseminated. This allows us to optimize the process of preserving privacy for the specified usage. In particular, we investigate the privacy transformation in the context of data mining applications like building classification and regression models. Second, our work improves on previous approaches by allowing more flexible generalizations for the data. Lastly, this is combined with a more thorough exploration of the solution space using the genetic algorithm framework. These extensions allow us to transform the data so that they are more useful for their intended purpose while satisfying the privacy constraints.
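The generalization idea can be sketched in Python with a toy uniform-truncation scheme for a single quasi-identifier (an illustration only — the paper searches a much richer space of generalizations with a genetic algorithm; the function names and data below are assumptions):

```python
from collections import Counter

def generalize_zip(zipcode: str, level: int) -> str:
    """Replace the last `level` digits of a zip code with '*'."""
    return zipcode[:len(zipcode) - level] + "*" * level

def is_k_anonymous(records, k):
    """Check that every quasi-identifier value occurs at least k times."""
    counts = Counter(records)
    return all(c >= k for c in counts.values())

def anonymize(zips, k):
    """Coarsen all zip codes uniformly until k-anonymity holds."""
    for level in range(len(zips[0]) + 1):
        generalized = tuple(generalize_zip(z, level) for z in zips)
        if is_k_anonymous(generalized, k):
            return generalized, level
    return None

zips = ["10001", "10002", "10003", "10451"]
result, level = anonymize(zips, 2)
```

The point of the paper is that a usage-aware search can often stop at a lower generalization level (preserving more utility) than such a uniform scheme.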


IEEE Design & Test of Computers | 1987

Transition Fault Simulation

John A. Waicukauski; Eric Lindbloom; Barry K. Rosen; Vijay S. Iyengar

Delay fault testing is becoming more important as VLSI chips become more complex. Components that are fragments of functions, such as those in gate-array designs, need a general model of a delay fault and a feasible method of generating test patterns and simulating the fault. The authors present such a model, called a transition fault, which when used with parallel-pattern, single-fault propagation, is an efficient way to simulate delay faults. The authors describe results from 10 benchmark designs and discuss add-ons to a stuck fault simulator to enable transition fault simulation. Their experiments show that delay fault simulation of random patterns can be done in less than 10% more time than needed for a stuck fault simulation.
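The parallel-pattern idea can be sketched as follows, assuming a toy single-NAND circuit and 8 patterns packed into one machine word (the real simulator handles full netlists and fault propagation; names and data are illustrative):

```python
# Pack 8 test patterns into the bits of one integer per signal, so a
# single bitwise operation evaluates a gate for all patterns at once.
MASK = 0xFF  # 8 patterns per word

def pack(bits):
    """Pack a list of 0/1 pattern values (bit i = pattern i) into an int."""
    word = 0
    for i, b in enumerate(bits):
        word |= b << i
    return word

def simulate_nand(a, b):
    """Evaluate y = NOT(a AND b) for all packed patterns in parallel."""
    return ~(a & b) & MASK

a = pack([0, 0, 1, 1, 0, 1, 0, 1])
b = pack([0, 1, 0, 1, 1, 1, 0, 0])
y = simulate_nand(a, b)
responses = [(y >> i) & 1 for i in range(8)]
```

Because each gate is evaluated once per word rather than once per pattern, the per-pattern overhead over plain stuck-fault simulation stays small.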


IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 1990

On computing the sizes of detected delay faults

Vijay S. Iyengar; Barry K. Rosen; John A. Waicukauski

Defects in integrated circuits can cause delay faults of various sizes. Testing for delay faults has the goal of detecting a large fraction of these faults for a wide range of fault sizes. Hence, an evaluation scheme for a delay fault test must not only compute whether or not a delay fault was detected, but also calculate the sizes of detected delay faults. Delay faults have the counterintuitive property that a test for a fault of one size need not be a test for a similar fault of a larger size. This makes it difficult to answer questions about the sizes of delay faults detected by a set of tests. A model for delay faults that answers such questions correctly, but with calculations simple enough to be done for large circuits, is presented.


Journal of Machine Learning Research | 2002

Recommender systems using linear classifiers

Tong Zhang; Vijay S. Iyengar

Recommender systems use historical data on user preferences and other available data on users (for example, demographics) and items (for example, taxonomy) to predict items a new user might like. Applications of these methods include recommending items for purchase and personalizing the browsing experience on a website. Collaborative filtering methods have focused on using just the history of user preferences to make the recommendations. These methods have been categorized as memory-based if they operate over the entire data to make predictions and as model-based if they use the data to build a model which is then used for predictions. In this paper, we propose the use of linear classifiers in a model-based recommender system. We compare our method with another model-based method using decision trees and with memory-based methods using data from various domains. Our experimental results indicate that these linear models are well suited for this application. They outperform a commonly proposed memory-based method in accuracy and also have a better tradeoff between off-line and on-line computational requirements.
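A minimal sketch of the model-based idea: train one linear classifier per target item on binary purchase-history features. The perceptron update below and the toy data are assumptions for illustration; the paper's actual training procedure and feature encoding differ.

```python
def train_perceptron(X, y, epochs=20):
    """Train a simple perceptron: w.x > 0 predicts 'user likes the item'."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) > 0 else 0
            if pred != yi:
                sign = 1 if yi == 1 else -1
                w = [wj + sign * xj for wj, xj in zip(w, xi)]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) > 0 else 0

# Rows: users; columns: items previously bought (1/0).
# Label: did the user also buy the target item?  (Toy data.)
X = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [0, 0, 1]]
y = [1, 1, 0, 0]
w = train_perceptron(X, y)
```

Once the weights are learned (off-line), scoring a new user on-line is a single dot product — the favorable off-line/on-line tradeoff the abstract mentions.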


IEEE Transactions on Very Large Scale Integration Systems | 1995

AVPGEN: A test generator for architecture verification

Ashok K. Chandra; Vijay S. Iyengar; D. Jameson; R. V. Jawalekar; Indira Nair; Barry K. Rosen; Michael P. Mullen; J. Yoon; R. Armoni; Daniel Geist; Yaron Wolfsthal

This paper describes a system (AVPGEN) for generating tests (called architecture verification programs or AVPs) to check the conformance of processor designs to the specified architecture. To generate effective tests, AVPGEN uses novel concepts like symbolic execution and constraint solving, along with various biasing techniques. Unlike many earlier systems that make biased random choices, AVPGEN often chooses intermediate or final values and then solves for initial values that can lead to the desired values. A language called SIGL (symbolic instruction graph language) is provided in AVPGEN for the user to specify templates with symbolic constraints. The combination of user-specified constraints and the biasing functions is used to focus the tests on conditions that are interesting in that they are likely to activate various kinds of bugs. The system has been used successfully to debug many S/390 processors and is an integral part of the design process for these processors.
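The choose-the-result-then-solve-backward idea can be illustrated with a toy sketch; the 32-bit ADD example and all names below are assumptions for illustration, not AVPGEN's actual interface:

```python
import random

WIDTH = 32
MOD = 1 << WIDTH

def gen_add_test(target_result, rng):
    """Pick one operand at random, then solve for the other so that
    (a + b) mod 2^32 equals the desired result -- solving backward from
    an interesting final value instead of choosing inputs blindly."""
    a = rng.randrange(MOD)
    b = (target_result - a) % MOD
    return a, b

rng = random.Random(42)
# Bias toward an interesting corner case: a sum that wraps to zero.
a, b = gen_add_test(0, rng)
```

Purely random operands would almost never exercise such a corner case; constraint solving makes it routine.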


International Test Conference | 1988

Delay test generation. I. Concepts and coverage metrics

Vijay S. Iyengar; Barry K. Rosen; Ilan Y. Spillinger

An approach to testing for delay faults is presented. A variable-size delay fault model is used to represent these failures. The nominal gate delays with their manufacturing tolerances are an integral part of the model and are used in the propagation of simplified waveforms through the logic network. The faulty waveforms are functions of the variable-size delay fault. For each fault and test pattern, a threshold ε is computed such that the fault is detected if its size exceeds ε. This threshold is used (along with the minimum slack at the fault site) to determine a metric called quality. The quality of detection for a fault measures how close the test came to exposing the ideally smallest-size fault at that point. This metric (together with the traditional fault coverage) gives a complete measure of the goodness of the test.
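One plausible reading of the threshold-and-quality computation, sketched in Python; the ratio used for quality is an assumption for illustration, not the exact published formula:

```python
def is_detected(fault_size, threshold):
    """A fault is exposed by the test if its size exceeds the
    computed detection threshold for that fault and pattern."""
    return fault_size > threshold

def detection_quality(threshold, min_slack):
    """Assumed reading of the quality metric: the ideally smallest
    detectable fault size at a site is its minimum slack, so quality
    compares that ideal with the threshold the test actually achieved
    (1.0 = ideal test, smaller = weaker test)."""
    return min_slack / threshold

q = detection_quality(threshold=4.0, min_slack=3.0)
```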


IEEE Transactions on Computers | 1988

Timing analysis using functional analysis

Daniel Brand; Vijay S. Iyengar

The usual block-oriented timing analysis for logic circuits does not take functional relations between signals into account. If functional relations are considered, it may be found that a long path is never activated, which yields more accurate delays. Three arrival-time functions are compared: A, the arrival time given by exhaustive simulation; B, the arrival time calculated by a usual block-oriented algorithm; and R, the arrival time calculated using functional analysis. It is shown that B ⊆ R ⊆ A. The first containment means that R is never more conservative than B, and whenever the containment is proper, R is an improvement over B. The second containment means that R is correct in the sense that it will never assert a signal to be valid when it is not valid according to the ideal A. Experimental results showing how often R is an improvement over B are presented.
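The block-oriented computation B can be sketched as a max-plus traversal in topological order; the functional refinement that yields R (pruning never-activated paths) is omitted here, and the data structures are assumptions:

```python
def arrival_times(gates, primary_inputs):
    """Block-oriented arrival times: each gate's output arrival is the
    max of its input arrivals plus the gate delay.  Functional analysis
    could tighten these values when a long path is never activated."""
    at = dict(primary_inputs)
    for out, inputs, delay in gates:  # gates listed in topological order
        at[out] = max(at[i] for i in inputs) + delay
    return at

# Toy netlist: n1 = g(a, b) with delay 2; n2 = g(n1, c) with delay 3.
gates = [("n1", ["a", "b"], 2), ("n2", ["n1", "c"], 3)]
at = arrival_times(gates, {"a": 0, "b": 1, "c": 0})
```

If functional analysis proved that the b-to-n2 path can never be activated, R could report an earlier arrival for n2 than this B-style bound.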


High-Performance Computer Architecture | 1996

Representative traces for processor models with infinite cache

Vijay S. Iyengar; Louise H. Trevillyan; Pradip Bose

Performance evaluation of processor designs using dynamic instruction traces is a critical part of the iterative design process. The widening gap between the billions of instructions in such traces for benchmark programs and the throughput of timers performing the analysis in the tens of thousands of instructions per second has led to the use of reduced traces during design. This opens up the issue of whether these traces are truly representative of the actual workload in these benchmark programs. The first key result in this paper is the introduction of a new metric, called the R-metric, to evaluate the representativeness of these reduced traces when applied to a wide class of processor designs. The second key result is the development of a novel graph-based heuristic to generate reduced traces based on the notions incorporated in the metric. These ideas have been implemented in a prototype system (SMART) for generating representative and reduced traces. Extensive experimental results are presented on various benchmarks to demonstrate the quality of the synthetic traces and the uses of the R-metric.
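A toy stand-in for the trace-reduction idea, assuming trace segments are opcode tuples and representativeness is approximated by opcode-mix distance; this is not the paper's R-metric or its graph-based heuristic, just a greedy sketch of the same goal:

```python
from collections import Counter

def mix(segment):
    """Normalized opcode-frequency vector of a trace segment."""
    counts = Counter(segment)
    n = len(segment)
    return {op: c / n for op, c in counts.items()}

def distance(m1, m2):
    """L1 distance between two opcode-mix vectors."""
    ops = set(m1) | set(m2)
    return sum(abs(m1.get(op, 0) - m2.get(op, 0)) for op in ops)

def reduce_trace(segments, budget):
    """Greedily pick `budget` segments whose combined opcode mix best
    matches the mix of the full trace."""
    full = mix([op for seg in segments for op in seg])
    chosen = []
    for _ in range(budget):
        best = min(
            (s for s in segments if s not in chosen),
            key=lambda s: distance(
                mix([op for c in chosen for op in c] + list(s)), full),
        )
        chosen.append(best)
    return chosen

segments = [("add", "add", "load"),
            ("load", "store", "add"),
            ("mul", "mul", "mul")]
reduced = reduce_trace(segments, 2)
```

The paper's R-metric plays the role of `distance` here, but is designed to predict representativeness across a wide class of processor models rather than a single summary statistic.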


International Test Conference | 1988

Delay test generation. II. Algebra and algorithms

Vijay S. Iyengar; Barry K. Rosen; Ilan Y. Spillinger

For Part I, see ibid., pp. 857-866 (1988). A novel algebra is introduced for delay test generation. The algebra combines the nine natural logic values (00, 01, 0X, 10, 11, 1X, X0, X1, XX) with special attributes that record both heuristic choices and whatever information about waveforms is deducible algebraically (i.e., without numerical computations using actual gate delays). A test generator uses this algebra in an efficiently organized backtrack search and is linked to a delay fault simulator. Previous event-driven simulators have considered different types of events: one type is a change in faultless values from one test to another, and the other is a difference between faulty and faultless values. The presented simulator is driven by both types of events. Each generated test is simulated to determine the quality of detection.
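The nine values can be sketched by treating each as a pair of three-valued signals, one per pattern in the test pair; the componentwise AND below is a plausible illustrative definition (the paper's special attributes are omitted, and its exact operator tables may differ):

```python
def and3(a, b):
    """Three-valued AND over {'0', '1', 'X'} with X-propagation."""
    if a == "0" or b == "0":
        return "0"
    if a == "1" and b == "1":
        return "1"
    return "X"

def and9(v, w):
    """AND over the nine-valued pairs: position 0 is the value under
    the first pattern, position 1 the value under the second."""
    return and3(v[0], w[0]) + and3(v[1], w[1])

r = and9("01", "11")  # rising signal AND stable-1 signal
```

Composing such operators gate by gate lets the backtrack search reason about two-pattern behavior symbolically, deferring any numerical delay computation to the simulator.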


Knowledge Discovery and Data Mining | 2004

On detecting space-time clusters

Vijay S. Iyengar

Detection of space-time clusters is an important function in various domains (e.g., epidemiology and public health). The pioneering work on the spatial scan statistic is often used as the basis to detect and evaluate such clusters. State-of-the-art systems based on this approach detect clusters with restrictive shapes that cannot model growth and shifts in location over time. We extend these methods significantly by using the flexible square pyramid shape to model such effects. A heuristic search method is developed to detect the most likely clusters using a randomized algorithm in combination with geometric shapes processing. The use of Monte Carlo methods in the original scan statistic formulation is continued in our work to address the multiple hypothesis testing issues. Our method is applied to a real data set on brain cancer occurrences over a 19-year period. The cluster detected by our method shows both growth and movement, which could not have been modeled with the simpler cylindrical shapes used earlier. Our general framework can be extended quite easily to handle other flexible shapes for the space-time clusters.
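The scan-statistic machinery the paper builds on can be sketched for a single candidate cluster, assuming a Poisson model (Kulldorff's likelihood ratio) and a deliberately simplified Monte Carlo null; the cluster-shape search itself is omitted:

```python
import math
import random

def llr(c, E, C):
    """Log-likelihood ratio of the spatial scan statistic for a cluster
    with c observed cases, E expected cases, and C total cases
    (zero when the cluster is not elevated)."""
    if c <= E:
        return 0.0
    return c * math.log(c / E) + (C - c) * math.log((C - c) / (C - E))

def monte_carlo_p(c, E, C, replicas=999, rng=None):
    """Monte Carlo p-value: rank the observed LLR among LLRs from random
    reassignments of the C cases (a simplified stand-in for replicating
    the full dataset under the null hypothesis)."""
    rng = rng or random.Random(0)
    observed = llr(c, E, C)
    p_inside = E / C  # null probability a case falls inside the cluster
    higher = sum(
        llr(sum(rng.random() < p_inside for _ in range(C)), E, C) >= observed
        for _ in range(replicas)
    )
    return (higher + 1) / (replicas + 1)

p = monte_carlo_p(c=30, E=10, C=100)
```

The Monte Carlo ranking is what keeps the p-value honest despite the many candidate clusters examined, which is the multiple-hypothesis-testing issue the abstract refers to.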

Collaboration


Dive into Vijay S. Iyengar's collaborations.

Top Co-Authors

Gabriel M. Silberman

Technion – Israel Institute of Technology