José Salinas
Texas A&M University
Publication
Featured research published by José Salinas.
VLSI Test Symposium | 1995
Tong Liu; Fabrizio Lombardi; José Salinas
This paper presents a generalized new approach for testing interconnects (for boundary scan architectures) as well as field programmable interconnect chips (FPICs). The proposed structural test method explicitly avoids aliasing and confounding and is applicable to dense as well as sparse layouts. The proposed method is applicable to both one-step and two-step test generation and diagnosis. Two algorithms with an execution complexity of O(n²), where n is the number of nets in the interconnect, are given. Simulation results for benchmark and randomly generated layouts show a substantial reduction in the number of tests using the proposed approaches compared with previous approaches. The applicability of the proposed approach to FPICs is discussed and evaluated by simulation.
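The abstract does not reproduce the paper's two O(n²) algorithms, but the flavor of structural interconnect testing can be illustrated with the classic counting-sequence scheme: assign each net a unique code and apply it one bit plane at a time. The Python sketch below is a generic illustration of that standard scheme, not the authors' method; skipping the all-zero and all-one codes is one simple way to reduce aliasing with stuck nets.

```python
from math import ceil, log2

def counting_sequence_tests(n_nets):
    """Generate parallel test vectors for n_nets interconnect nets.

    Each net receives a unique code; test vector t is the t-th bit plane
    of all codes.  The all-0 and all-1 codes are skipped so that a net
    stuck at 0 or 1 cannot alias to a valid fault-free response.
    """
    width = ceil(log2(n_nets + 2))          # +2 leaves room to skip 0...0 and 1...1
    codes = list(range(1, n_nets + 1))      # code 0 (all zeros) is never used
    # One test vector per bit position: vectors[t][i] drives net i in test t.
    return [[(c >> t) & 1 for c in codes] for t in range(width)]

def diagnose(applied, observed):
    """Compare the per-net code words received against the ones driven.

    A constant received word suggests a stuck net; two nets receiving the
    same wrong word suggest a bridge between them.
    """
    n = len(applied[0])
    driven = [tuple(v[i] for v in applied) for i in range(n)]
    sensed = [tuple(v[i] for v in observed) for i in range(n)]
    return [i for i in range(n) if driven[i] != sensed[i]]
```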
IEEE Transactions on Computers | 1996
José Salinas; Yinan N. Shen; Fabrizio Lombardi
This paper presents a new structural approach for test generation and diagnosis of interconnects (such as wiring networks). The proposed technique is based on computational geometry by considering the physical adjacencies of the nets in the layout. This information is used by a sweeping line technique for generating the test vectors. A realistic fault model, in which nets can be bridged only if they are physically adjacent, is proposed. The proposed approach generates a set of initial local vectors for testing all the nets at the inputs. A different set of local vectors is then generated by sweeping the layout at every point where two nets may intersect. Since the set of local vectors is generally sparse, a compaction algorithm is proposed for generating the final set. The proposed approach has an execution time of O((p+k) log p) for generating the local tests, where p is the maximum number of segments in the nets and k is the number of possible intersection points. It is proved that the problem of generating the minimum number of test vectors by compaction is NP-complete, but simulation results show that the proposed heuristic criteria are very efficient for practical application. The extension of the proposed approach to other fault models and to other routing schemes (as applicable to PCB and VLSI) is presented.
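A minimal sketch of the sweeping-line idea, assuming axis-aligned wire segments and a hypothetical min_spacing parameter: the sweep visits segment endpoints in x order and reports nets that are simultaneously active and close in y as candidate bridge pairs. This naive version checks every active pair, so it runs in O(p²) worst case rather than the paper's O((p+k) log p), which requires an ordered sweep-status structure.

```python
from collections import namedtuple

Segment = namedtuple("Segment", "net x1 y1 x2 y2")  # axis-aligned wire segment

def adjacent_net_pairs(segments, min_spacing):
    """Sweep a vertical line left to right and report pairs of nets whose
    segments are simultaneously active and closer than min_spacing in y.
    Under a layout-driven fault model, only such physically adjacent
    pairs are candidate bridge faults.
    """
    events = []
    for s in segments:
        lo, hi = sorted((s.x1, s.x2))
        events.append((lo, 0, s))   # 0 = segment enters the sweep
        events.append((hi, 1, s))   # 1 = segment leaves the sweep
    events.sort(key=lambda e: (e[0], e[1]))

    active, pairs = [], set()
    for _, kind, seg in events:
        if kind == 1:
            active.remove(seg)
            continue
        for other in active:
            if other.net == seg.net:
                continue
            # Gap between the two y-intervals (negative if they overlap).
            y_gap = max(min(seg.y1, seg.y2) - max(other.y1, other.y2),
                        min(other.y1, other.y2) - max(seg.y1, seg.y2))
            if y_gap < min_spacing:
                pairs.add(frozenset((seg.net, other.net)))
        active.append(seg)
    return pairs
```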
Defect and Fault Tolerance in VLSI and Nanotechnology Systems | 1996
Tong Liu; Xiao-Tao Chen; Fabrizio Lombardi; José Salinas
This paper presents a new approach to fault detection of interconnects; the novelty of the proposed approach is that test generation and scheduling are established using the physical characteristics of the layout of the interconnect under test. This includes critical area extraction and a realistic fault model for a structural methodology. Physical layout information is used to model the adjacencies in an interconnect and possible bridge faults by a novel weighted graph approach. This graph is then analyzed to appropriately schedule the order of test compaction and execution for (early) detection of bridge faults. Generation and compaction of the test vectors are accomplished by calculating node and edge weights of the new adjacency graph as figures of merit. The advantage of the proposed approach is that, on average, early detection of faults is possible using a number of tests significantly smaller than with previous approaches. A further advantage is that it represents a realistic alternative to adaptive testing because it avoids costly on-line test generation while still requiring a small number of vectors.
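As an illustration of scheduling by graph weights, here is a hedged sketch (not the paper's algorithm) that greedily orders tests so the edges carrying the most critical-area weight, i.e. the likeliest bridge faults, are covered first; adjacency_weights and tests_covering are hypothetical inputs standing in for the extracted layout data.

```python
def schedule_tests(adjacency_weights, tests_covering):
    """Greedy early-detection schedule over a weighted adjacency graph.

    adjacency_weights: {(net_a, net_b): critical_area} -- edge weights,
        where a larger critical area means a likelier bridge fault.
    tests_covering:    {test_id: set of (net_a, net_b) edges detected}

    Tests covering the most residual fault likelihood run first, so the
    expected number of tests until detection is kept small.
    """
    remaining = dict(adjacency_weights)
    pool = dict(tests_covering)
    order = []
    while remaining and pool:
        best = max(pool, key=lambda t: sum(remaining.get(e, 0.0)
                                           for e in pool[t]))
        gain = sum(remaining.get(e, 0.0) for e in pool[best])
        if gain == 0.0:
            break                       # no test covers any remaining fault
        order.append(best)
        for e in pool.pop(best):
            remaining.pop(e, None)
    return order
```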
International Conference on Wafer Scale Integration (ICWSI) | 1994
José Salinas; Fabrizio Lombardi
This paper presents a new approach for diagnosing (detecting and locating faults in) reconfigurable two-dimensional arrays. The proposed approach utilizes the augmented switching interconnection network (commonly found in reconfigurable arrays) as multiple parallel scan chains, such that controllability and observability of test vectors can be achieved for each cell. Arrays with homogeneous and nonhomogeneous cells (multipipeline) are analyzed. An example of the application of the proposed approach to an existing array architecture for image processing is presented.
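A toy model of the scan-chain idea, with each cell represented as a callable and each row's interconnection network standing in for one parallel scan chain; this sketches the general controllability/observability mechanism, not the paper's architecture.

```python
def scan_test_rows(array_rows, test_vector, expected):
    """Treat each row's interconnection network as a scan chain: shift the
    test vector into every cell, apply it, and shift the responses out.
    A mismatch localizes the fault to a (row, column) cell position.

    array_rows: list of rows, each a list of callables modelling cells.
    """
    failures = []
    for r, row in enumerate(array_rows):
        # After len(row) shift cycles every cell holds test_vector;
        # here the shift is abstracted away and cells are applied directly.
        responses = [cell(test_vector) for cell in row]
        failures.extend((r, c) for c, resp in enumerate(responses)
                        if resp != expected)
    return failures
```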
International Conference on Parallel Processing | 1993
José Salinas; Fabrizio Lombardi
This paper examines a fault-tolerant scheme for two-dimensional arrays of processors which functionally reconfigures the array without the use of spares. Reconfiguration approaches for different interconnection networks are analyzed. Also, three approaches are proposed for mapping image data to and from the array, depending on the type of array and the computational power available in each processing element. The proposed reconfiguration approaches have been emulated on a 32×64-processor MasPar array computer.
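One simple way to picture spare-less functional reconfiguration is a logical-to-physical remapping that deals image pixels out over the surviving processing elements only. The sketch below is a hypothetical round-robin variant, not one of the paper's three mapping approaches.

```python
def map_pixels_to_pes(image_rows, image_cols, faulty, pe_rows, pe_cols):
    """Distribute image pixels over the fault-free processing elements only.

    Without spare PEs, reconfiguration is functional: pixels that would
    have landed on faulty PEs are dealt out round-robin to the surviving
    ones, trading load balance for fault tolerance.
    """
    healthy = [(r, c) for r in range(pe_rows) for c in range(pe_cols)
               if (r, c) not in faulty]
    if not healthy:
        raise ValueError("no fault-free processing elements left")
    mapping = {}
    for i in range(image_rows):
        for j in range(image_cols):
            mapping[(i, j)] = healthy[(i * image_cols + j) % len(healthy)]
    return mapping
```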
Defect and Fault Tolerance in VLSI and Nanotechnology Systems | 1993
José Salinas; Fabrizio Lombardi
The authors examine the operation of, and a reconfiguration strategy for, two-dimensional SIMD parallel architectures in the presence of manufacturing cluster defects and/or link defects when used for image processing. The proposed technique is based on a conceptual reconfiguration of processing elements that covers each large defect area with a set of fault-free elements, thus creating a loss of image resolution instead of a loss of image data. The proposed technique has been emulated on a 2048-PE MasPar architecture assuming both mesh-connected elements (four-way connectivity) and eight-way connectivity.
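The covering idea can be sketched as a mapping from each defective PE inside a cluster's bounding box to a fault-free PE just outside it, so each covering PE computes several image rows and the region's resolution drops. This is an illustrative simplification, assuming the defect does not span the full array height.

```python
def cover_defect(defect_box, pe_rows):
    """Map every PE inside a rectangular cluster defect onto a fault-free PE
    in the row just above (or, at the top edge, just below) the defect, so
    each covering PE handles several image rows: the region loses
    resolution rather than data.

    defect_box = (r0, c0, r1, c1), inclusive bounding box of the defect.
    Returns {defective_pe: covering_pe}.
    """
    r0, c0, r1, c1 = defect_box
    cover_row = r0 - 1 if r0 > 0 else r1 + 1
    if not 0 <= cover_row < pe_rows:
        raise ValueError("defect spans the full array height")
    return {(r, c): (cover_row, c)
            for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)}
```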
Microprocessors and Microsystems | 1992
José Salinas; Fabrizio Lombardi
This paper presents an improved approach to data path testing for microprocessors. The proposed approach is based on the technique of Freeman [1]; it utilizes stimulus (S-) and fault (F-) paths in the functional description of a microprocessor to generate test vectors in a knowledge-based testing system [2], such that controllability/observability as well as any additional hardware for design-for-testability can be promptly assessed. The proposed approach to data path testing is used for testing the MC68000 microprocessor. A combination of fault models (functional, instruction and stuck-at), inclusive of a fault bound assumption, is used to discriminate between control and non-control faults. A test set for the combinational functional units of the MC68000 is then generated and compared with a pseudo-exhaustive approach. A significant reduction in the number of required test vectors is accomplished.
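Conceptually, an S-path establishes controllability (a stimulus can be driven from primary inputs to the unit under test) and an F-path establishes observability (the fault effect can be propagated to an observable output). Below is a hedged sketch of the path search over a functional-description graph; the node names ("D0", "ALU", "CCR") are hypothetical.

```python
from collections import deque

def find_path(graph, sources, target):
    """Breadth-first search over a functional-description graph.

    An S-path runs from a controllable input register to the unit under
    test; an F-path runs from that unit to an observable output.  Together
    they let a test vector be applied and its response observed.
    """
    seen = set(sources)
    queue = deque((s, [s]) for s in sources)
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for succ in graph.get(node, ()):
            if succ not in seen:
                seen.add(succ)
                queue.append((succ, path + [succ]))
    return None

# Hypothetical fragment of a data path description:
datapath = {"D0": ["ALU"], "D1": ["ALU"], "ALU": ["CCR", "D0"]}
s_path = find_path(datapath, {"D0", "D1"}, "ALU")   # controllability
f_path = find_path(datapath, {"ALU"}, "CCR")        # observability
```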
International Conference on Engineering of Complex Computer Systems | 1996
A.K. Ganesh; Thomas J. Marlowe; Alexander D. Stoyenko; Mohamed F. Younis; José Salinas
The overhead of general checkpointing approaches makes them infeasible for distributed real-time systems, where timing is critical. We present a new compiler-based approach which classifies data and minimizes the data needed for checkpointing using static data flow analysis and language support. Our approach provides static guarantees of timeliness while checkpointing, and explores timely recovery for real-time systems. We outline our approach and discuss the architecture and language support needed to make this feasible.
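A deliberately tiny sketch of the data-classification intuition (the actual approach relies on static data flow analysis and language support): only variables that are live across the checkpoint and not cheaply recomputable need to be saved. All sets here are hypothetical analysis results.

```python
def checkpoint_set(defined_before, used_after, recomputable):
    """Variables worth checkpointing: defined before the checkpoint,
    used after it, and not recomputable from other saved state."""
    return (defined_before & used_after) - recomputable

# Hypothetical dataflow facts for one checkpoint location:
must_save = checkpoint_set(
    defined_before={"buf", "count", "scale"},
    used_after={"buf", "count"},
    recomputable={"count"},   # e.g., derivable from a saved loop index
)
# must_save == {"buf"}: the checkpoint shrinks from three variables to one.
```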
IFAC Proceedings Volumes | 1996
A.K. Ganesh; Thomas J. Marlowe; José Salinas; Alexander D. Stoyenko
A fault-tolerant real-time system must provide a critical level of service in a timely manner in the presence of one or more hardware or software faults. This paper argues that support from the language, environment, and compiler is required. An integrated approach to providing this support through a novel data classification is proposed. In principle, this can provide static guarantees of timeliness in a checkpointing real-time system, and support recovery and continued computation for up to one node or link failure.
Simulation Practice and Theory | 1994
Hannu Kari; José Salinas; Fabrizio Lombardi
This paper introduces a novel method for generating non-standard random distributions. These distributions are required when the system cannot be modelled accurately using conventional probabilistic distributions. In the proposed method, the desired random distribution is built by splitting the density function into smaller segments which are individually approximated with simple polynomial functions. Then, the inverse transformation method is used to form the final random distribution function. This paper also presents new criteria for expediting the selection of the correct segment. The complexity of the proposed process for segment selection is O(log M), where M is the number of segments. An example of the application of the proposed method to the simulation of disk access patterns for performance evaluation of computer systems is provided.
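A minimal sketch of the segment-based inverse-transform idea, simplified to piecewise-constant (rather than polynomial) segments; binary search over the segment boundaries gives the O(log M) selection the paper targets, though the paper's own selection criteria are more refined. All names and inputs here are illustrative, and densities are assumed strictly positive.

```python
import bisect
import random

def make_sampler(breakpoints, densities):
    """Build a sampler for a piecewise-constant approximation of an
    arbitrary density.

    breakpoints: x_0 < x_1 < ... < x_M  (segment boundaries, M segments)
    densities:   unnormalized, strictly positive density per segment
    """
    widths = [b - a for a, b in zip(breakpoints, breakpoints[1:])]
    masses = [d * w for d, w in zip(densities, widths)]
    total = sum(masses)
    cdf, acc = [], 0.0
    for m in masses:
        acc += m / total
        cdf.append(acc)              # cumulative probability at segment ends

    def sample():
        u = random.random()
        i = min(bisect.bisect_left(cdf, u), len(cdf) - 1)  # O(log M) selection
        left = cdf[i - 1] if i > 0 else 0.0
        frac = (u - left) / (cdf[i] - left)   # invert within the segment
        return breakpoints[i] + frac * widths[i]

    return sample

# Example: a density that is 8x heavier on [0, 1) than on [1, 3).
draw = make_sampler([0.0, 1.0, 3.0], [0.8, 0.1])
samples = [draw() for _ in range(5)]
```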