Ahmad A. Al-Yamani
King Fahd University of Petroleum and Minerals
Publications
Featured research published by Ahmad A. Al-Yamani.
IEEE Transactions on Circuits and Systems | 2007
Ahmad A. Al-Yamani; Sundarkumar Ramsundar; Dhiraj K. Pradhan
Lithography-based integrated circuit fabrication is rapidly approaching its limit in terms of feature size. The current alternative is nanotechnology-based fabrication, which relies on self-assembly of nanotubes or nanowires. Such a process is subject to a high defect rate, which can be tolerated using carefully crafted defect tolerance techniques. This paper presents an algorithm for reconfiguration-based defect tolerance in nanotechnology switches. The algorithm offers an average switch density improvement of 50% to 100% over recently published techniques. Results over a large sample also show that the algorithm consistently improves yield by minimizing false rejects. The improvement percentage varies with the manufactured switch size and the desired defect-free size, with the gain in efficiency directly proportional to the size of the switch.
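The reconfiguration idea can be illustrated with a toy heuristic: given a defect map of a crossbar, repeatedly discard the row or column carrying the most defects until a defect-free sub-crossbar remains. This is a minimal greedy sketch for illustration only; it is not the paper's algorithm and does not reproduce its density or yield results.

```python
def defect_free_subarray(defects, rows, cols):
    """Greedy sketch: drop the row or column with the most defects
    until the remaining sub-crossbar is defect-free.  Returns the
    surviving row and column index sets."""
    live_r, live_c = set(range(rows)), set(range(cols))
    while True:
        live = [(r, c) for (r, c) in defects if r in live_r and c in live_c]
        if not live:
            return live_r, live_c
        r_cnt, c_cnt = {}, {}
        for r, c in live:
            r_cnt[r] = r_cnt.get(r, 0) + 1
            c_cnt[c] = c_cnt.get(c, 0) + 1
        worst_r = max(r_cnt, key=r_cnt.get)
        worst_c = max(c_cnt, key=c_cnt.get)
        # Drop whichever line removes more defects (ties favour rows).
        if r_cnt[worst_r] >= c_cnt[worst_c]:
            live_r.discard(worst_r)
        else:
            live_c.discard(worst_c)
```

For defects {(0, 0), (0, 1), (2, 2)} on a 4×4 crossbar this keeps the defect-free rows {1, 3} with all four columns intact.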
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems | 2007
Ahmad A. Al-Yamani; Narendra Devta-Prasanna; Erik Chmelar; Mikhail I. Grinchuk; Arun Gunda
This paper presents segmented addressable scan (SAS), a test architecture that addresses test data volume, test application time, test power consumption, and tester channel requirements using a hardware overhead of a few gates per scan chain. Using SAS, this paper also presents systematic scan reconfiguration, a test data compression algorithm that is applied to achieve 10× to 40× compression ratios without requiring any information from the automatic-test-pattern-generation tool about the unspecified bits. The architecture and the algorithm were applied to both single stuck-at and transition fault test sets.
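The addressing idea behind segment-based scan can be sketched as follows: if the tester can skip scan segments whose current contents already satisfy a test cube's specified bits, only mismatching segments consume tester data. This is an illustrative model of segment skipping with invented names, not the published SAS architecture.

```python
def segments_to_load(cube, state, seg_len):
    """Return indices of scan segments that need fresh tester data.
    A segment is skipped when every specified bit ('0'/'1') of the
    test cube already agrees with the segment's current contents;
    'X' (unspecified) bits never force a load."""
    loads = []
    for s in range(0, len(cube), seg_len):
        seg_cube = cube[s:s + seg_len]
        seg_now = state[s:s + seg_len]
        if any(b != 'X' and b != h for b, h in zip(seg_cube, seg_now)):
            loads.append(s // seg_len)
    return loads
```

With the cube '1XXX0X10' held against state '10000000' and 4-bit segments, only segment 1 needs loading, so half the test data for this cube is saved.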
Congress on Evolutionary Computation | 2002
Ahmad A. Al-Yamani; Sadiq M. Sait; Hassan Barada
Parallelizing any algorithm on a cluster of heterogeneous workstations is not easy, as each workstation requires a different wall-clock time to execute the same instruction set. In this work, a parallel tabu search algorithm for heterogeneous workstations is presented using PVM. Two parallelization strategies, functional decomposition and multi-search threads, are integrated. The proposed algorithm is tested on the VLSI standard cell placement problem; however, the same algorithm can be used on any combinatorial optimization problem. The results are compared with those obtained when heterogeneity is ignored and are found superior in terms of execution time.
IET Computers & Digital Techniques | 2009
Aiman H. El-Maleh; Mustafa I. Ali; Ahmad A. Al-Yamani
An effective reconfigurable broadcast scan compression scheme that employs partitioning of test sets and relaxation-based decomposition of test vectors is proposed. Given a constraint on the number of tester channels, the technique classifies test vectors into acceptable and bottleneck vectors. The bottleneck vectors are then decomposed into a set of vectors that meets the given constraint. The acceptable and decomposed test vectors are partitioned into the smallest number of partitions satisfying the tester channel constraint, to reduce the decompressor area. Thus, the technique by construction satisfies a given tester channel constraint at the expense of an increased test vector count and number of partitions, offering a trade-off between test compression, test application time, and the area of the test decompression circuitry. Experimental results demonstrate that the proposed technique achieves better compression ratios compared with other test compression techniques.
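The channel-constraint test at the heart of the classification step can be sketched as follows: under broadcast scan, chain slices whose specified bits never conflict can share a tester channel, so a first-fit grouping gives the number of channels a vector needs, and a vector needing more channels than the budget is a bottleneck vector to be decomposed. This is an illustrative first-fit sketch, not the paper's relaxation-based method.

```python
def compatible(a, b):
    """Chain slices can share one tester channel if their specified
    bits never conflict ('X' matches anything)."""
    return all(x == 'X' or y == 'X' or x == y for x, y in zip(a, b))

def merge(a, b):
    """Combine two compatible slices, keeping specified bits."""
    return ''.join(y if x == 'X' else x for x, y in zip(a, b))

def channels_needed(chains):
    """First-fit grouping of chain slices into compatible groups;
    the group count is the number of tester channels this vector
    needs under broadcast scan."""
    groups, assign = [], []
    for ch in chains:
        for gi, g in enumerate(groups):
            if compatible(ch, g):
                groups[gi] = merge(g, ch)
                assign.append(gi)
                break
        else:
            groups.append(ch)
            assign.append(len(groups) - 1)
    return groups, assign
```

For the chain slices ['1X0', '10X', '0XX', 'X01'] two channels suffice; with a one-channel budget this vector would be classified as a bottleneck and decomposed.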
Journal of Heuristics | 2002
Ahmad A. Al-Yamani; Sadiq M. Sait; Habib Youssef; Hassan Barada
In this paper, we present the parallelization of tabu search on a network of workstations using PVM. Two parallelization strategies are integrated: the functional decomposition strategy and the multi-search threads strategy. In addition, the domain decomposition strategy is implemented probabilistically. The performance of each strategy is observed and analyzed. The goal of parallelization is to speed up the search in finding better-quality solutions. Observations support that both parallelization strategies are beneficial, with functional decomposition producing slightly better results. Experiments were conducted for VLSI cell placement, an NP-hard problem, with the objective of achieving the best possible solution in terms of interconnection length, timing performance (circuit speed), and area. The multiobjective nature of this problem is addressed using a fuzzy goal-based cost computation.
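The multi-search-threads strategy can be sketched independently of PVM: several tabu searches run from different seeds and the best solution wins. Below is a toy bit-vector kernel with illustrative names and parameters; the searches run sequentially for clarity, and the functional-decomposition and PVM messaging layers are omitted.

```python
import random

def tabu_search(cost, n_bits, iters, tenure, seed):
    """Single search thread: flip the best non-tabu bit each
    iteration; a flipped bit stays tabu for `tenure` iterations."""
    rng = random.Random(seed)
    sol = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = sol[:], cost(sol)
    tabu = {}                      # bit index -> iteration it frees up
    for it in range(iters):
        cand = None
        for i in range(n_bits):
            if tabu.get(i, 0) > it:
                continue           # move is tabu
            sol[i] ^= 1
            c = cost(sol)
            sol[i] ^= 1            # undo trial flip
            if cand is None or c < cand[1]:
                cand = (i, c)
        i, c = cand
        sol[i] ^= 1                # commit best admissible move
        tabu[i] = it + tenure
        if c < best_cost:
            best, best_cost = sol[:], c
    return best, best_cost

def multi_search(cost, n_bits, workers=4, iters=50, tenure=3):
    """Multi-search threads: independent searches from different
    seeds; the best result is kept."""
    runs = [tabu_search(cost, n_bits, iters, tenure, seed=s)
            for s in range(workers)]
    return min(runs, key=lambda r: r[1])
```

On a toy objective such as minimizing the number of ones in an 8-bit vector, the combined search reaches the optimum even though individual moves may be tabu.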
Asia and South Pacific Design Automation Conference | 2007
Ahmad A. Al-Yamani; Narendra Devta-Prasanna; Arun Gunda
We present a new test data compression technique that achieves 10× to 40× compression ratios without requiring any information from the ATPG tool about the unspecified bits. The technique is applied to both single stuck-at and transition fault test sets. It allows aggressive parallelization of scan chains, leading to a similar reduction in test time, and it reduces tester pin requirements by similar ratios. The technique is implemented using a hardware overhead of a few gates per scan chain.
Asian Test Symposium | 2007
Aiman H. El-Maleh; Mustafa I. Ali; Ahmad A. Al-Yamani
An effective reconfigurable broadcast scan compression scheme that employs test set partitioning and relaxation-based test vector decomposition is proposed. Given a constraint on the number of tester channels, the technique classifies the test set into acceptable and bottleneck vectors. The bottleneck vectors are then decomposed into a set of vectors that meets the given constraint. The acceptable and decomposed test vectors are partitioned into the smallest number of partitions satisfying the tester channel constraint, to reduce the decompressor area. Thus, the technique by construction satisfies a given tester channel constraint at the expense of an increased test vector count and number of partitions, offering a trade-off between test compression, test application time, and test decompression circuitry area. Experimental results demonstrate that the proposed technique achieves better compression ratios compared with other test compression techniques.
International Symposium on Quality Electronic Design | 2009
Costas Argyrides; Ahmad A. Al-Yamani; Carlos Arthur Lang Lisbôa; Luigi Carro; Dhiraj K. Pradhan
Future technologies, with ever-shrinking devices and higher densities, bring along higher defect rates and lower yield. Memory chips, which are among the densest circuits used in digital systems, are greatly impacted by the increasing defect rates, which make yield fall and production costs rise sharply. In this paper, a new approach for designing memory chips to be manufactured in future technologies is proposed, aiming to increase the overall yield. The proposed approach trades a small area overhead for a dramatic production cost reduction, by allowing more defective memory chips to be used as lower-capacity ones instead of being discarded.
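The downgrade idea can be sketched with a block remap table: logical blocks are mapped onto the surviving good blocks, so a chip with defective blocks ships as a lower-capacity part. This is a simple illustration with invented names, not the paper's exact scheme.

```python
def build_remap(n_blocks, bad_blocks):
    """Map logical blocks onto the good physical blocks so a chip
    with defects ships at reduced capacity instead of being scrapped.
    Entry i of the table is the physical block for logical block i."""
    return [b for b in range(n_blocks) if b not in bad_blocks]

def translate(remap, addr, block_size):
    """Translate a logical address through the remap table."""
    block, offset = divmod(addr, block_size)
    if block >= len(remap):
        raise ValueError("address beyond downgraded capacity")
    return remap[block] * block_size + offset
```

An 8-block chip with defective blocks 2 and 5 becomes a usable 6-block part; logical address 37 (block 2, offset 5) lands in physical block 3 at address 53.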
International Symposium on Quality Electronic Design | 2007
S Ramsundar; Ahmad A. Al-Yamani; Dhiraj K. Pradhan
Lithography-based IC fabrication is rapidly approaching its limit in terms of feature size. The current alternative is nanotechnology-based fabrication, which relies on self-assembly of nanotubes or nanowires. Such a process is subject to a high defect rate, which can be tolerated using carefully crafted defect tolerance techniques. This paper presents an algorithm for reconfiguration-based defect tolerance in nanotechnology switches. The algorithm offers an average switch density improvement of 50% to 100% over recently published techniques.
Defect and Fault Tolerance in VLSI and Nanotechnology Systems | 2005
Ahmad A. Al-Yamani; Narendra Devta-Prasanna; Arun Gunda
This paper presents an analysis of the trade-off between hardware overhead, runtime, and test data volume when implementing systematic scan reconfiguration using centralized and distributed architectures of segmented addressable scan, an Illinois-scan-based architecture. The results show that the centralized scheme offers better data volume compression, similar ATPG runtime, and lower hardware overhead; its cost is increased routing congestion.