Martin D. Fraser
Georgia State University
Publications
Featured research published by Martin D. Fraser.
IEEE Transactions on Software Engineering | 1991
Martin D. Fraser; Kuldeep Kumar; Vijay K. Vaishnavi
The differences between informal and formal requirements specification languages are noted, and the issue of bridging the gap between them is discussed. Using structured analysis (SA) and the Vienna development method (VDM) as surrogates for informal and formal languages, respectively, two approaches are presented for integrating the two. The first approach uses the SA model of a system to guide the analyst's understanding of the system and the development of the VDM specifications. The second approach proposes a rule-based method for generating VDM specifications from a set of corresponding SA specifications. The two approaches are illustrated through a simplified payroll system case. The issues that emerge from the use of the two approaches are reported.
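As a rough illustration of the rule-based idea this abstract describes (not the paper's actual rules), the sketch below maps entries from a hypothetical structured-analysis data dictionary to skeleton VDM-SL operation signatures; the SA entries, the naming, and the mapping itself are all assumptions chosen to echo the payroll example.

```python
# Toy rule-based SA -> VDM translation sketch; not the paper's rules.
# Hypothetical SA data dictionary: process name -> (inputs, outputs).
SA_PROCESSES = {
    "compute_gross_pay": (["hours: real", "rate: real"], ["gross: real"]),
    "apply_deductions":  (["gross: real"],               ["net: real"]),
}

def to_vdm_operation(name, inputs, outputs):
    """Rule: one SA process becomes one implicit VDM-SL operation skeleton."""
    args = ", ".join(inputs)
    results = ", ".join(outputs)
    return (f"{name.upper()} ({args}) {results}\n"
            f"pre  true  -- strengthen from the SA process spec\n"
            f"post true  -- derive from the SA data flows")

for proc, (ins, outs) in SA_PROCESSES.items():
    print(to_vdm_operation(proc, ins, outs), end="\n\n")
```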
IEEE Transactions on Neural Networks | 2000
Yan-Qing Zhang; Martin D. Fraser; Ross A. Gagliano; Abraham Kandel
In this paper, we present a neural-networks-based knowledge discovery and data mining (KDDM) methodology based on granular computing, neural computing, fuzzy computing, linguistic computing, and pattern recognition. The major issues include: 1) how to make neural networks process both numerical and linguistic data in a database; 2) how to convert fuzzy linguistic data into related numerical features; 3) how to use neural networks for numerical-linguistic data fusion; 4) how to use neural networks to discover granular knowledge from numerical-linguistic databases; and 5) how to use discovered granular knowledge to predict missing data. To address these concerns, a granular neural network (GNN) is designed to deal with numerical-linguistic data fusion and granular knowledge discovery in numerical-linguistic databases. From a data granulation point of view, the GNN can process granular data in a database. From a data fusion point of view, the GNN makes decisions based on different kinds of granular data. From a KDDM point of view, the GNN is able to learn internal granular relations between numerical-linguistic inputs and outputs, and to predict new relations in a database. The GNN is also capable of greatly compressing low-level granular data to high-level granular knowledge, with some compression error and a data compression rate. To perform KDDM in huge databases, parallel and distributed GNNs will be investigated in the future.
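One step the abstract names, converting fuzzy linguistic data into related numerical features, can be sketched concretely. The example below is not the GNN itself: it assumes a hypothetical term dictionary of triangular membership functions over a normalized universe and uses centroid defuzzification to turn linguistic values into numbers.

```python
# Minimal sketch: linguistic terms -> numerical features via triangular
# membership functions and centroid defuzzification. All terms and
# parameters here are illustrative assumptions, not the paper's.

def triangular(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms over a normalized [0, 1] universe.
TERMS = {
    "low":    (0.0, 0.0, 0.5),
    "medium": (0.0, 0.5, 1.0),
    "high":   (0.5, 1.0, 1.0),
}

def defuzzify(term, steps=1000):
    """Centroid of the term's membership function: one numeric feature."""
    a, b, c = TERMS[term]
    xs = [i / steps for i in range(steps + 1)]
    num = sum(x * triangular(x, a, b, c) for x in xs)
    den = sum(triangular(x, a, b, c) for x in xs)
    return num / den if den else 0.0

# Mixed numerical-linguistic record -> purely numerical feature vector.
record = [0.42, "high", "low"]
features = [v if isinstance(v, float) else defuzzify(v) for v in record]
print(features)  # e.g. [0.42, 0.833..., 0.166...]
```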
Communications of the ACM | 1995
Ross A. Gagliano; Martin D. Fraser; Mark E. Schaefer
Standard methods for allocating computing resources normally employ schedulers and either queue or priority schemes. Alternative methods utilizing market-like processes are being investigated, with direct applicability to evolving distributed systems. In this article, we present results of simulations of an auction allocation in which computing tasks are provided sufficient intelligence to acquire resources by offering, bidding, and exchanging them for funds.
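A minimal sketch of the auction mechanism described here, not the authors' simulator: tasks endowed with funds place sealed bids for processing slots, and the highest bidder pays its bid and receives a unit of processing. The Task attributes and the bidding rule are illustrative assumptions.

```python
# Toy sealed-bid auction allocator; the bidding rule is an assumption.
import random

class Task:
    def __init__(self, name, funds, work):
        self.name, self.funds, self.work = name, funds, work

    def bid(self):
        # Hypothetical rule: bid a random fraction of remaining funds,
        # scaled by how much work is still outstanding.
        return min(self.funds, random.uniform(0.1, 0.5) * self.funds * self.work)

def run_auction(tasks, slots):
    """Allocate `slots` processing rounds by repeated sealed-bid auctions."""
    for t in range(slots):
        live = [task for task in tasks if task.work > 0]
        if not live:
            break
        bids = {task: task.bid() for task in live}
        winner = max(bids, key=bids.get)
        winner.funds -= bids[winner]   # winner pays its own bid
        winner.work -= 1               # and receives one unit of processing
        print(f"slot {t}: {winner.name} wins at {bids[winner]:.2f}")

run_auction([Task("A", 100, 3), Task("B", 80, 5), Task("C", 120, 2)], slots=6)
```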
Communications of the ACM | 1997
Martin D. Fraser; Vijay K. Vaishnavi
Industry needs do not stop at providing assurances of the quality of software products. Software development organizations should achieve an industry-level understanding of the process of software development so that successful software products are viewed as defined, repeatable, and managed products that can be planned for and expected. Consequently, process-oriented approaches to assuring software quality and reliability are gaining attention. For example, Bowen [1] recommends: "It is important that standards should not be over-prescriptive... Ideally, dependability goals should be set and the onus should be on the software suppliers to demonstrate that their methods achieve the required level of confidence." Table 1 gives a brief glossary of some relevant terms. Software quality is an issue of generally recognized importance, and requirements specifications are fundamental in the process-oriented approach to achieving it. We propose a measurement model to determine the capability maturity levels of formal requirements specification processes.
International Parallel and Distributed Processing Symposium | 2002
Ajay K. Katangur; Yi Pan; Martin D. Fraser
Multistage interconnection networks (MINs) are popular in switching and communication applications. Crosstalk, a major problem in optical MINs, is caused by coupling two signals within a switching element. In this paper, we focus on an efficient solution for avoiding crosstalk: routing traffic through an N × N optical network so that no two signals are coupled within any switching element. Under the constraint of avoiding crosstalk, the interest is in realizing a permutation using the minimum number of passes. This routing problem is NP-hard. Researchers have designed many heuristic algorithms for this routing, such as the sequential algorithm and the degree-descending algorithm. In this research, the simulated annealing algorithm is used to improve the performance of solving the problem and to optimize the result. Many cases are tested, and the results are compared with those of other algorithms to show the advantages of simulated annealing: good solution quality and short execution time.
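A rough sketch of the approach, under assumptions the abstract does not fully pin down: an N × N omega network (N a power of two), crosstalk defined as two messages sharing a switching element, and simulated annealing over assignments of messages to passes. The cost function, cooling schedule, and move set are illustrative choices, not the paper's.

```python
# Simulated annealing for crosstalk-free pass assignment in an omega
# network; parameters and moves are illustrative assumptions.
import math, random

def switches(src, dst, n):
    """(stage, switch) pairs a message crosses in an n-stage omega network."""
    N, pos, out = 1 << n, src, []
    for stage in range(n):
        pos = ((pos << 1) | (pos >> (n - 1))) & (N - 1)    # perfect shuffle
        out.append((stage, pos >> 1))                      # switch index
        pos = (pos & ~1) | ((dst >> (n - 1 - stage)) & 1)  # route by dest bit
    return out

def conflicts(perm, n):
    """Pairs of messages that share a switch (would cause crosstalk)."""
    paths = [set(switches(s, d, n)) for s, d in enumerate(perm)]
    return [(i, j) for i in range(len(perm)) for j in range(i + 1, len(perm))
            if paths[i] & paths[j]]

def anneal(perm, n, max_passes=8, iters=20000):
    """Minimize (conflicts, passes used) over pass assignments."""
    bad = conflicts(perm, n)
    assign = [random.randrange(max_passes) for _ in perm]
    def cost(a):
        viol = sum(a[i] == a[j] for i, j in bad)
        return viol * 1000 + len(set(a))   # feasibility first, then passes
    cur, T = cost(assign), 10.0
    for _ in range(iters):
        i = random.randrange(len(perm))
        old = assign[i]
        assign[i] = random.randrange(max_passes)
        new = cost(assign)
        if new <= cur or random.random() < math.exp((cur - new) / T):
            cur = new                      # accept the move
        else:
            assign[i] = old                # revert the move
        T = max(T * 0.9995, 0.01)          # geometric cooling
    return assign, cur

perm = [3, 7, 0, 4, 1, 5, 2, 6]            # an 8 x 8 permutation (n = 3)
assign, final_cost = anneal(perm, n=3)
print("passes:", sorted(set(assign)), "cost:", final_cost)
```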
Optical Engineering | 2004
Ajay K. Katangur; Yi Pan; Martin D. Fraser
Multistage interconnection networks (MINs) are popular in switching and communication applications and have been used in telecommunication and parallel computing systems for many years. Crosstalk, a major problem introduced by an optical MIN, is caused by coupling two signals within a switching element. We focus on an efficient solution for avoiding crosstalk by routing traffic through an N × N optical network, using wavelength-division multiplexing (WDM) and a time-division approach, so that no two signals are coupled within any switching element. Under the constraint of avoiding crosstalk, the interest is in realizing a permutation that uses the minimum number of passes for routing. This routing problem is NP-hard. Researchers have already designed many heuristic algorithms to perform this routing, such as a sequential algorithm and a degree-descending algorithm. The genetic algorithm has been used successfully to improve performance over the heuristic algorithms, but its drawback is long running times. We use the simulated annealing algorithm to improve the performance of solving the problem and optimizing the result. In addition, a wavelength lower bound estimate on the minimum number of passes required is calculated and compared with the results obtained using heuristic, genetic, and simulated annealing algorithms. Many cases are tested, and the results are compared with those of other algorithms to show the advantages of the simulated annealing algorithm.
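A small companion sketch for the lower-bound idea mentioned here: since a crosstalk-free pass can route at most one message through any switching element, the minimum number of passes is at least the maximum number of messages sharing a switch. The omega-network model is the same assumption as in the previous sketch; the paper's actual wavelength lower bound may be computed differently.

```python
# Simple pass lower bound: max messages through any single switch.
from collections import Counter

def switches(src, dst, n):
    """(stage, switch) pairs crossed in an n-stage omega network."""
    N, pos, out = 1 << n, src, []
    for stage in range(n):
        pos = ((pos << 1) | (pos >> (n - 1))) & (N - 1)
        out.append((stage, pos >> 1))
        pos = (pos & ~1) | ((dst >> (n - 1 - stage)) & 1)
    return out

def pass_lower_bound(perm, n):
    load = Counter(sw for s, d in enumerate(perm) for sw in switches(s, d, n))
    return max(load.values())

print(pass_lower_bound([3, 7, 0, 4, 1, 5, 2, 6], n=3))
```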
ACM Southeast Regional Conference | 1998
Vijay K. Vaishnavi; Martin D. Fraser
Procedures are specified for the internal validation of a maturity measurement model for formal specification processes and for applying the measurement model to the broad class of cases needed for the external validation of this model. These procedures are organized into a validation framework that bridges the gap between measuring published cases and live projects, and that ensures that the next phase of validation with live cases will be based on measurements comparable to those obtained with published cases. The focus is on the problems and techniques needed to plan the validation of the model.
Annual Simulation Symposium | 1990
Martin D. Fraser; Ross A. Gagliano; Mark E. Schaefer
Modifying our previously developed simulation model [FRA89], we study in this paper the costs associated with distributed allocation of computing resources in a multitasking environment. Using funds endowed upon arrival, computing tasks compete for necessary resources through sealed-bid auctions to improve their processing schedules. The costs and times dedicated to auctioning are compared with the costs and times allowed for task processing. Measuring computing resources in terms of processing rates allows the task management algorithm, in the form of an auction, to have its requirements specified in the same way as the requirements for the simulated mission processing. Machine capacity is computed for and assigned to each competing task. Data are then compiled by segmented capacity classes. A unifying theme of past and current research is the efficiency of auctioning to allocate reconfigurable computing resources in a variable-capacity machine. We observed that at the optimal rates of occurrence of capacity classes, which minimize the total costs per successful completion, congestion was resolved through auctions generating endogenously implied prices that substantially exceeded the exogenously imposed price.
Annual Simulation Symposium | 1989
Martin D. Fraser; Ross A. Gagliano; Mark E. Schaefer
The allocation of computing resources and the scheduling of tasks in a multitasking environment are simulated using a distributed control model. The tasks compete for computing resources in a decentralized manner through sealed-bid auctions to improve their schedules, rather than having resources centrally administered by a host controller. Funds used for bidding are endowed to the tasks upon arrival at the computing system. The effects on completion times of three endowment strategies and two machine sizes are analyzed using a range of system capacities. Within each capacity class, an apparent cost, derived from the run parameters, is contrasted with an implied price generated by the auction process. Performance is examined in terms of congestion at various capacities. At optimal (lowest cost per successful completion) rates of occurrence of these capacity classes, an implied price arises that exceeds the "free access" price. This internally generated price appears to ration resources and time, thus discouraging congestion. Implementing such a distributed control algorithm suggests that determining a price schedule for allocating computing resources can be moved "to the left" in the system life cycle.
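A toy illustration (not the authors' model) of the endogenous implied-price idea: tasks under different endowment strategies place sealed bids for scarce slots, and the average winning bid is read off as the price the auction generates. The strategy names and all parameters are assumptions.

```python
# Toy comparison of endowment strategies in a sealed-bid slot auction;
# the "implied price" here is simply the mean winning bid.
import random

def endow(strategy, work):
    if strategy == "flat":
        return 100.0
    if strategy == "proportional":
        return 20.0 * work                  # funds scale with required work
    return random.uniform(50.0, 150.0)      # "random" endowment

def simulate(strategy, tasks=50, slots=30, seed=1):
    random.seed(seed)
    work = [random.randint(1, 5) for _ in range(tasks)]
    funds = [endow(strategy, w) for w in work]
    winning = []
    for _ in range(slots):
        live = [i for i in range(tasks) if work[i] > 0]
        if not live:
            break
        bids = {i: random.uniform(0.2, 0.6) * funds[i] for i in live}
        w = max(bids, key=bids.get)
        funds[w] -= bids[w]
        work[w] -= 1
        winning.append(bids[w])
    return sum(winning) / len(winning)      # implied per-slot price

for s in ("flat", "proportional", "random"):
    print(s, round(simulate(s), 2))
```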
Simulation | 1992
Ross A. Gagliano; Martin D. Fraser; Mark E. Schaefer
In this article, an alternative approach to control for computing systems, with possible distributed, parallel, or multiprocess application, is proposed and evaluated through simulation. Functions normally handled by centralized controllers, schedulers, arbiters, and priority schemes are accomplished through a decentralized model of control. Resource allocation, one important control function, is resolved within a Challenge Ring (CR) in which individual computing tasks independently exercise algorithms to gain access to computing resources, without a host; hence their interaction is called hostless. Simulated system performance is monitored by analyzing individual task processing times, total system times, resource availability, resource utilization, and system efficiency. Our preliminary experimental results indicate that such decentralized (or hostless) models can be superior to some standard centralized (or hosted) versions. Moreover, tasks in CR networks that interact through cooperative strategies exhibit better performance in some cases. Our overall results encourage the further exploration of decentralized control models, which could be useful in the continuing pursuit of alternative machine constructs (e.g., non-von Neumann architectures) and new distributed operational schemes (e.g., hostless network operating systems).
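A heavily simplified sketch of the hostless idea: tasks arranged in a ring take turns, with no central host, deciding whether to claim a free resource unit. The claim rule below is an illustrative stand-in for the paper's Challenge Ring algorithms, whose details the abstract does not give.

```python
# Decentralized ring allocation sketch; the claim rule is an assumption.
from collections import deque

def ring_allocate(demands, capacity):
    """Each task independently claims one unit per turn until satisfied."""
    ring = deque(range(len(demands)))
    got = [0] * len(demands)
    while capacity > 0 and any(got[i] < demands[i] for i in ring):
        i = ring[0]
        ring.rotate(-1)                    # pass control to the next task
        if got[i] < demands[i]:
            got[i] += 1                    # task claims a unit on its turn
            capacity -= 1
    return got

print(ring_allocate(demands=[3, 1, 4], capacity=6))  # -> [3, 1, 2]
```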