Publication


Featured research published by Farokh B. Bastani.


IEEE Transactions on Software Engineering | 1982

Software Reliability—Status and Perspectives

C. V. Ramamoorthy; Farokh B. Bastani

It is essential to assess the reliability of digital computer systems used for critical real-time control applications (e.g., nuclear power plant safety control systems). This involves the assessment of the design correctness of the combined hardware/software system as well as the reliability of the hardware. In this paper we survey methods of determining the design correctness of systems as applied to computer programs.


IEEE Transactions on Software Engineering | 1981

Application of a Methodology for the Development and Validation of Reliable Process Control Software

C. V. Ramamoorthy; Yu-King R. Mok; Farokh B. Bastani; Gene H. Chin; Keiichi Suzuki

This paper discusses the necessity of a good methodology for the development of reliable software, especially with respect to the final software validation and testing activities. A formal specification development and validation methodology is proposed. This methodology has been applied to the development and validation of pilot software incorporating typical features of critical software for nuclear power plant safety protection. The main features of the approach include the use of a formal specification language and the independent development of two sets of specifications. Analysis of the specifications consists of three parts: validation against the functional requirements, consistency and integrity checks of the specifications, and dual specification comparison based on a high-level symbolic execution technique. Dual design, implementation, and testing are performed. Automated tools to facilitate the validation and testing activities are developed to support the methodology. These include the symbolic executor and the test data generator/dual program monitor system. The experiences of applying the methodology to the pilot software are discussed, and the impact on the quality of the software is assessed.


IEEE Transactions on Knowledge and Data Engineering | 2007

A Flexible Content Adaptation System Using a Rule-Based Approach

Jiang He; Tong Gao; Wei Hao; I-Ling Yen; Farokh B. Bastani

Content adaptation is an important technique for mobile devices. Existing content adaptation systems have been developed with specific adaptation goals. In this paper, we present an extensible content adaptation system, Xadaptor. We take a rule-based approach to facilitate extensible, systematic, and adaptive content adaptation: Xadaptor integrates adaptation mechanisms for various content types, organizes them into a rule base, and invokes rules based on individual client information. We classify HTML page objects into structure, content, and pointer objects. Existing content adaptation techniques mainly focus on content objects and do not consider adaptation for structure and pointer objects. In Xadaptor, novel adaptation techniques for the structure object HTML table have been developed. We use fuzzy logic to model the adaptation quality and guide the adaptation decision. To demonstrate the feasibility of our approach, we have implemented a prototype system. Experimental studies show that Xadaptor is capable of on-the-fly content adaptation and is easily extensible.
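The rule-based dispatch described above can be pictured with a short sketch. The rule predicates, client attributes, and adaptation actions below are hypothetical illustrations and are not taken from the Xadaptor implementation.

```python
# Minimal sketch of rule-based content adaptation, loosely following the idea
# described above. All rule names, client fields, and adaptation functions are
# hypothetical; they are not taken from Xadaptor itself.

def shrink_images(page, client):
    # Content-object adaptation: request images scaled to the client's screen.
    page["images"] = [img + f"?width={client['screen_width']}" for img in page["images"]]
    return page

def linearize_tables(page, client):
    # Structure-object adaptation: flatten tables for narrow screens.
    page["tables"] = [" | ".join(row) for table in page["tables"] for row in table]
    return page

# Each rule pairs a predicate over client capabilities with an adaptation action.
RULES = [
    (lambda c: c["screen_width"] < 800, shrink_images),
    (lambda c: c["screen_width"] < 480, linearize_tables),
]

def adapt(page, client):
    for applies, action in RULES:
        if applies(client):
            page = action(page, client)
    return page

if __name__ == "__main__":
    client = {"screen_width": 320}
    page = {"images": ["hero.png"], "tables": [[["name", "value"], ["x", "1"]]]}
    print(adapt(page, client))
```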


Bioinformatics | 2004

A dynamically growing self-organizing tree (DGSOT) for hierarchical clustering gene expression profiles

Feng Luo; Latifur Khan; Farokh B. Bastani; I-Ling Yen; Jizhong Zhou

MOTIVATION: The increasing use of microarray technologies is generating large amounts of data that must be processed in order to extract useful and rational fundamental patterns of gene expression. Hierarchical clustering is one method used to analyze gene expression data, but traditional hierarchical clustering algorithms suffer from several drawbacks (e.g., a fixed topology structure; misclustered data that cannot be reevaluated). In this paper, we introduce a new hierarchical clustering algorithm that overcomes some of these drawbacks.

RESULTS: We propose a new tree-structured self-organizing neural network, called the dynamically growing self-organizing tree (DGSOT) algorithm, for hierarchical clustering. The DGSOT constructs a hierarchy from top to bottom by division. At each hierarchical level, the DGSOT optimizes the number of clusters, from which the proper hierarchical structure of the underlying dataset can be found. In addition, we propose a new cluster validation criterion based on the geometric property of the Voronoi partition of the dataset in order to find the proper number of clusters at each hierarchical level. This criterion uses the minimum spanning tree (MST) concept from graph theory and is computationally inexpensive for large datasets. A K-level up distribution (KLD) mechanism, which increases the scope of data distribution in the hierarchy construction, is used to improve the clustering accuracy. The KLD mechanism allows data misclustered in the early stages to be reevaluated at a later stage, increasing the accuracy of the final clustering result. The clustering result of the DGSOT is easily displayed as a dendrogram for visualization. Based on a yeast cell cycle microarray expression dataset, we found that our algorithm extracts gene expression patterns at different levels, the biological functionality enrichment in the clusters is considerably high, and the hierarchical structure of the clusters is more reasonable.

AVAILABILITY: DGSOT is available upon request from the authors.
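As a rough illustration of the top-down, level-by-level construction described above, the sketch below recursively divides a dataset and chooses the number of children at each level. KMeans and the silhouette score stand in for the growing self-organizing tree and the MST/Voronoi validation criterion, and the KLD mechanism is omitted, so this is not the published DGSOT algorithm.

```python
# Illustrative sketch of top-down divisive hierarchical clustering in the
# spirit of DGSOT. NOT the published algorithm: KMeans replaces the growing
# self-organizing tree and the silhouette score replaces the MST/Voronoi
# cluster validation criterion.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def divide(data, depth=0, max_depth=3, min_size=10):
    """Recursively split `data`, choosing the number of children per level."""
    if depth >= max_depth or len(data) < min_size:
        return {"size": len(data)}          # leaf node
    best_k, best_score, best_labels = 1, -1.0, None
    for k in range(2, min(5, len(data))):   # pick k with the best separation
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(data)
        score = silhouette_score(data, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    children = [divide(data[best_labels == c], depth + 1, max_depth, min_size)
                for c in range(best_k)]
    return {"size": len(data), "children": children}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "expression profiles": three well-separated groups.
    genes = np.vstack([rng.normal(m, 0.3, size=(50, 8)) for m in (0.0, 1.5, 3.0)])
    print(divide(genes))
```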


reliability and maintainability symposium | 1994

Reliability of systems with fuzzy-failure criterion

Farokh B. Bastani; Ing-Ray Chen; Tawei Tsao

In many situations, such as robot path planning and automated manufacturing systems, the output for a given input cannot be simply classified as being either correct (no failure) or incorrect (failure). Instead of an ad hoc binary classification of the correctness of the output, it is more intuitive to use a fuzzy set based classification scheme. One approach is to let the value of the fuzzy set membership function denote the degree of acceptability (i.e., correctness) of the output. In this paper, we investigate several such fuzzy-failure criteria and their effects on system reliability in embedded computer systems. We first model the fuzzy output level of a response to a sensor event as a random variable in the range of [0,1] with 0 indicating that the output is completely acceptable and 1 indicating that the output is completely unacceptable. Then we derive analytical expressions for the reliability of systems for various fuzzy-failure criteria and compare their numerical solutions. We conclude that the reliability of such systems depends significantly on the fuzzy-failure criterion defined by the system designer.
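A minimal Monte Carlo sketch of reliability under one possible fuzzy-failure criterion is given below. The Beta distribution for the output level, the threshold criterion, and all parameters are illustrative assumptions, not the authors' analytical model.

```python
# Monte Carlo sketch of the reliability of a system with a fuzzy-failure
# criterion. The output level of each response is a random variable in [0, 1]
# (0 = fully acceptable, 1 = fully unacceptable), as in the paper; the Beta
# distribution, the threshold criterion, and all parameters are illustrative
# assumptions rather than the authors' model.
import numpy as np

rng = np.random.default_rng(42)

def mission_reliability(n_events, threshold, n_trials=100_000):
    """P(no event in a mission exceeds the fuzzy-failure threshold)."""
    # Output level per sensor event: mostly acceptable, occasionally poor.
    levels = rng.beta(a=1.0, b=9.0, size=(n_trials, n_events))
    survived = (levels < threshold).all(axis=1)
    return survived.mean()

for threshold in (0.5, 0.7, 0.9):
    print(f"threshold={threshold}: R ~ {mission_reliability(100, threshold):.4f}")
```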


IEEE Transactions on Software Engineering | 1988

A class of inherently fault tolerant distributed programs

Farokh B. Bastani; I-Ling Yen; Ing-Ray Chen

Software for industrial process-control systems, such as nuclear power plant safety control systems and robots, can be very complex because of the large number of cases that must be considered. A design approach is proposed that uses decentralized control concepts and is based on E. W. Dijkstra's concept of self-stabilizing systems (1974). This method greatly simplifies the software, so that its correctness can be verified more easily. A simple control system is described for a simulated robot that is tolerant of partial failure of controllers and mechanisms, and permits online repair and enhancement of the control functions.
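The self-stabilization idea can be sketched with a toy controller that re-derives its action from freshly sensed values on every cycle, so a transiently corrupted state cannot persist. The robot-arm example, the gain, and all numbers below are hypothetical, not the paper's case study.

```python
# Minimal sketch of the self-stabilization idea behind the design approach
# described above: the control action depends only on values sensed in the
# current cycle, never on derived state that could remain corrupted, so a
# transient fault is corrected within a bounded number of control cycles.
# The robot-arm example and all constants are hypothetical.

class ArmController:
    def __init__(self, target):
        self.target = target
        self.position = 0.0          # may be corrupted by a transient fault

    def sense(self):
        return self.position         # stand-in for reading a real sensor

    def step(self):
        position = self.sense()                   # re-derive state each cycle
        error = self.target - position
        self.position = position + 0.5 * error    # move halfway to the target

arm = ArmController(target=10.0)
arm.position = -37.0                 # inject a transient fault
for cycle in range(12):
    arm.step()
print(f"position after recovery: {arm.position:.3f}")  # converges back to ~10
```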


international conference on tools with artificial intelligence | 2004

An effective support vector machines (SVMs) performance using hierarchical clustering

Mamoun Awad; Latifur Khan; Farokh B. Bastani; I-Ling Yen

The training time for SVMs to compute the maximal marginal hyperplane is at least O(N^2) in the data set size N, which makes it unfavorable for large data sets. This work presents a study for enhancing the training time of SVMs, specifically when dealing with large data sets, using hierarchical clustering analysis. We use the dynamically growing self-organizing tree (DGSOT) algorithm for clustering because it has proved to overcome the drawbacks of traditional hierarchical clustering algorithms. Clustering analysis helps find the boundary points between two classes, which are the most qualified data points to train SVMs. We present a new approach combining SVMs and DGSOT, which starts with an initial training set and expands it gradually using the clustering structure produced by the DGSOT algorithm. We compare our approach with the Rocchio Bundling technique in terms of accuracy loss and training time gain using two benchmark real data sets.
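A rough sketch of this training strategy is shown below: an SVM is first trained on cluster representatives and the training set is then grown with points from clusters near the decision boundary. Plain KMeans stands in for DGSOT, and the boundary heuristic is a simplification of the paper's approach.

```python
# Sketch of the idea described above: train an SVM on a small set of cluster
# representatives first, then grow the training set with the members of
# clusters whose centroids lie near the decision boundary. KMeans replaces
# DGSOT here, and the data is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Cluster each class separately and start from the cluster centroids.
train_X, train_y, clusters = [], [], []
for label in (0, 1):
    pts = X[y == label]
    km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(pts)
    train_X.append(km.cluster_centers_)
    train_y.append(np.full(20, label))
    clusters.append((pts, km.labels_, km.cluster_centers_))

svm = SVC(kernel="rbf").fit(np.vstack(train_X), np.concatenate(train_y))

# Expand: add the members of clusters whose centroids fall near the boundary.
for label, (pts, labels, centers) in zip((0, 1), clusters):
    near = np.abs(svm.decision_function(centers)) < 1.0
    for c in np.where(near)[0]:
        train_X.append(pts[labels == c])
        train_y.append(np.full((labels == c).sum(), label))

svm = SVC(kernel="rbf").fit(np.vstack(train_X), np.concatenate(train_y))
print("training-set size:", len(np.concatenate(train_y)))
```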


international symposium on object component service oriented real time distributed computing | 2002

A component-based approach for embedded software development

I-Ling Yen; Jayabharath Goluguri; Farokh B. Bastani; Latifur Khan; John Linn

The rapid growth in the demand of embedded systems and the increased complexity of embedded software pose an urgent need for advanced embedded software development techniques. Software technology is shifting toward semi-automated code generation and integration of systems from components. Component-based development (CBD) techniques can significantly reduce the time and cost for developing software systems. However, there are some difficult problems with the CBD approach. Component identification and retrieval as well as component composition require extensive knowledge of the components. Designers need to go through a steep learning curve in order to effectively compose a system out of available components. We discuss an integrated mechanism for component-based development of embedded software. We develop an On-line Repository for Embedded Software (ORES) to facilitate component management and retrieval. ORES uses an ontology-based approach to facilitate repository browsing and effective search. Based on ORES, we develop the code template approach to facilitate semi-automated component composition. A code template can be instantiated by different sets of components and, thus, offers more flexibility and configurability and better reuse. Another important aspect in embedded software is the nonfunctional requirements and properties. In ORES, we capture nonfunctional properties of components and provide facilities for the analysis of overall system properties.
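The code-template idea can be illustrated with a toy template instantiated by two different component sets. The template text, component names, and interfaces below are hypothetical; ORES itself is not reproduced here.

```python
# Toy sketch of the code-template approach described above: one template,
# instantiated by different component sets, yields different system code.
# The template and component names are hypothetical, not from ORES.
from string import Template

CONTROL_LOOP = Template("""\
def control_loop():
    reading = ${sensor}.read()
    command = ${controller}.compute(reading)
    ${actuator}.apply(command)
""")

# Two different component sets instantiate the same template.
print(CONTROL_LOOP.substitute(sensor="thermocouple",
                              controller="pid_controller",
                              actuator="heater_relay"))
print(CONTROL_LOOP.substitute(sensor="lidar",
                              controller="path_planner",
                              actuator="wheel_motors"))
```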


IEEE Transactions on Reliability | 1991

Effect of artificial-intelligence planning-procedures on system reliability

Ing-Ray Chen; Farokh B. Bastani

For an embedded real-time process-control system incorporating artificial-intelligence programs, the system reliability is determined by both the software-driven response computation time and the hardware-driven response execution time. A general model, based on the probability that the system can accomplish its mission under a time constraint without incurring failure, is proposed to estimate the software/hardware reliability of such a system. The factors which influence the proposed reliability measure are identified, and the effects of mission time, heuristics and real-time constraints on the system reliability with artificial-intelligence planning procedures are illustrated. An optimal search procedure might not always yield a higher reliability than that of a nonoptimal search procedure. Hence, design parameters and conditions under which one search procedure is preferred over another, in terms of improved software/hardware reliability, are identified.
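An illustrative Monte Carlo version of such a reliability measure is sketched below: the probability that planning plus execution time meets the deadline for every event and the hardware survives the mission. The distributions, deadline, and failure rate are assumptions for illustration, not the paper's model.

```python
# Illustrative Monte Carlo version of the reliability measure described above.
# All distributions and parameters are assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(7)

def mission_reliability(n_events, deadline, failure_rate, n_trials=100_000):
    # Planning (software) time: heavy-tailed, as heuristic search can be slow.
    plan = rng.lognormal(mean=-1.0, sigma=0.8, size=(n_trials, n_events))
    # Execution (hardware) time: roughly constant with small variation.
    execute = rng.normal(loc=0.3, scale=0.05, size=(n_trials, n_events))
    on_time = ((plan + execute) <= deadline).all(axis=1)
    # Hardware survives the whole mission (exponential failure law).
    mission_time = (plan + execute).sum(axis=1)
    alive = rng.random(n_trials) < np.exp(-failure_rate * mission_time)
    return (on_time & alive).mean()

print(mission_reliability(n_events=50, deadline=2.0, failure_rate=1e-3))
```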


International Journal on Artificial Intelligence Tools | 2008

EMPIRICAL ASSESSMENT OF MACHINE LEARNING BASED SOFTWARE DEFECT PREDICTION TECHNIQUES

Venkata U. B. Challagulla; Farokh B. Bastani; I-Ling Yen; Raymond A. Paul

Automated reliability assessment is essential for systems that entail dynamic adaptation based on runtime mission-specific requirements. One approach along this direction is to monitor and assess the system using machine learning-based software defect prediction techniques. Due to the dynamic nature of the software data collected, instance-based learning algorithms are proposed for this purpose. To evaluate the accuracy of these methods, the paper presents an empirical analysis of four different real-time software defect data sets using different predictor models. The results show that a combination of 1R and instance-based learning along with a consistency-based subset evaluation technique provides relatively better consistency in achieving accurate predictions as compared with other models. No direct relationship is observed between the skewness present in the data sets and the prediction accuracy of these models. Principal Component Analysis (PCA) does not show a consistent advantage in improving the...
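A small sketch in the spirit of this study is given below, using scikit-learn's k-nearest-neighbours classifier as the instance-based learner and a simple univariate filter in place of consistency-based subset evaluation; the module metrics are synthetic.

```python
# Sketch of instance-based software defect prediction. KNeighborsClassifier
# stands in for the instance-based learner and SelectKBest replaces the
# consistency-based subset evaluation; the metrics data is entirely synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_modules = 500
# Hypothetical static-code metrics: LOC, cyclomatic complexity, fan-out, churn.
metrics = rng.lognormal(mean=2.0, sigma=0.7, size=(n_modules, 4))
# Defect-proneness loosely driven by complexity and churn (purely synthetic).
defective = (0.02 * metrics[:, 1] + 0.02 * metrics[:, 3]
             + rng.normal(0, 0.3, n_modules)) > 0.5

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=2),
                      KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(model, metrics, defective, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```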

Collaboration


Dive into Farokh B. Bastani's collaborations.

Top Co-Authors

I-Ling Yen, University of Texas at Dallas
Jicheng Fu, University of Central Oklahoma
Liangliang Xiao, University of Texas at Dallas
Yansheng Zhang, University of Texas at Dallas
Hui Ma, University of Texas at Dallas
Wei Zhu, University of Texas at Dallas
Bojan Cukic, University of North Carolina at Charlotte
Dongfeng Wang, University of Texas at Dallas