Kevin D. Reilly
University of Alabama at Birmingham
Publications
Featured research published by Kevin D. Reilly.
Information Sciences | 2007
Leonard J. Jowers; James J. Buckley; Kevin D. Reilly
In previous studies we first concentrated on utilizing crisp simulation to produce discrete event fuzzy systems simulations. Then we extended this research to the simulation of continuous fuzzy systems models. In this paper we continue our study of continuous fuzzy systems using crisp continuous simulation. Consider a crisp continuous system whose evolution depends on differential equations. Such a system contains a number of parameters that must be estimated. Usually point estimates are computed and used in the model. However, these point estimates typically have uncertainty associated with them. We propose to incorporate uncertainty by using fuzzy numbers as estimates of these unknown parameters. Fuzzy parameters convert the crisp system into a fuzzy system. Trajectories describing the behavior of the system become fuzzy curves. We employ crisp continuous simulation to estimate these fuzzy trajectories. Three examples are discussed.
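The idea of fuzzy trajectories can be sketched with a toy decay model. This is an illustrative example, not code from the paper: a triangular fuzzy number stands in for an uncertain decay rate in dy/dt = -k*y, and crisp Euler simulations at the endpoints of each alpha-cut bound the fuzzy trajectory. The endpoint trick is exact here only because the solution is monotone in k; the fuzzy number (1.5, 2.0, 2.5) is an assumed value.

```python
def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (a, b, c) at membership level alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def simulate_decay(k, y0=1.0, t_end=1.0, steps=100):
    """Crisp Euler integration of dy/dt = -k*y; returns y(t_end)."""
    dt = t_end / steps
    y = y0
    for _ in range(steps):
        y += dt * (-k * y)
    return y

# Hypothetical fuzzy decay rate "about 2", modeled as triangular (1.5, 2.0, 2.5).
k_fuzzy = (1.5, 2.0, 2.5)

# Fuzzy trajectory endpoint at a few alpha levels: since y(t) is monotone
# decreasing in k, the envelope comes from the alpha-cut interval endpoints.
for alpha in (0.0, 0.5, 1.0):
    k_lo, k_hi = alpha_cut(k_fuzzy, alpha)
    y_hi = simulate_decay(k_lo)   # slower decay -> upper envelope
    y_lo = simulate_decay(k_hi)   # faster decay -> lower envelope
    print(f"alpha={alpha:.1f}: y(1) in [{y_lo:.4f}, {y_hi:.4f}]")
```

At alpha = 1 the interval collapses to the crisp point estimate; lower alpha levels widen the band, giving the fuzzy curve described in the abstract.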
BMC Bioinformatics | 2006
Yonghui Chen; Kevin D. Reilly; Alan P. Sprague; Zhijie Guan
Background: Protein sequence clustering has been widely used as part of the analysis of protein structure and function. In most cases single linkage or graph-based clustering algorithms have been applied. OPTICS (Ordering Points To Identify the Clustering Structure) is an attractive approach due to its emphasis on visualization of results and support for interactive work, e.g., in choosing parameters. However, OPTICS has not, as far as we know, been used for protein sequence clustering.

Results: In this paper, a system for clustering proteins, SEQOPTICS (SEQuence clustering with OPTICS), is demonstrated. The system is implemented with Smith-Waterman as the protein distance measure and OPTICS at its core to perform protein sequence clustering. SEQOPTICS is tested with four data sets from different data sources. Visualization of the sequence clustering structure is demonstrated as well.

Conclusion: The system was evaluated by comparison with other existing methods. Analysis of the results demonstrates that SEQOPTICS performs better on several evaluation criteria, including the Jaccard coefficient, precision, and recall. It is a promising protein sequence clustering method, with possible future improvements in parallel computing and other protein distance measures.
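A rough sense of the approach, a density ordering (OPTICS) over pairwise sequence distances, can be given with a self-contained sketch. This is not SEQOPTICS itself: plain edit distance stands in for Smith-Waterman scoring, the sequences are made up, and the OPTICS core is simplified (no epsilon bound, lazy heap deletion).

```python
import heapq

def edit_distance(s, t):
    # Dynamic-programming edit distance; a simple stand-in for Smith-Waterman.
    m, n = len(s), len(t)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                         prev[j - 1] + (s[i - 1] != t[j - 1]))
        prev = cur
    return prev[n]

def core_distance(dist, p, min_pts):
    ds = sorted(dist[p])  # includes dist[p][p] == 0, i.e. the point counts itself
    return ds[min_pts - 1] if len(ds) >= min_pts else float('inf')

def optics(dist, min_pts=2):
    """Minimal OPTICS: returns (ordering, reachability) from a distance matrix."""
    n = len(dist)
    reach = [float('inf')] * n
    processed = [False] * n
    order = []
    for start in range(n):
        if processed[start]:
            continue
        heap = [(float('inf'), start)]
        while heap:
            _, p = heapq.heappop(heap)
            if processed[p]:
                continue            # stale heap entry
            processed[p] = True
            order.append(p)
            cd = core_distance(dist, p, min_pts)
            for q in range(n):
                if not processed[q]:
                    new_r = max(cd, dist[p][q])
                    if new_r < reach[q]:
                        reach[q] = new_r
                        heapq.heappush(heap, (new_r, q))
    return order, reach

# Toy sequences: two tight clusters of similar strings (hypothetical data).
seqs = ["ACGTACGT", "ACGTACGA", "ACGTACGG", "TTTTCCCC", "TTTTCCCG"]
D = [[edit_distance(s, t) for t in seqs] for s in seqs]
order, reach = optics(D, min_pts=2)
print("order:", order)
print("reachability:", reach)
```

In the reachability output, small values mark points inside a cluster and a large jump marks the boundary between the two clusters, which is exactly the structure OPTICS visualizes.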
Soft Computing | 2004
James J. Buckley; Kevin D. Reilly; Xidong Zheng
We apply our new approach to modeling uncertain probabilities to queuing theory and the optimal design of web servers. This involves using fuzzy, finite, regular Markov chains to determine the fuzzy steady state probabilities and then computing the fuzzy numbers for system performance. We first ignore revenues and costs in determining an optimal system and then we incorporate these factors for optimal design. Then we add two new phenomena associated with the web in our optimization models: “burstiness” and “long tailed distributions”.
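The fuzzy steady-state computation can be illustrated on the smallest possible case, a two-state idle/busy server chain with triangular fuzzy transition probabilities (all numbers below are hypothetical, not taken from the paper). Because the busy probability a/(a+b) is monotone in each parameter, evaluating the corners of each alpha-cut box gives the exact interval here; in general a search over the box would be needed.

```python
def steady_state_2state(a, b):
    """Stationary distribution of the 2-state chain [[1-a, a], [b, 1-b]]."""
    return (b / (a + b), a / (a + b))

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (a, b, c) at membership level alpha."""
    lo, mid, hi = tri
    return (lo + alpha * (mid - lo), hi - alpha * (hi - mid))

# Hypothetical uncertain transition probabilities as triangular fuzzy numbers.
a_fuzzy = (0.2, 0.3, 0.4)   # idle -> busy
b_fuzzy = (0.5, 0.6, 0.7)   # busy -> idle

# Fuzzy steady-state probability of "busy": sweep the alpha-cut box corners.
for alpha in (0.0, 0.5, 1.0):
    a_lo, a_hi = alpha_cut(a_fuzzy, alpha)
    b_lo, b_hi = alpha_cut(b_fuzzy, alpha)
    corners = [steady_state_2state(a, b)[1]
               for a in (a_lo, a_hi) for b in (b_lo, b_hi)]
    print(f"alpha={alpha}: P(busy) in [{min(corners):.3f}, {max(corners):.3f}]")
```

At alpha = 1 the interval collapses to the crisp value 1/3; at alpha = 0 it spans roughly [0.222, 0.444], which is the kind of fuzzy performance number the design optimization then works with.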
Archive | 2005
Francisco Hernández; Purushotham Bangalore; Jeff Gray; Kevin D. Reilly
Grid computing aims at managing resources in a heterogeneous distributed environment. The Globus Toolkit provides a set of components that can be used to build Grid-enabled applications. Presently, applications are typically hand-crafted either by using a set of command line interfaces, or by using a set of Java packages provided by the Java CoG Kit. The purpose of this work is to introduce a high-level layer that abstracts and simplifies the development of applications within the Globus Toolkit context by creating graphical workflows of applications using domain-specific modeling techniques.
Concurrency and Computation: Practice and Experience | 2006
Francisco Hernández; Purushotham Bangalore; Jeff Gray; Zhijie Guan; Kevin D. Reilly
The Grid has proven to be a successful paradigm for distributed computing. However, constructing applications that exploit all the benefits that the Grid offers is still not optimal for both inexperienced and experienced users. Recent approaches to solving this problem employ a high‐level abstract layer to ease the construction of applications for different Grid environments. These approaches help facilitate construction of Grid applications, but they are still tied to specific programming languages or platforms. A new approach is presented in this paper that uses concepts of domain‐specific modeling (DSM) to build a high‐level abstract layer. With this DSM‐based abstract layer, users are able to create Grid applications without knowledge of specific programming languages or being bound to specific Grid platforms. An additional benefit of DSM is the capability to generate software artifacts for various Grid environments. This paper presents the Grid Automation and Generative Environment (GAUGE). The goal of GAUGE is to automate the generation of Grid applications to allow inexperienced users to exploit the Grid fully. At the same time, GAUGE provides an open framework that experienced users can build upon and extend to tailor their applications to particular Grid environments or specific platforms. GAUGE employs domain‐specific modeling techniques to accomplish this challenging task.
International Journal of Parallel Programming | 1982
James W. Hooper; Kevin D. Reilly
Each discrete event simulation language incorporates a time control procedure to conduct timing management and next event selection. Each time control procedure embodies, and thus imposes, a strategy (approach, method) for next event selection, and thereby determines the world view of a language. The three generally recognized strategies are event scheduling, activity scanning, and process interaction. This paper presents algorithmic formulations of the three strategies and their modeling routines, as well as detailed discussions and comparisons of the strategies. The algorithmic formulations serve to aid understanding by describing essential aspects of the strategies while excluding implementation details which are not strategy-dependent and which tend to detract from the essential concepts. A significant practical application of the formulations is discussed. This consists of merging the algorithms for the event scheduling and process interaction strategies into one algorithm, which then served as a model for combining GPSS and GASP IV into a simulation system providing the individual capabilities of both languages, and the capability to intermix GPSS and GASP within a single model.
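The event scheduling strategy can be sketched in a few lines: a future-event list kept as a priority queue, a clock that jumps to the time of the next imminent event, and event routines that schedule further events. The machine breakdown/repair model below is an illustrative toy, not one of the paper's language-neutral formulations.

```python
import heapq
import itertools

class EventScheduler:
    """Minimal event-scheduling time control: a future-event list ordered by time."""
    def __init__(self):
        self.clock = 0.0
        self._fel = []                 # future event list, kept as a heap
        self._seq = itertools.count()  # tie-breaker for simultaneous events

    def schedule(self, delay, action):
        heapq.heappush(self._fel, (self.clock + delay, next(self._seq), action))

    def run(self, until=float('inf')):
        # Next event selection: advance the clock to the most imminent event.
        while self._fel and self._fel[0][0] <= until:
            self.clock, _, action = heapq.heappop(self._fel)
            action(self)               # an event routine may schedule more events

# Toy model: a machine that alternates between running and under repair.
log = []

def breakdown(sim):
    log.append((sim.clock, "breakdown"))
    sim.schedule(2.0, repair)          # repair takes 2 time units

def repair(sim):
    log.append((sim.clock, "repaired"))
    sim.schedule(5.0, breakdown)       # machine then runs for 5 time units

sim = EventScheduler()
sim.schedule(5.0, breakdown)
sim.run(until=20.0)
print(log)   # breakdowns at t=5, 12, 19; repairs at t=7, 14
```

The process interaction and activity scanning world views would express the same model differently (as a machine "process" or as scanned activity conditions); the merged GPSS/GASP system described in the paper reconciles such views inside one timing routine.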
technical symposium on computer science education | 1996
Anthony C. L. Barnard; Barrett R. Bryant; Warren T. Jones; Kevin D. Reilly
It is important that university curricula appropriately reflect the rapid advances taking place in computer science ([Fole88], [Hart92]). The area of telecommunications and computer networks is undergoing particularly rapid change. A conference sponsored by the BellSouth Foundation [Jone90] called attention to the importance of telecommunications to the computer field and the need to emphasize this area in coursework and laboratories at both the undergraduate and graduate levels (see also [Jone91]). Similarly, an NSF report [NCR192] on research priorities in networking and communications expressed an urgent need for educational programs to keep up with the rapid advances that have occurred in this area.
International Journal of Approximate Reasoning | 1987
Akram Salah; Kevin D. Reilly
The simple production rule representation is generalized by adding programs to a management system that manipulates rules in a rule-based system. By adopting this methodology, a single generalized rule can represent a group of simple ones. Programs are then employed to satisfy the general rule in a partial way, recursively reducing a decision problem into smaller ones of the same nature until a decision is made. It is shown that the reduction method is more efficient than the simple rule approach and that it minimizes the number of rules needed to express a problem. The concept of using a management program to manipulate a set of rules is illustrated by solving a problem in a differential diagnosis expert system. A comparison of the number of rules employed to express the problem shows the advantages of the reduction methodology over the simple rule representation.
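The reduction idea, one generalized rule plus a management program that applies it recursively instead of many simple rules, can be sketched on a toy differential-diagnosis table. The diseases and findings below are hypothetical, invented purely for illustration.

```python
# Hypothetical knowledge base: expected findings for each candidate diagnosis.
KB = {
    "flu":     {"fever": True,  "rash": False, "cough": True},
    "measles": {"fever": True,  "rash": True,  "cough": True},
    "cold":    {"fever": False, "rash": False, "cough": True},
}

def reduce_diagnosis(candidates, findings):
    """Management program: recursively apply the one generalized rule
    'eliminate any candidate inconsistent with an observed finding',
    shrinking the decision problem one finding at a time."""
    if len(candidates) <= 1 or not findings:
        return candidates
    (attr, value), *rest = findings
    survivors = {d for d in candidates if KB[d][attr] == value}
    return reduce_diagnosis(survivors, rest)

result = reduce_diagnosis(set(KB), [("fever", True), ("rash", False)])
print(result)   # {'flu'}
```

A simple-rule encoding of the same table would need one rule per diagnosis/finding combination; here a single parameterized rule plus the recursive driver covers them all, which is the economy the abstract describes.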
ACM Sigsoft Software Engineering Notes | 2005
Francisco Hernández; Purushotham Bangalore; Kevin D. Reilly
The present work describes an approach to simplifying the development and deployment of applications for the Grid. Our approach aims at hiding accidental complexities (e.g., low-level Grid technologies) met when developing these kinds of applications. To realize this goal, the work focuses on the development of end-user tools using concepts of domain engineering and domain-specific modeling which are modern software engineering methods for automating the development of software. This work is an attempt to contribute to the long term research goal of empowering users, to create complex applications for the Grid without depending on the expertise of support teams or on hand-crafted solutions.
International Journal of Developmental Neuroscience | 2002
Kevin D. Reilly
After evaluating general features and attributes of the agent notion, the overlap of features in candidate (attribute) cores, and several less central features, the paper addresses agent and related theory in neuroscience, observing how agent notions have penetrated portions of this field and how the field itself emphasizes and further develops some agent themes via, e.g., schema theory, neural net-artificial intelligence (AI) comparisons, and other research. In the remaining sections, models for the development of memory strategies in children are presented, illustrating cooperative and competitive neural modeling agents, an active role for a "human agent in the loop," and the integration of broadly based neural network (NN) modeling with other bio-inspired models.