Publication


Featured research published by Tariq M. King.


International Symposium on Autonomous Decentralized Systems | 2007

Towards Self-Testing in Autonomic Computing Systems

Tariq M. King; Djuradj Babich; Jonatan Alava; Peter J. Clarke; Ronald Stevens

As researchers and members of the IT industry move towards a vision of computing systems that manage themselves, it is imperative to investigate ways to dynamically validate these systems to avoid the high cost of system failures. Although research continues to advance in many areas of autonomic computing, there is a lack of development in the area of testing these types of systems at runtime. Self-managing features in autonomic systems dynamically invoke changes to the structure and behavior of components that may already be operating in an unpredictable environment, further emphasizing the need for runtime testing. In this paper we propose a framework that dynamically validates changes in autonomic computing systems. Our framework extends the current structure of autonomic computing systems to include self-testing as an implicit characteristic. We validate our framework by creating a prototype of an autonomic container that incorporates the ability to self-test.
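A minimal sketch of the idea, assuming a stack-like container: the AutonomicContainer class, its selfConfigure/selfTest methods, and the capacity policy below are illustrative inventions, not the paper's prototype.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Minimal sketch of an autonomic container that validates its own
 * self-configuration at runtime. All names are illustrative, not the paper's.
 */
public class AutonomicContainer<T> {
    private final Deque<T> elements = new ArrayDeque<>();
    private int capacity = 16;

    public void add(T item) {
        if (elements.size() >= capacity) {
            selfConfigure();   // autonomic change: grow the capacity
            selfTest();        // runtime validation of that change
        }
        elements.push(item);
    }

    // Self-configuration stand-in for a real self-management policy.
    private void selfConfigure() {
        capacity *= 2;
    }

    // Runtime self-test: the new configuration must still accommodate the contents.
    private void selfTest() {
        if (capacity < elements.size()) {
            throw new IllegalStateException("Self-test failed: capacity below current size");
        }
    }
}
```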


Journal of Computers | 2007

An Integrated Self-Testing Framework for Autonomic Computing Systems

Tariq M. King; Alain E. Ramirez; Rodolfo Cruz; Peter J. Clarke

As the technologies of autonomic computing become more prevalent, it is essential to develop methodologies for testing their dynamic self-management operations. Self-management features in autonomic systems induce structural and behavioral changes to the system during its execution, which need to be validated to avoid costly system failures. The high level of automation in autonomic systems also means that human errors such as incorrect goal specification could yield potentially disastrous effects on the components being managed, further emphasizing the need for runtime testing. In this paper we propose a self-testing framework for autonomic computing systems to dynamically validate change requests. Our framework extends the current architecture of autonomic systems to include self-testing as an implicit characteristic, regardless of the self-management features being implemented. We validate our framework by creating a prototype of an autonomic system that incorporates the ability to self-test.
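One way to picture self-testing as an implicit characteristic of the autonomic architecture is a control loop that inserts a test phase before executing a planned change. The sketch below is an assumption-laden illustration: only the monitor/analyze/plan/execute phases come from standard autonomic computing vocabulary; the class, method names, and ChangePlan type are ours, not the paper's API.

```java
/**
 * Hedged sketch of an autonomic control loop extended with an implicit test
 * step. The SelfTestingAutonomicManager class and its members are
 * illustrative assumptions, not the framework described in the paper.
 */
abstract class SelfTestingAutonomicManager {

    /** One iteration of the control loop with self-testing built in. */
    public final void controlLoop() {
        Object symptoms = monitor();
        Object analysis = analyze(symptoms);
        ChangePlan changePlan = plan(analysis);
        if (test(changePlan)) {
            execute(changePlan);   // the change request passed its runtime tests
        } else {
            report(changePlan);    // surface the rejected change instead of applying it
        }
    }

    protected abstract Object monitor();
    protected abstract Object analyze(Object symptoms);
    protected abstract ChangePlan plan(Object analysis);
    protected abstract boolean test(ChangePlan changePlan);   // runtime validation of the change
    protected abstract void execute(ChangePlan changePlan);
    protected abstract void report(ChangePlan changePlan);

    /** Marker for a planned self-management change. */
    interface ChangePlan { }
}
```

Rejecting a plan at the test step is what would let errors such as an incorrect goal specification be caught before they affect the managed components.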


ACM Symposium on Applied Computing | 2008

A reusable object-oriented design to support self-testable autonomic software

Tariq M. King; Alain E. Ramirez; Peter J. Clarke; Barbara Quinones-Morales

As the enabling technologies of autonomic computing continue to advance, it is imperative for researchers to exchange the details of their proposed techniques for designing, developing, and validating autonomic systems. Many of the software engineering issues related to building dependable autonomic systems can only be revealed by studying detailed designs and prototype implementations. In this paper we present a reusable object-oriented design for developing self-testable autonomic software. Our design aims to reduce the effort required to develop autonomic systems that are capable of runtime testing. Furthermore, we provide low-level implementation details of a case study, Autonomic Job Scheduler (AJS), developed using the proposed design.
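As a rough sketch of how self-testability might be factored into reusable object-oriented abstractions: the SelfTestable and SelfManaging interfaces and the toy JobScheduler below are hypothetical, not the AJS design described in the paper.

```java
/**
 * Hypothetical sketch of reusable interfaces for self-testable autonomic
 * software; the names and the toy JobScheduler are illustrative only.
 */
interface SelfTestable {
    boolean runSelfTests();          // built-in runtime test suite
}

interface SelfManaging {
    void reconfigure(String policy); // entry point for autonomic changes
}

/** A concrete system reuses the design by implementing both interfaces. */
final class JobScheduler implements SelfManaging, SelfTestable {
    private int maxConcurrentJobs = 4;

    @Override
    public void reconfigure(String policy) {
        if ("high-throughput".equals(policy)) {
            maxConcurrentJobs = 16;  // self-optimizing change
        }
    }

    @Override
    public boolean runSelfTests() {
        return maxConcurrentJobs > 0; // invariant check standing in for real tests
    }
}
```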


ACM Southeast Regional Conference | 2007

A self-testing autonomic container

Ronald Stevens; Brittany Parsons; Tariq M. King

Many strategies have been proposed to address the problems associated with managing increasingly complex computing systems. IBM's Autonomic Computing (AC) paradigm is one such strategy that seeks to relieve system administrators of many of the burdensome tasks associated with manually managing highly complex systems. Researchers have been heavily investigating many areas of AC systems, but there remains a lack of development in the area of testing these systems at runtime. Dynamic self-configuration, self-healing, self-optimizing, and self-protecting features of autonomic systems require that validation be an integral part of these types of systems. In this paper we propose a methodology for testing AC systems at runtime using copies of managed resources. We realize the architecture of a self-testing framework using a small AC system. Our system is based on the concept of an autonomic container, which is a data structure that possesses autonomic characteristics and the added ability to self-test.
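The core idea of testing against copies of managed resources, so that a failing test never disturbs the live resource, can be sketched as follows; the SelfTestingStack class and its selfTest logic are illustrative assumptions, not the paper's container.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

/**
 * Sketch of runtime testing against a copy of the managed resource so that a
 * failing test never disturbs live data. Names and logic are illustrative only.
 */
final class SelfTestingStack<T> {
    private final List<T> items = new ArrayList<>();

    public void push(T item) {
        items.add(item);
    }

    public T pop() {
        return items.remove(items.size() - 1);
    }

    /** Exercise push/pop behavior on a copy of the stack; the live stack is untouched. */
    public boolean selfTest() {
        List<T> copy = new ArrayList<>(items);   // test copy of the managed resource
        if (copy.isEmpty()) {
            return true;                         // nothing to exercise yet
        }
        int before = copy.size();
        T top = copy.remove(copy.size() - 1);    // pop on the copy
        copy.add(top);                           // push it back on the copy
        return copy.size() == before && Objects.equals(copy.get(before - 1), top);
    }
}
```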


ACM Southeast Regional Conference | 2008

A self-testing autonomic job scheduler

Alain E. Ramirez; Barbara Morales; Tariq M. King

Although researchers have been exchanging ideas on the design and development of autonomic systems, there has been little emphasis on validation. In an effort to stimulate interest in the area of testing these self-managing systems, some researchers have developed lightweight prototypical applications to show the feasibility of dynamically validating runtime changes to autonomic systems. However, in order to reveal some of the greater challenges associated with building dependable autonomic systems, more complex prototype implementations must be developed and studied. In this paper we present implementation details of a self-testable autonomic job scheduling system, which was used as the basis for our investigation on testing autonomic systems.


Journal of Systems and Software | 2008

Analyzing clusters of class characteristics in OO applications

Peter J. Clarke; Djuradj Babich; Tariq M. King; B. M. Golam Kibria

The transition from Java 1.4 to Java 1.5 has provided the programmer with more flexibility due to the inclusion of several new language constructs, such as parameterized types. This transition is expected to increase the number of class clusters exhibiting different combinations of class characteristics. In this paper we investigate how the number and distribution of clusters are expected to change during this transition. We present the results of an empirical study where we analyzed applications written in both Java 1.4 and 1.5. In addition, we show how the variability of the combinations of class characteristics may affect the testing of class members.
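For illustration, the pair of classes below shows how moving from Java 1.4 raw types to Java 1.5 parameterized types changes a class's characteristics (and removes a cast that tests would otherwise have to exercise); the roster classes are invented examples, not drawn from the study's corpus.

```java
import java.util.ArrayList;
import java.util.List;

// Java 1.4 style: raw collection types force a cast at every use site.
class LegacyRoster {
    private final List names = new ArrayList();

    void add(Object name) {
        names.add(name);
    }

    String get(int i) {
        return (String) names.get(i);
    }
}

// Java 1.5 style: the parameterized type becomes part of the class's
// characteristics and the cast disappears.
class GenericRoster {
    private final List<String> names = new ArrayList<String>();

    void add(String name) {
        names.add(name);
    }

    String get(int i) {
        return names.get(i);
    }
}
```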


Computer Software and Applications Conference | 2006

Automatic Validation of Java Page Flows Using Model-Based Coverage Criteria

Jonatan Alava; Tariq M. King; Peter J. Clarke

There continue to be advances in the automation of Web application development; however, testing these applications remains mainly a manual process. In this paper we present a methodology to test page flows using traditional test coverage criteria in conjunction with an automated testing tool. The criteria are applied in the context of page flows and transformed into: all pages, all actions, all links, and all forwards. We define a state-based model of the application using information from the page flow and then use this model as the basis for generating a script to be executed by the automated testing tool. This finite state machine (FSM) also models the various combinations of inputs associated with the user interface of the application. During execution of the script, test cases are randomly generated using the FSM along with textual input from a pre-defined data pool. The adequacy of the test coverage based on the criteria for the page flow is determined by analyzing the elements covered during execution of the test script.
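A toy sketch of the FSM idea, in which random walks over a page-flow model become test cases; the PageFlowModel class, its transition labels, and the page names are hypothetical, not the paper's tool.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

/** Toy page-flow FSM; random walks over it become test cases. Illustrative only. */
final class PageFlowModel {
    // page -> (action label -> next page)
    private final Map<String, Map<String, String>> transitions = new HashMap<>();
    private final Random random = new Random();

    void addTransition(String page, String action, String nextPage) {
        transitions.computeIfAbsent(page, p -> new LinkedHashMap<>()).put(action, nextPage);
    }

    /** Generate one random test case (a sequence of transitions) of bounded length. */
    List<String> generateTestCase(String startPage, int maxSteps) {
        List<String> steps = new ArrayList<>();
        String page = startPage;
        for (int i = 0; i < maxSteps; i++) {
            Map<String, String> out = transitions.get(page);
            if (out == null || out.isEmpty()) {
                break;                               // terminal page reached
            }
            List<String> actions = new ArrayList<>(out.keySet());
            String action = actions.get(random.nextInt(actions.size()));
            String next = out.get(action);
            steps.add(page + " -> [" + action + "] -> " + next);
            page = next;
        }
        return steps;
    }

    public static void main(String[] args) {
        PageFlowModel flow = new PageFlowModel();
        flow.addTransition("login", "submit", "home");
        flow.addTransition("home", "search", "results");
        flow.addTransition("results", "back", "home");
        flow.generateTestCase("login", 5).forEach(System.out::println);
    }
}
```

Coverage criteria such as all pages or all actions would then be assessed by recording which states and transitions the generated walks actually visit.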


Proceedings of the 9th ACM SIGSOFT International Workshop on Automating TEST Case Design, Selection, and Evaluation - A-TEST 2018 | 2018

Abstract flow learning for web application test generation

Dionny Santiago; Peter J. Clarke; Patrick Alt; Tariq M. King

Achieving high software quality today involves manual analysis, test planning, documentation of testing strategy and test cases, and the development of scripts to support automated regression testing. To keep pace with software evolution, test artifacts must also be frequently updated. Although test automation practices help mitigate the cost of regression testing, a large gap exists between the current paradigm and fully automated software testing. Researchers and practitioners are realizing the potential for artificial intelligence and machine learning (ML) to help bridge the gap between the testing capabilities of humans and those of machines. This paper presents an ML approach that combines a language specification, including a grammar for describing test flows, with a trainable test flow generation model, in order to generate tests in a way that is trainable, reusable across different applications, and generalizable to new applications.
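As a rough illustration of the kind of structured test flow such a grammar could describe and a trained model could emit; the TestFlow type and its verb vocabulary are invented for this sketch, not the paper's language specification.

```java
import java.util.List;

/**
 * Illustrative only: a tiny structured test flow of the kind a grammar could
 * describe and a trained model could generate. Not the paper's specification.
 */
final class TestFlow {
    enum Verb { NAVIGATE, ENTER, CLICK, VERIFY }

    record Step(Verb verb, String target, String value) {
        @Override
        public String toString() {
            return verb + " " + target + (value.isEmpty() ? "" : " \"" + value + "\"");
        }
    }

    public static void main(String[] args) {
        // A flow a generation model might emit for a hypothetical login page.
        List<Step> flow = List.of(
                new Step(Verb.NAVIGATE, "login page", ""),
                new Step(Verb.ENTER, "username field", "alice"),
                new Step(Verb.ENTER, "password field", "secret"),
                new Step(Verb.CLICK, "login button", ""),
                new Step(Verb.VERIFY, "home page", ""));
        flow.forEach(System.out::println);
    }
}
```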


Software Engineering and Knowledge Engineering | 2008

A Meta-model to Support Regression Testing of Web Applications.

Yanelis Hernandez; Tariq M. King; Jairo Pava; Peter J. Clarke


Archive | 2014

Validating Autonomic Services: Challenges and Approaches

Tariq M. King; Peter J. Clarke; Mohammed Akour; Annaji Sharma Ganti

Collaboration


An overview of Tariq M. King's collaborations.

Top Co-Authors

Peter J. Clarke (Florida International University)
Alain E. Ramirez (Florida International University)
Djuradj Babich (Florida International University)
Abdul Muqueet (Florida International University)
Annaji Sharma Ganti (North Dakota State University)
Armando Barreto (Florida International University)
B. M. Golam Kibria (Florida International University)
Ben Wongsaroj (Florida Memorial University)