Art Vandenberg
Georgia State University
Publication
Featured research published by Art Vandenberg.
Computer Communications | 2008
Chao Xie; Guihai Chen; Art Vandenberg; Yi Pan
Modeling peer-to-peer (P2P) networks is a challenge for P2P researchers. In this paper, we provide a detailed analysis of large-scale hybrid P2P overlay network topology, using Gnutella as a case study. First, we re-examine the power-law distributions of the Gnutella network discovered by previous researchers. Our results show that the current Gnutella network deviates from the earlier power-laws, suggesting that the Gnutella network topology has evolved considerably over time. Second, we identify important trends with regard to the evolution of the Gnutella network between September 2005 and February 2006. After analyzing the limitations of these power-laws, we provide a novel two-layered approach to study the topology of the Gnutella network. We divide the Gnutella network into two layers, namely the mesh and the forest, to model the hybrid and highly dynamic architecture of the current Gnutella network. We give a detailed analysis of the two-layered overlay and present six power-laws and one empirical law to characterize the topology. Using the proposed two-layered approach and laws, realistic topologies can be generated and the realism of artificial topologies can be validated.
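As a rough illustration of the kind of power-law check the abstract describes, the sketch below fits a line to a degree distribution in log-log space. The graph is a synthetic scale-free stand-in, not a Gnutella crawl, and the function name and parameters are hypothetical.

```python
# Illustrative sketch (not the paper's analysis): estimate a power-law exponent
# for a graph's degree distribution via log-log linear regression.
import numpy as np
import networkx as nx

def powerlaw_exponent(graph):
    """Fit log(frequency) = a + b*log(degree); -b approximates the exponent."""
    degrees = np.array([d for _, d in graph.degree() if d > 0])
    values, counts = np.unique(degrees, return_counts=True)
    slope, _intercept = np.polyfit(np.log(values), np.log(counts), 1)
    return -slope

# Synthetic scale-free graph standing in for a P2P topology snapshot.
g = nx.barabasi_albert_graph(n=5000, m=3, seed=42)
print(f"estimated power-law exponent: {powerlaw_exponent(g):.2f}")
```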
Systems, Man and Cybernetics | 2006
Jianghua Liang; Vijay K. Vaishnavi; Art Vandenberg
Directories provide a well-defined general mechanism for describing organizational resources such as the resources of the Internet2 higher education research community and the Grid community. Lightweight Directory Access Protocol (LDAP) directory services enable data sharing by defining the information's metadata (schema) and access protocol. Interoperability of directory information between organizations is increasingly important. Improved discovery of directory schemas across organizations, better presentation of their semantic meaning, and fast definition and adoption (reuse) of existing schemas promote interoperability of information resources in directories. This paper focuses on the discovery of related directory object class schemas and in particular on clustering schemas to facilitate discovering relationships and so enable reuse. The results of experiments exploring the use of self-organizing maps (SOMs) to cluster directory object classes at a level comparable to a set of human experts are presented. The results show that it is possible to discover values of the SOM algorithm's parameters so as to cluster directory metadata at a level comparable to human experts.
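A minimal NumPy sketch of the SOM idea used here for clustering schema feature vectors is shown below. It is not the authors' implementation: the schemas are random vectors standing in for encoded object classes, and the grid size, sigma, and learning rate are assumed values.

```python
# Minimal self-organizing map sketch (illustrative only). Directory object-class
# schemas are assumed to be pre-encoded as numeric feature vectors.
import numpy as np

def train_som(data, grid=(4, 4), iters=500, sigma=1.0, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.random((grid[0], grid[1], data.shape[1]))
    gy, gx = np.indices(grid)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: grid cell whose weight vector is closest to x.
        dist = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(np.argmin(dist), grid)
        # Neighbourhood and learning rate shrink over time; nearby cells move toward x.
        decay = 1.0 - t / iters
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * (sigma * decay + 1e-9) ** 2))
        weights += (lr * decay) * h[..., None] * (x - weights)
    return weights

schemas = np.random.default_rng(1).random((30, 12))   # 30 schemas, 12 features each
som_weights = train_som(schemas)
```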
Systems, Man and Cybernetics | 2010
Naixue Xiong; Athanasios V. Vasilakos; Laurence T. Yang; Yi Pan; Cheng-Xiang Wang; Art Vandenberg
With the recent rapid growth of wireless/wired data applications, considerable effort has focused on the design of distributed explicit rate flow control schemes for multi-input-multi-output service. This paper describes two novel wireless/wired multipoint-to-multipoint multicast flow control schemes, based on the distributed self-tuning proportional integral plus derivative (SPID) controller and the distributed self-tuning proportional plus integral (SPI) controller, respectively. The control parameters can be designed to ensure the stability of the control loop in terms of source rate. The distributed explicit rate SPID and SPI controllers are located at the wireless/wired multipoint-to-multipoint multicast source to regulate the transmission rate. We further analyze the theoretical aspects of the proposed algorithm and show how the control mechanism can be used to design a controller that supports wireless/wired multipoint-to-multipoint multicast transmissions. Simulation results demonstrate the efficiency of the proposed scheme in terms of system stability, fast response, low packet loss, and high scalability; they also show that the SPID scheme performs better than the SPI scheme, although it requires more computing time and CPU resources.
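To make the explicit-rate idea concrete, here is a toy discrete PID loop that regulates a source rate against a single bottleneck queue. The gains, plant model, and target are invented for illustration and do not reproduce the SPID/SPI controllers analysed in the paper.

```python
# Toy explicit-rate PID controller (illustrative assumptions, not the paper's design):
# the controller picks a sending rate that keeps a bottleneck queue near a target.
def pid_rate_controller(target_queue=20.0, capacity=12.0, steps=50,
                        kp=0.4, ki=0.1, kd=0.05):
    queue, integral, prev_error = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = target_queue - queue                     # queue-length tracking error
        integral += error
        rate = max(0.0, kp * error + ki * integral + kd * (error - prev_error))
        queue = max(0.0, queue + rate - capacity)        # queue drains 'capacity' per step
        prev_error = error
    return rate, queue

print(pid_rate_controller())   # rate settles near the capacity, queue near the target
```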
ACM Southeast Regional Conference | 2005
Lei Li; Roop G. Singh; Guangzhi Zheng; Art Vandenberg; Vijay K. Vaishnavi; Shamkant B. Navathe
Semantic heterogeneity is becoming increasingly prominent in bioinformatics domains that deal with constantly expanding, dynamic, often very large, datasets from various distributed sources. Metadata is the key component for effective information integration. Traditional approaches for reconciling semantic heterogeneity use standards or mediation-based methods. These approaches have had limited success in addressing the general semantic heterogeneity problem and by themselves are not likely to succeed in bioinformatics domains where one faces the additional complexity of keeping pace with the speed at which data and semantic heterogeneity are being generated. This paper presents a methodology for reconciliation of semantic heterogeneity of metadata in bioinformatics data sources. The approach is based on the proposition that by globally monitoring, clustering, and visualizing bioinformatics metadata across disparately created data sources, patterns of practice can be identified. This can facilitate semantic reconciliation of metadata in current data and mitigate semantic heterogeneity in future data by promoting sharing and reuse of existing metadata. To instantiate the methodology, a research architecture, MicroSEEDS, is presented and its implementation and envisioned uses are discussed.
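As a hypothetical illustration of clustering metadata across sources to surface patterns of practice, the sketch below groups toy metadata field descriptions using TF-IDF and k-means. It is not the MicroSEEDS architecture, and the field descriptions are invented.

```python
# Hypothetical sketch: cluster bioinformatics metadata field descriptions from
# disparately created sources so that similar practices group together.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

fields = [
    "gene symbol identifier",
    "gene id accession number",
    "protein sequence amino acid",
    "amino acid sequence of protein",
    "sample tissue type",
    "tissue sample source type",
]
vectors = TfidfVectorizer().fit_transform(fields)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
for field, label in zip(fields, labels):
    print(label, field)   # fields with the same label are candidates for metadata reuse
```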
Design Science Research in Information Systems and Technology | 2009
Vijay K. Vaishnavi; Art Vandenberg; Yanqing Zhang; Saravanaraj Duraisamy
A practical and scalable web mining solution is needed that can assist the user in processing existing web-based resources to discover specific, relevant information content. This is especially important for researcher communities where data deployed on the World Wide Web are characterized by autonomous, dynamically evolving, and conceptually diverse information sources. The paper describes a systematic design research study that is based on prototyping/evaluation and abstraction using existing and new techniques incorporated as plug-and-play components into a research workbench. The study investigates an approach, DISCOVERY, for using (1) context/perspective information and (2) social networks such as ODP or Wikipedia for designing practical and scalable human-web systems for finding web pages that are relevant and meet the needs and requirements of a user or a group of users. The paper also describes the current implementation of DISCOVERY and its initial use in finding web pages in a targeted web domain. The resulting system arguably meets the common needs and requirements of a group of people based on the information provided by the group in the form of a set of context web pages. The system is evaluated in a scenario in which its assistance is sought by a group of faculty members in finding NSF research grant opportunities to which they should collaboratively respond, utilizing the context provided by their recent publications.
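One simple way to picture the context-driven matching described above is to rank candidate pages by their similarity to a set of context pages, as in the hypothetical sketch below. The page texts, names, and the TF-IDF/cosine approach are illustrative assumptions, not the DISCOVERY implementation.

```python
# Hypothetical sketch: score candidate pages against a group's "context" pages
# (e.g. recent publications) and rank them by average similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

context_pages = [
    "distributed grid computing and job scheduling",
    "clustering directory metadata with self organizing maps",
]
candidate_pages = {
    "grant-page-1": "call for proposals on grid and cloud resource scheduling",
    "grant-page-2": "funding opportunity in marine biology field surveys",
}
texts = context_pages + list(candidate_pages.values())
tfidf = TfidfVectorizer().fit_transform(texts)
context_vecs = tfidf[: len(context_pages)]
candidate_vecs = tfidf[len(context_pages):]
scores = cosine_similarity(candidate_vecs, context_vecs).mean(axis=1)
for (name, _text), score in sorted(zip(candidate_pages.items(), scores),
                                   key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")   # higher score = more relevant to the group's context
```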
IEEE International Conference on Cloud Computing Technology and Science | 2011
Naixue Xiong; Andy Rindos; Michael L. Russell; Kelly P. Robinson; Art Vandenberg; Yi Pan
In this paper, we first propose a system model for a multi-cloud environment, which we define as a network of clouds that may interoperate to serve their individual user bases in each local intra-cloud. Local clouds may share services with other clouds in order to balance load or meet peak demands. Using this model, we propose an effective proportional and integral feedback control scheme, based on control theory, to share limited resources and satisfy the requirements of multiple users wanting fast response and/or high utilisation of cloud resources. Our control scheme is based on self-tuning feedback theory, which considers not only the actual versus target value of resource utilisation, but also the history of application computing rates. We then provide a theoretical analysis of the system stability and give guidelines for selecting feedback control parameters that stabilise resource utilisation at a desirable target level. Simulations have been conducted to demonstrate that the proposed scheme can be an effective multi-cloud computing controller for ensuring fast response and high resource utilisation. Finally, we analyse the distribution of the Georgia State University student technology fee and find that VCL is required to use fewer fees to support more of the required services.
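A minimal sketch of a proportional-integral loop that steers resource allocation toward a utilisation target is given below. The gains, workload drift, and allocation rule are assumed values and this is not the controller analysed in the paper.

```python
# Illustrative PI utilisation controller (assumed gains and workload model):
# adjust allocated capacity so that measured utilisation tracks a target level.
def track_utilisation(target=0.7, steps=40, kp=0.5, ki=0.2):
    allocated, demand = 100.0, 90.0      # hypothetical capacity units and workload
    integral = 0.0
    for _ in range(steps):
        utilisation = min(1.0, demand / allocated)
        error = utilisation - target     # positive error -> allocate more capacity
        integral += error
        allocated = max(demand, allocated * (1.0 + kp * error + ki * integral))
        demand *= 1.01                   # workload drifts upward over time
    return round(utilisation, 3), round(allocated, 1)

print(track_utilisation())               # utilisation converges near the target
```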
International Parallel and Distributed Processing Symposium | 2005
Nova Ahmed; Yi Pan; Art Vandenberg
The multiple genome sequence alignment problem falls in the domain of problems that can be parallelized to address large sequence lengths. Although communication is required for the computation of the aligned sequences, a proper distribution can reduce the overall problem to a set of tasks that are solved independently and then merged. A parallel algorithm for the alignment of multiple genome sequences is described. The algorithm is experimentally evaluated in a distributed Grid environment that provides very scalable and low-cost computation performance. The Grid environment is evaluated with respect to a traditional cluster environment and results are compared to evaluate the effectiveness of a Grid environment for large computational biology problems.
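The divide-and-merge idea can be sketched by treating each pairwise alignment score as an independent task and farming the tasks out to worker processes, standing in for Grid nodes. The toy sequences and the simple Needleman-Wunsch scorer below are illustrative and are not the paper's algorithm.

```python
# Illustrative sketch: pairwise alignment scores are independent tasks that can be
# distributed to workers and merged afterwards (local processes stand in for Grid nodes).
from itertools import combinations
from multiprocessing import Pool

def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score with a linear gap penalty."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [i * gap]
        for j, cb in enumerate(b, 1):
            cur.append(max(prev[j - 1] + (match if ca == cb else mismatch),
                           prev[j] + gap,
                           cur[j - 1] + gap))
        prev = cur
    return prev[-1]

if __name__ == "__main__":
    seqs = ["GATTACA", "GCATGCT", "GATTGCA", "ACTGACC"]   # toy sequences
    pairs = list(combinations(seqs, 2))
    with Pool() as pool:                                  # each pair is an independent task
        scores = pool.starmap(nw_score, pairs)
    for (a, b), s in zip(pairs, scores):                  # "merge" step: collect all scores
        print(a, b, s)
```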
IEEE International Conference on Cloud Computing Technology and Science | 2013
Christopher J. Davia; Stan Gowen; Ginny Ghezzo; Ramon Harris; Maritta Horne; Clayton Potter; Sharon P. Pitt; Art Vandenberg; Naixue Xiong
Cloud computing represents a new field where computing resources can be provided as services and accessed by others from anywhere in the world via the Internet. Cloud computing services and architecture for education are characterised as being fully managed by the universities, provided on demand, and elastic, in that users have as much service as they need at a particular moment. This paper is intended as a resource for institutions that are assessing their institutional capacity and readiness for cloud solutions. Cloud computing encompasses a broad range of concepts, and the distinction between ‘consumer of’ and ‘provider of’ cloud-based resources may be important in creating a larger ecosystem of cloud computing. Several members of the IBM Cloud Academy have outlined a definition and framework for cloud computing services and have drafted a cloud assessment survey for senior leadership planning cloud initiatives. Three case studies are presented on cloud solutions for K-12 and higher education, along with survey results for the three case studies. The conclusion summarises outcomes and considers next steps for IBM Cloud Academy members.
Granular Computing | 2006
Lei Li; Vijay K. Vaishnavi; Art Vandenberg
Directories play an important role in describing resources and enabling information sharing within and among organizations. To communicate effectively, directories must resolve differing structures and vocabularies. This paper proposes a systematic approach to address the interoperability of directories. The approach couples a genetic algorithm with a neural-network-based clustering algorithm, Self-Organizing Maps (SOM), to systematically cluster directory metadata, highlight similar structures, recognize developing patterns of practice, and ultimately promote homogeneity among the directories. To evaluate the effectiveness of the proposed approach, an experiment on Lightweight Directory Access Protocol (LDAP) directory metadata is conducted. The experimental results show that a genetic algorithm can discover parameter values for a SOM algorithm such that the computer clustering results are comparable to those of domain experts. The proposed approach provides an effective mechanism to systematically cluster directory metadata and promote homogeneity among the directories.
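As an illustration of searching SOM parameter values with a genetic algorithm, the toy loop below evolves (sigma, learning rate) pairs against a synthetic fitness surface. In the paper the fitness would be agreement with the experts' clustering; here all constants, including the placeholder optimum, are assumptions.

```python
# Toy genetic algorithm over SOM hyper-parameters (illustrative only).
# The fitness is a synthetic stand-in for "agreement with expert clustering".
import random

def fitness(sigma, lr):
    # Placeholder surface peaking near sigma=1.2, lr=0.4 (hypothetical optimum).
    return -((sigma - 1.2) ** 2 + (lr - 0.4) ** 2)

def evolve(pop_size=20, generations=30, mutation=0.1, seed=0):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 3.0), rng.uniform(0.01, 1.0)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(*g), reverse=True)
        parents = pop[: pop_size // 2]                       # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            (s1, l1), (s2, l2) = rng.sample(parents, 2)
            children.append(((s1 + s2) / 2 + rng.gauss(0, mutation),   # crossover
                             (l1 + l2) / 2 + rng.gauss(0, mutation)))  # + mutation
        pop = parents + children
    return max(pop, key=lambda g: fitness(*g))

print(evolve())   # best (sigma, learning rate) found by the toy GA
```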
IEEE International Conference on Cloud Computing Technology and Science | 2010
Hui Chen; Chunjie Zhou; Yuanqing Qin; Art Vandenberg; Athanasios V. Vasilakos; Naixue Xiong
Industrial Ethernet is promising for the implementation of a cloud-computing-based control system. However, numerous standards organizations and vendors have developed various Industrial Ethernets to satisfy the real-time requirements of field devices. This paper presents a real-time reconfigurable protocol stack to cope with this challenge, introducing an architecture built around dynamic routing and autonomic local scheduling. It uses the deterministic and stochastic Petri net (DSPN) method to model the performance of the producer/consumer-based application model, CSMA/CD-based node access activities, and TDMA-based resource allocation for real-time and non-real-time traffic. Furthermore, the predicted time distribution for evaluating the stability of a control system can be obtained from the proposed DSPN model. It is shown that the DSPN modeling yields good verification analysis and performance prediction results through real experimentation.
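To illustrate the TDMA-based resource allocation mentioned in the abstract, here is a hypothetical slot allocator that reserves deterministic slots for real-time flows and shares the remaining slots round-robin among non-real-time traffic. The flow names and slot counts are invented; this is not the proposed protocol stack.

```python
# Illustrative TDMA-style slot allocation: real-time flows get reserved slots first,
# the rest of the cycle is shared by non-real-time flows (all values hypothetical).
def allocate_slots(cycle_slots, rt_flows, nrt_flows):
    """rt_flows: {name: slots needed per cycle}; nrt_flows: list of flow names."""
    schedule, used = {}, 0
    for name, need in rt_flows.items():              # reserve deterministic RT slots
        if used + need > cycle_slots:
            raise ValueError(f"cycle too short for real-time flow {name}")
        schedule[name] = list(range(used, used + need))
        used += need
    # Remaining slots are handed out round-robin to non-real-time flows.
    for i, slot in enumerate(range(used, cycle_slots)):
        schedule.setdefault(nrt_flows[i % len(nrt_flows)], []).append(slot)
    return schedule

print(allocate_slots(10, {"control_loop": 3, "sensor_sync": 2}, ["logging", "firmware"]))
```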