
Publication


Featured research published by Robert E. McGrath.


International World Wide Web Conference | 1994

A scalable HTTP server: the NCSA prototype

Eric Dean Katz; Michelle Butler; Robert E. McGrath

While the World Wide Web (WWW) may appear to be intrinsically scalable through the distribution of files across a series of decentralized servers, there are instances where this form of load distribution is both costly and resource intensive. In such cases it may be necessary to administer a centrally located and managed HTTP server. Given the exponential growth of the Internet in general, and the WWW in particular, it is increasingly difficult for persons and organizations to properly anticipate their future HTTP server needs, in both human resources and hardware requirements. The purpose of this paper is to outline the methodology used at the National Center for Supercomputing Applications in building a scalable World Wide Web server. The implementation described in the following pages allows for dynamic scalability by rotating through a pool of HTTP servers that are alternately mapped to the hostname alias of the WWW server. The key components of this configuration include: (1) a cluster of identically configured HTTP servers; (2) use of Round-Robin DNS for distributing HTTP requests across the cluster; (3) use of a distributed file system mechanism for maintaining a synchronized set of documents across the cluster; and (4) a method for administering the cluster. The result of this design is that we are able to add any number of servers to the available pool, dynamically increasing the load capacity of the virtual server. Implementation of this concept has eliminated perceived and real vulnerabilities in our single-server model that had negatively impacted our user community. This particular design has also eliminated the single point of failure inherent in our single-server configuration, increasing the likelihood of continued and sustained availability. While the load is currently distributed in an unpredictable and, at times, deleterious manner, early implementation and maintenance of this configuration have proven promising and effective.
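The rotation scheme described in the abstract is straightforward to picture. Below is a minimal sketch of the round-robin idea in Python, with a hypothetical address pool standing in for the DNS A records; in the NCSA design the rotation is performed by Round-Robin DNS itself, not by application code.

```python
import itertools

# Hypothetical pool of identically configured HTTP servers (addresses are
# illustrative). In the NCSA setup, Round-Robin DNS hands out these
# addresses in turn for the www hostname alias.
SERVER_POOL = ["198.51.100.1", "198.51.100.2", "198.51.100.3"]

_rotation = itertools.cycle(SERVER_POOL)

def resolve_www_alias() -> str:
    """Model a DNS lookup: return the next server address in rotation."""
    return next(_rotation)

# Successive "lookups" spread requests across the cluster; adding an
# address to SERVER_POOL raises capacity without touching clients.
for _ in range(4):
    print(resolve_www_alias())
```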


IEEE Computer | 1995

NCSA's World Wide Web server: design and performance

Thomas T. Kwan; Robert E. McGrath; Daniel A. Reed

To support continued growth, WWW servers must manage a multigigabyte (in some instances multiterabyte) database of multimedia information while concurrently serving multiple request streams. This places demands on the servers' underlying operating systems and file systems that lie far outside today's normal operating regime. Simply put, WWW servers must become more adaptive and intelligent. The first step on this path is understanding extant access patterns and responses. The article examines extant Web access patterns with the aim of developing more efficient file-caching and prefetching strategies.
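As an illustration of the kind of file-caching strategy such access-pattern analysis informs, here is a minimal least-recently-used cache sketch in Python; this is a generic policy shown for context, not the strategy derived in the article.

```python
from collections import OrderedDict

class LRUFileCache:
    """Toy least-recently-used cache for served documents (illustrative)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._entries: OrderedDict[str, bytes] = OrderedDict()

    def get(self, path: str):
        if path not in self._entries:
            return None  # miss: the server would fall back to the file system
        self._entries.move_to_end(path)  # mark as most recently used
        return self._entries[path]

    def put(self, path: str, data: bytes) -> None:
        self._entries[path] = data
        self._entries.move_to_end(path)
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict least recently used
```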


International Provenance and Annotation Workshop | 2008

The Open Provenance Model: An Overview

Luc Moreau; Juliana Freire; Joe Futrelle; Robert E. McGrath; Jim Myers; Patrick R. Paulson

Provenance is well understood in the context of art or digital libraries, where it respectively refers to the documented history of an art object, or the documentation of processes in a digital object's life cycle. Interest in provenance in the e-science community [12] is also growing, since provenance is perceived as a crucial component of workflow systems that can help scientists ensure the reproducibility of their scientific analyses and processes [2,4].
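OPM describes provenance as a directed graph of artifacts, processes, and agents connected by causal edges such as 'used' and 'wasGeneratedBy'. A minimal sketch of such a graph in Python follows; the node names are invented for illustration.

```python
# OPM-style provenance graph: (source, relation, target) edges over
# artifacts, processes, and agents. Node names are hypothetical.
edges = []

def record(source: str, relation: str, target: str) -> None:
    edges.append((source, relation, target))

# A process used an input artifact, was controlled by an agent, and
# generated a result artifact derived from the input.
record("process:analysis", "used", "artifact:raw_data")
record("process:analysis", "wasControlledBy", "agent:scientist")
record("artifact:result", "wasGeneratedBy", "process:analysis")
record("artifact:result", "wasDerivedFrom", "artifact:raw_data")

# Reproducibility query: what does the result causally depend on?
print([(rel, tgt) for src, rel, tgt in edges if src == "artifact:result"])
```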


Knowledge Engineering Review | 2003

Use of ontologies in a pervasive computing environment

Anand Ranganathan; Robert E. McGrath; Roy H. Campbell; M. Dennis Mickunas

Ontologies are entering widespread use in many areas such as knowledge and content management, electronic commerce and the Semantic Web. In this paper we show how the use of ontologies has helped us overcome some important problems in the development of pervasive computing environments. We have integrated ontologies and Semantic Web technology into our pervasive computing infrastructure. Our investigations have shown that Semantic Web technology can be integrated into our CORBA-based infrastructure to augment several important services. This work suggests a number of requirements for future research in the development of ontologies, reasoners, languages and interfaces.
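As a flavor of how such ontologies look in practice, the sketch below builds a tiny device ontology with the rdflib Python library; the classes and properties are invented for illustration and are not the vocabulary used in the paper's CORBA-based infrastructure.

```python
from rdflib import Graph, Namespace, RDF, RDFS

# Illustrative pervasive-computing vocabulary (hypothetical namespace).
EX = Namespace("http://example.org/pervasive#")

g = Graph()
g.bind("ex", EX)

# A small class hierarchy: a projector is a kind of device.
g.add((EX.Device, RDF.type, RDFS.Class))
g.add((EX.Projector, RDFS.subClassOf, EX.Device))

# An instance, located in a particular room.
g.add((EX.projector1, RDF.type, EX.Projector))
g.add((EX.projector1, EX.locatedIn, EX.room2401))

# A context query an environment service might issue:
# which entities are located in room 2401?
for subject in g.subjects(EX.locatedIn, EX.room2401):
    print(subject)
```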


ACM International Conference on Digital Libraries | 1999

Digital library technology for locating and accessing scientific data

Robert E. McGrath; Joe Futrelle; Raymond Louis Plante; Damien Guillaume

In this paper we describe our efforts to bring scientific data into the digital library. This has required extending the standard WWW and extending metadata standards far beyond the Dublin Core. Our system demonstrates this technology for real scientific data from astronomy.
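To make the "beyond Dublin Core" point concrete, the sketch below pairs standard Dublin Core elements with hypothetical astronomy-specific fields; the extended field names are invented here, not the schema the paper defines.

```python
# A metadata record mixing standard Dublin Core elements with
# domain-specific extensions (the astro:* fields are hypothetical).
record = {
    # Dublin Core elements
    "dc:title": "Radio continuum image of M51",
    "dc:creator": "Example Observatory",
    "dc:type": "Dataset",
    "dc:format": "application/fits",
    # Scientific metadata Dublin Core alone cannot express
    "astro:ra_deg": 202.47,        # right ascension of field center
    "astro:dec_deg": 47.20,        # declination of field center
    "astro:frequency_ghz": 1.4,    # observing frequency
}

# A digital-library search can then filter on the extended fields,
# e.g. all datasets observed below 5 GHz.
print(record["astro:frequency_ghz"] < 5.0)
```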


Human Factors in Computing Systems | 2009

Species-appropriate computer mediated interaction

Robert E. McGrath

Given the importance of our non-human companions, do we not want to extend social media to our non-human co-species? If human-computer interfaces should be designed for "Anyone. Anywhere." (the theme of CHI 2001), then why not for all species? Recent pioneering efforts have shown that computer-mediated interactions between humans and dogs, cats, chickens, cows, hamsters, and other species are technically possible. These efforts excite the imagination and challenge our understanding of the basic nature of computer-mediated interaction.


International Conference on Supercomputing | 1988

Using memory in the Cedar system

Robert E. McGrath; Perry A. Emrath

The design of the virtual memory system for the Cedar multiprocessor under construction at the University of Illinois is discussed. The Cedar architecture features a hierarchy of memory, some shared by all processors and some shared by subsets of processors. The Xylem operating system is based on Alliant Computer Systems' CONCENTRIX operating system, which is in turn based on 4.2BSD UNIX. Xylem supports multi-tasking and demand paging of parts of the memory hierarchy into a linear virtual address space. Memory may be private to a task or shared between all the tasks. The locality and attributes of a page may be modified during the execution of a program. Examples of how these mechanisms can be used are discussed.
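The private-versus-shared distinction has a rough modern analogue in POSIX mapping flags. The Python sketch below illustrates the idea on Unix; it is an analogy only, not Xylem's actual interface, and the file path is illustrative.

```python
import mmap
import os

# Map one file page two ways (Unix only; path is illustrative).
fd = os.open("/tmp/xylem_demo", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, mmap.PAGESIZE)

# MAP_SHARED: writes are visible to every process mapping the file,
# akin to Cedar memory shared between all tasks.
shared = mmap.mmap(fd, mmap.PAGESIZE, flags=mmap.MAP_SHARED)

# MAP_PRIVATE: copy-on-write, so writes stay local to this process,
# akin to memory private to a single task.
private = mmap.mmap(fd, mmap.PAGESIZE, flags=mmap.MAP_PRIVATE)

shared[:5] = b"hello"
private[:5] = b"world"  # does not disturb the shared mapping
print(shared[:5], private[:5])  # b'hello' b'world'
```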


International Middleware Conference | 2009

Semantic middleware for e-science knowledge spaces

Joe Futrelle; Jeff Gaynor; Joel Plutchak; James D. Myers; Robert E. McGrath; Peter Bajcsy; Jason Kastner; Kailash Kotwani; Jong Sung Lee; Luigi Marini; Rob Kooper; Terry McLaren; Yong Liu

The Tupelo semantic content management middleware implements Knowledge Spaces that enable scientists to locate, use, link, annotate, and discuss data and metadata as they work with existing applications in distributed environments. Tupelo is built using a combination of commonly used Semantic Web technologies for metadata management, content management technologies for data management, and workflow technologies for management of computation, and it can interoperate with other tools using a variety of standard interfaces and a client and desktop API. Tupelo's primary function is to facilitate interoperability, providing a Knowledge Space view of distributed, heterogeneous resources such as institutional repositories, relational databases, and semantic web stores. Knowledge Spaces have driven recent work creating e-Science cyberenvironments to serve distributed, active scientific communities. Tupelo-based components deployed in desktop applications, on portals, and in AJAX applications interoperate to allow researchers to develop, coordinate, and share datasets, documents, and computational models, while preserving process documentation and other contextual information needed to produce a complete and coherent research record suitable for distribution and archiving.
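Tupelo's Knowledge Space view can be pictured as one metadata query interface laid over several heterogeneous stores. The sketch below models that facade pattern in Python; the class names and interfaces are invented for illustration and are not the Tupelo API.

```python
class TripleStore:
    """One backend (e.g. a repository or database) holding metadata triples."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        return {t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)}

class KnowledgeSpace:
    """A single query view over many distributed backends."""

    def __init__(self, *stores):
        self.stores = stores

    def match(self, s=None, p=None, o=None):
        results = set()
        for store in self.stores:
            results |= store.match(s, p, o)
        return results

repo, db = TripleStore(), TripleStore()
repo.add("dataset:flood2008", "dc:creator", "J. Doe")  # hypothetical data
db.add("dataset:flood2008", "ex:derivedFrom", "dataset:gauges")

ks = KnowledgeSpace(repo, db)
print(ks.match(s="dataset:flood2008"))  # both triples, one query
```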


Adaptive and Reflective Middleware | 2007

Cyberenvironments: adaptive middleware for scientific cyberinfrastructure

James D. Myers; Robert E. McGrath

The principles of adaptive and reflective software (abstract interfaces, exposed metadata, instrumentation) can be applied to create flexible, scalable scientific cyberinfrastructure and to develop Cyberenvironments to support scientific research. Informed by an understanding of scientific processes as a discourse, we argue that a confluence of ideas from adaptive and reflective software, traditional scientific information management, and grid/web scalable architectures provides a robust foundation for explicitly and coherently managing data, processes, and models using standardized technologies within Cyberenvironments that present an evolvable, domain-oriented view to the user.
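A minimal sketch of the "abstract interfaces plus exposed metadata" pattern the abstract names, in Python with hypothetical class names; it shows middleware selecting a component by inspecting its self-description rather than hard-coding a type.

```python
import abc

class Service(abc.ABC):
    """Abstract interface; subclasses expose metadata for reflection."""

    metadata: dict = {}

    @abc.abstractmethod
    def run(self, data):
        ...

class FloodModel(Service):
    # Self-describing metadata a registry or scheduler can inspect.
    metadata = {"domain": "hydrology", "inputs": ["rainfall"], "version": "0.1"}

    def run(self, data):
        return sum(data) / len(data)  # stand-in for a real computation

def select(services, domain):
    """Adapt by reflection: pick services whose metadata matches."""
    return [s for s in services if s.metadata.get("domain") == domain]

print(select([FloodModel()], "hydrology"))
```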


Concurrency and Computation: Practice and Experience | 2011

Semantic middleware for e-Science knowledge spaces

Joe Futrelle; Jeff Gaynor; Joel Plutchak; James D. Myers; Robert E. McGrath; Peter Bajcsy; Jason Kastner; Kailash Kotwani; Jong Sung Lee; Luigi Marini; Rob Kooper; Terry McLaren; Yong Liu

The Tupelo semantic content management middleware implements Knowledge Spaces that enable scientists to integrate information into a comprehensive research record as they work with existing tools and domain-specific applications. Knowledge Spaces combine approaches that have demonstrated success in automating parts of this integration activity, including content management systems for domain-neutral management of data, workflow technologies for management of computation and analysis, and semantic web technologies for extensible, portable, citable management of descriptive information and other metadata. Tupelo's 'Context' facility and its associated semantic operations allow existing data representations and tools to be plugged in, and also provide a semantic 'glue' of important associative relationships that span the research record, such as provenance, social networks, and annotation. Tupelo has enabled recent work creating e-Science cyberenvironments to serve distributed, active scientific communities, allowing researchers to develop, coordinate, and share datasets, documents, and computational models, while preserving process documentation and other contextual information needed to produce an integrated research record suitable for distribution and archiving.
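The associative 'glue' described above (provenance, social networks, annotation) can be pictured as a handful of RDF triples linking parts of a research record. A minimal sketch with rdflib, using an invented vocabulary rather than Tupelo's actual Context schema:

```python
from rdflib import Graph, Literal, Namespace

# Hypothetical research-record vocabulary.
EX = Namespace("http://example.org/record#")

g = Graph()
g.add((EX.figure3, EX.derivedFrom, EX.simulation_run_42))  # provenance
g.add((EX.simulation_run_42, EX.performedBy, EX.alice))    # social network
g.add((EX.figure3, EX.hasAnnotation,
       Literal("Peak discharge looks low")))               # annotation

# Walking the graph recovers the context behind a published figure.
for predicate, obj in g.predicate_objects(EX.figure3):
    print(predicate, obj)
```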

Collaboration


Dive into Robert E. McGrath's collaborations.

Top Co-Authors

Patrick R. Paulson, Pacific Northwest National Laboratory
Luc Moreau, University of Southampton
Carl Lagoze, University of Michigan
Steve Downey, University of South Florida