Shirley Browne
University of Tennessee
Publications
Featured research published by Shirley Browne.
ieee international conference on high performance computing data and analytics | 2000
Shirley Browne; Jack J. Dongarra; Nathan Garner; George T. S. Ho; Philip Mucci
The purpose of the PAPI project is to specify a standard application programming interface (API) for accessing hardware performance counters available on most modern microprocessors. These counters exist as a small set of registers that count events, which are occurrences of specific signals and states related to the processor’s function. Monitoring these events facilitates correlation between the structure of source/object code and the efficiency of the mapping of that code to the underlying architecture. This correlation has a variety of uses in performance analysis, including hand tuning, compiler optimization, debugging, benchmarking, monitoring, and performance modeling. In addition, it is hoped that this information will prove useful in the development of new compilation technology as well as in steering architectural development toward alleviating commonly occurring bottlenecks in high performance computing.
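As a rough illustration of the kind of instrumentation the abstract describes, the sketch below uses the PAPI C interface to count total instructions and cycles around a region of code. Error handling is abbreviated, and the preset events actually available depend on the underlying processor.

#include <stdio.h>
#include <stdlib.h>
#include <papi.h>

int main(void)
{
    int event_set = PAPI_NULL;
    long long counts[2];

    /* Initialize the PAPI library and confirm the expected version. */
    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI initialization failed\n");
        return EXIT_FAILURE;
    }

    /* Create an event set and add two preset events:
       total instructions completed and total cycles. */
    PAPI_create_eventset(&event_set);
    PAPI_add_event(event_set, PAPI_TOT_INS);
    PAPI_add_event(event_set, PAPI_TOT_CYC);

    PAPI_start(event_set);

    /* Region of interest: a simple loop that generates countable work. */
    volatile double sum = 0.0;
    for (int i = 0; i < 1000000; i++)
        sum += i * 0.5;

    PAPI_stop(event_set, counts);

    printf("Instructions: %lld  Cycles: %lld\n", counts[0], counts[1]);
    return EXIT_SUCCESS;
}

Dividing the two counters gives a rough instructions-per-cycle figure for the instrumented region, the sort of correlation between source code and hardware behavior the project is aimed at.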
symposium on software reusability | 1995
Shirley Browne; Jack J. Dongarra; Stan Green; Keith Moore; Theresa Pepin; Tom Rowan; Reed Wade
A location-independent naming system for network resources has been designed to facilitate organization and description of software components accessible through a virtual distributed repository. This naming system enables easy and efficient searching and retrieval, and it addresses many of the consistency, authenticity, and integrity issues involved with distributed software repositories by providing mechanisms for grouping resources and for authenticity and integrity checking. This paper details the design of the naming system and describes how the system fits into the development of the National HPCC Software Exchange, a virtual software repository that has the goal of providing access to reusable software components for high-performance computing.
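The fragment below is a loose sketch of the idea, not the actual naming system: a location-independent name maps to several physical locations plus a published checksum used for integrity checking. All names, URLs, and the checksum value are made up for illustration.

#include <stdio.h>
#include <string.h>

/* Hypothetical catalog entry: one logical name, several mirrors,
   and a checksum recorded for integrity verification. */
typedef struct {
    const char *name;          /* location-independent resource name */
    const char *locations[3];  /* mirror URLs holding the resource   */
    const char *checksum;      /* published checksum for the file    */
} resource_record;

static const resource_record catalog[] = {
    { "nhse/linalg/solver-1.0",
      { "ftp://siteA.example/solver-1.0.tar",
        "http://siteB.example/solver-1.0.tar",
        NULL },
      "3a7bd3e2360a3d29eea436fcfb7e44c7" },
};

/* Resolve a logical name to its catalog entry, if any. */
static const resource_record *resolve(const char *name)
{
    for (size_t i = 0; i < sizeof(catalog) / sizeof(catalog[0]); i++)
        if (strcmp(catalog[i].name, name) == 0)
            return &catalog[i];
    return NULL;
}

/* Integrity check: compare a checksum computed after retrieval
   with the checksum published in the catalog. */
static int verify(const resource_record *r, const char *computed)
{
    return strcmp(r->checksum, computed) == 0;
}

int main(void)
{
    const resource_record *r = resolve("nhse/linalg/solver-1.0");
    if (r != NULL)
        printf("retrieve from %s, then verify against checksum %s (verify() checks a computed digest)\n",
               r->locations[0], r->checksum);
    return 0;
}

Because the name never encodes a host or path, any mirror can serve the resource, and the checksum ties whatever copy was retrieved back to the cataloged original.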
international conference on software engineering | 1997
Shirley Browne; James W. Moore
The Reuse Library Interoperability Group (RIG) was formed in 1991 for the purpose of drafting standards enabling the interoperation of software reuse libraries. At that time, prevailing wisdom among many reuse library operators was that each should be a stand-alone operation. Many operators saw a need for only a single library, their own, and most strove to provide the most general possible services to appeal to a broad community of users. The ASSET program, initiated by the Advanced Research Projects Agency STARS program, was the first to make the claim that it should properly be one part of a network of interoperating libraries. Shortly thereafter, the RIG was formed, initially as a collaboration between the STARS program and the Air Force RAASP program, but growing within six months to a self-sustaining cooperation among twelve chartering organizations. The RIG has grown to include over twenty members from government, industry, and academic reuse libraries. It has produced a number of technical reports and proposed interoperability standards, some of which are described in this report.
acm international conference on digital libraries | 1998
Shirley Browne; Jack J. Dongarra; Jeff Horner; Paul McMahan; Scott Wells
Over the past several years, network-accessible repositories have been developed by various academic, government, and industrial organizations to provide access to software and related resources. Allowing distributed maintenance of these repositories while enabling users to access resources from multiple repositories via a single interface has brought about the need for interoperation. Concerns about intellectual property rights and export regulations have brought about the need for access control. This paper describes technologies for interoperation and access control that have been developed as part of the National High-performance Software Exchange (NHSE) project, as well as their deployment in a freely available repository maintainer's toolkit called Repository in a Box. The approach to interoperation has been to participate in the development of and to implement an IEEE standard data model for software catalog records. The approach to access control has been to extend the data model in the area of intellectual property rights and to implement access control mechanisms of varying strengths, ranging from email address verification to X.509 certificates, that enforce software distribution policies specified via the data model. Although they have been developed within the context of software repositories, these technologies should be applicable to distributed digital libraries in general.
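A minimal sketch of the policy-enforcement idea follows, assuming illustrative field and level names rather than the actual IEEE data model: each catalog record carries rights metadata and a minimum credential strength, and the repository releases the software only to requesters whose credential meets it.

#include <stdio.h>

/* Illustrative credential strengths, ordered weakest to strongest. */
typedef enum {
    ACCESS_PUBLIC,          /* no verification required             */
    ACCESS_EMAIL_VERIFIED,  /* requester's email address verified   */
    ACCESS_X509_CERTIFIED   /* requester holds an X.509 certificate */
} access_level;

/* Illustrative catalog record carrying rights metadata and the
   minimum credential required for download. */
typedef struct {
    const char *title;
    const char *version;
    const char *rights;      /* intellectual-property / export terms */
    access_level required;   /* minimum credential to download       */
} catalog_record;

/* Enforce the distribution policy: release only if the requester's
   credential is at least as strong as the record requires. */
static int may_download(const catalog_record *rec, access_level credential)
{
    return credential >= rec->required;
}

int main(void)
{
    catalog_record rec = { "ExampleSolver", "2.1",
                           "Export-controlled; verified users only",
                           ACCESS_X509_CERTIFIED };

    printf("email-verified user allowed: %d\n",
           may_download(&rec, ACCESS_EMAIL_VERIFIED));   /* prints 0 */
    printf("X.509-certified user allowed: %d\n",
           may_download(&rec, ACCESS_X509_CERTIFIED));   /* prints 1 */
    return 0;
}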
computational science and engineering | 1995
Shirley Browne; Jack J. Dongarra; Stan Green; Keith Moore; Tom Rowan; Reed Wade; Geoffrey C. Fox; Kenneth A. Hawick; Ken Kennedy; J. Pool; R. Stevens; B. Ogson; T. Disz
Helping the high-performance computing and communications (HPCC) community to share software and information, the National HPCC Software Exchange (NHSE) is an Internet-accessible resource that promotes the exchange of software and information among those involved with HPCC. Now in its infancy, the NHSE will link varied discipline-oriented repositories of software and documents, and encourage Grand Challenge teams and other members of the HPCC community to contribute to these repositories and use them. By acting as a national online library of software that makes widely distributed materials available through one place, the exchange will cut down the amount of time, talent and money spent reinventing the wheel. The target audiences for the NHSE include scientists and engineers in diverse HPCC application fields, computer scientists, users of government and academic supercomputer centers, and industrial users.
Archive | 1994
Shirley Browne; Stan Green; Keith Moore; Reed Wade; Jack J. Dongarra; Tom Rowan
The Netlib repository, maintained by the University of Tennessee and Oak Ridge National Laboratory, contains freely available software, documents, and databases of interest to the numerical, scientific computing, and other communities. This report includes both the Netlib User's Guide and the Netlib System Manager's Guide, and contains information about Netlib's databases, interfaces, and system implementation. The Netlib repository's databases include the Performance Database, the Conferences Database, and the NA-NET mail forwarding and Whitepages Databases. A variety of user interfaces enable users to access the Netlib repository in the manner most convenient and compatible with their networking capabilities. These interfaces include the Netlib email interface, the Xnetlib X Windows client, the netlibget command-line TCP/IP client, anonymous FTP, anonymous RCP, and gopher.
european pvm mpi users group meeting on recent advances in parallel virtual machine and message passing interface | 1998
Shirley Browne
Versatile and easy-to-use parallel debuggers and performance analysis tools are crucial for the development of correct and efficient high performance applications. Although vendors of HPC platforms usually offer debugging and performance tools in some form, it is desirable to have the same interface across multiple platforms so that the user does not have to learn a different tool for each platform. Furthermore, a tool should have an easy-to-use interface that intuitively supports the debugging and performance analysis tasks the user needs to carry out, as well as the parallel programming language and paradigm being used. This paper describes a survey and evaluation of cross-platform debugging and performance analysis tools. In addition, we describe a current project which is developing a cross-platform API for accessing hardware performance counters. This paper necessarily represents a snapshot in time and as such will become out-of-date as new tools and new versions of existing tools are released. Current information and up-to-date evaluations of parallel debugging and performance analysis tools may be found at the Parallel Tools Library web site at http://www.nhse.org/ptlib/.
Lecture Notes in Computer Science | 1993
Shirley Browne
Although CD-ROM technology provides local mass storage for static information, distributed databases will be crucial for accessing information which is constantly changing, inherently distributed, or accessed infrequently by a given user. As the amount of information available threatens to overwhelm us, multimedia offers a way of increasing the machine-to-human bandwidth through the use of images, animation, sound, and video, as opposed to purely textual display. The economics of bandwidth sharing argue for use of packet-switching, as opposed to circuit-switching. Motivating the development of wide-area multimedia information systems will be the desire for large-scale collaboration on the Grand Challenge problems, saving lives and reducing costs through the use of medical databases, and consumer demand for commercial applications such as video browsing and home shopping. For example, Project Sequoia 2000, funded by Digital Equipment Corporation at the University of California, involves work on distributed database management of large constantly changing global change datasets, along with network facilities for accessing, visualizing, and analyzing the data [300]. Distributed medical databases which allow a doctor handling an emergency to instantly review a patient's medical records remotely will greatly improve the quality of emergency care. Applications of video browsing, ranging from computer dating to long-distance real estate services [281], will provide convenience and cost savings to consumers. Libraries will become important users and providers of multimedia information services. Although libraries have traditionally been repositories of printed information,
2nd International Conference on Vector and Parallel Processing - Systems and Applications, VECPAR 1996 | 1997
Jack J. Dongarra; Shirley Browne; Henri Casanova
This paper describes two projects underway to provide users with access to high performance computing technologies. One effort, the National HPCC Software Exchange, is providing a single point of entry to a distributed collection of domain-specific repositories. These repositories collect, catalog, evaluate, and provide access to software in their specialized domains. The NHSE infrastructure allows these repositories to interoperate with each other and with the top-level NHSE interface. Another effort is the NetSolve project which is a client-server application designed to solve computational science problems over a network. Users may access NetSolve computational servers through C, Fortran, MATLAB, or World Wide Web interfaces. An interesting intersection between the two projects would be the use of the NetSolve system by a domain-specific repository to provide access to software without the need for users to download and install the software on their own systems.
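The fragment below is a hypothetical illustration of the client side of such a client-server arrangement, not the real NetSolve API or wire protocol: the client connects to an assumed computational server, sends a problem description, and reads back the result, so no numerical library needs to be installed locally.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    /* Made-up request format naming a problem and its inputs. */
    const char *request = "solve n=2 a=2,0,0,2 b=4,6\n";
    char reply[256];

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    /* Assumed address and port of a computational server. */
    struct sockaddr_in server = { 0 };
    server.sin_family = AF_INET;
    server.sin_port = htons(9000);
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(fd, (struct sockaddr *)&server, sizeof(server)) == 0) {
        /* Ship the problem to the server and wait for its answer. */
        write(fd, request, strlen(request));
        ssize_t n = read(fd, reply, sizeof(reply) - 1);
        if (n > 0) {
            reply[n] = '\0';
            printf("server reply: %s", reply);
        }
    }
    close(fd);
    return 0;
}

The appeal for a domain-specific repository, as the abstract notes, is that the heavy numerical software lives only on the server; the client needs nothing beyond this thin request-and-reply layer.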
parallel computing | 1996
Jack J. Dongarra; Shirley Browne; Henri Casanova
This paper describes two projects underway to provide users with access to high performance computing technologies. One effort, the National HPCC Software Exchange, is providing a single point of entry to a distributed collection of domain-specific repositories. These repositories collect, catalog, evaluate, and provide access to software in their specialized domains. The NHSE infrastructure allows these repositories to interoperate with each other and with the top-level NHSE interface. Another effort is the NetSolve project which is a client-server application designed to solve computational science problems over a network. Users may access NetSolve computational servers through C, Fortran, MATLAB, or World Wide Web interfaces. An interesting intersection between the two projects would be the use of the NetSolve system by a domain-specific repository to provide access to software without the need for users to download and install the software on their own systems.