Richard J. Friedrich
Hewlett-Packard
Publications
Featured research published by Richard J. Friedrich.
IEEE Internet Computing | 2005
Ratnesh Sharma; Cullen E. Bash; Chandrakant D. Patel; Richard J. Friedrich; Jeffrey S. Chase
Internet-based applications and their resulting multitier distributed architectures have changed the focus of design for large-scale Internet computing. Internet server applications execute in a horizontally scalable topology across hundreds or thousands of commodity servers in Internet data centers. Increasing scale and power density significantly impact the data center's thermal properties. Effective thermal management is essential to the robustness of mission-critical applications. Internet service architectures can address multisystem resource management as well as thermal management within data centers.
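A minimal sketch of the kind of temperature-aware workload placement this abstract alludes to. The greedy heuristic, server names, sensor readings, and threshold below are illustrative assumptions, not the algorithm from the paper:

```python
# Illustrative sketch only: a greedy, temperature-aware placement heuristic.
# Server names, temperatures, and the 30 C threshold are assumptions for
# illustration and are not taken from the paper.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    inlet_temp_c: float   # current inlet temperature reported by the rack sensor
    load: int = 0         # number of requests currently assigned

def place_request(servers: list[Server], temp_limit_c: float = 30.0) -> Server:
    """Assign a request to the coolest, least-loaded server under the temperature limit."""
    eligible = [s for s in servers if s.inlet_temp_c < temp_limit_c] or servers
    target = min(eligible, key=lambda s: (s.inlet_temp_c, s.load))
    target.load += 1
    return target

servers = [Server("blade-01", 27.5), Server("blade-02", 31.2), Server("blade-03", 25.9)]
print(place_request(servers).name)   # -> blade-03, the coolest eligible server
```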
measurement and modeling of computer systems | 2000
Martin F. Arlitt; Ludmila Cherkasova; John Dilley; Richard J. Friedrich; Tai Jin
The continued growth of the World-Wide Web and the emergence of new end-user technologies such as cable modems necessitate the use of proxy caches to reduce latency, network traffic and Web server loads. Current Web proxy caches utilize simple replacement policies to determine which files to retain in the cache. We utilize a trace of client requests to a busy Web proxy in an ISP environment to evaluate the performance of several existing replacement policies and of two new, parameterless replacement policies that we introduce in this paper. Finally, we introduce Virtual Caches, an approach for improving the performance of the cache for multiple metrics simultaneously.
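A small sketch of the trace-driven evaluation methodology the abstract describes, using a plain LRU cache as the replacement policy. The paper's parameterless policies and Virtual Caches are not reproduced here; the trace below is made up for illustration:

```python
# Illustrative sketch: trace-driven simulation of a simple LRU proxy cache.
# Only the general trace-replay methodology is shown, not the paper's policies.
from collections import OrderedDict

def simulate_lru(trace, cache_size_bytes):
    """Replay (url, size) requests through an LRU cache and return the hit rate."""
    cache = OrderedDict()   # url -> size, ordered from least to most recently used
    used = 0
    hits = 0
    for url, size in trace:
        if url in cache:
            hits += 1
            cache.move_to_end(url)               # refresh recency on a hit
        else:
            cache[url] = size
            used += size
            while used > cache_size_bytes:       # evict least recently used entries
                _, evicted_size = cache.popitem(last=False)
                used -= evicted_size
    return hits / len(trace)

trace = [("/a.html", 2_000), ("/b.gif", 50_000), ("/a.html", 2_000), ("/c.js", 10_000)]
print(f"hit rate: {simulate_lru(trace, cache_size_bytes=60_000):.2f}")
```

Comparing policies then amounts to replaying the same trace with different eviction rules and reporting hit rate and byte hit rate side by side.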
Performance Evaluation | 1994
Tracy Sienknecht; Richard J. Friedrich; Joseph J. Martinka; Peter M. Friedenbach
This paper explores the implications of distributed data on the design of data storage management systems. We developed file system measurement programs that collected static file system data from commercial UNIX systems. This sampling of 46 systems, 267 file systems, 151,000 directories, and 2,300,000 files is larger than any previously reported study. We developed an analysis technique termed the collective cumulative distribution function and applied it to collections of file systems. The results provide insight into file system characteristics and the design of hierarchical storage management systems. Important factors influencing performance are: (1) small files dominate in count but large files dominate secondary storage consumption, (2) file I/O is biased toward reads rather than writes, and (3) directory-based storage management is problematic.
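A tiny sketch of the kind of size-distribution analysis behind finding (1). This is a plain empirical summary, not the paper's collective cumulative distribution function, and the sizes and the 8 KB "small file" threshold are assumptions for illustration:

```python
# Illustrative sketch: how small files can dominate the count while large files
# dominate storage. The sizes below are made-up examples, not data from the study.
def size_distribution(sizes_bytes, small_threshold=8 * 1024):
    sizes = sorted(sizes_bytes)
    total_bytes = sum(sizes)
    small = [s for s in sizes if s <= small_threshold]
    count_share = len(small) / len(sizes)     # fraction of files that are "small"
    byte_share = sum(small) / total_bytes     # fraction of bytes they occupy
    return count_share, byte_share

sizes = [512, 1_024, 2_048, 4_096, 6_000, 8_192, 50_000_000, 200_000_000]
count_share, byte_share = size_distribution(sizes)
print(f"files <= 8 KB: {count_share:.0%} of files, {byte_share:.2%} of bytes")
```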
Open Distributed Processing | 1995
Richard J. Friedrich; Joseph J. Martinka; Tracy Sienknecht; Steve Saunders
Successful deployment of open distributed processing requires integrated performance management facilities. This paper describes measurement and modeling technologies that provide quality of service (QoS) measures and projections for distributed applications. The vital role of performance instrumentation and modeling is applied to the Reference Model for Open Distributed Processing. We discuss an architecture and prototype for an efficient measurement infrastructure for heterogeneous distributed environments. We present an application model useful for application design, deployment and capacity planning. We demonstrate that integrated measurement and modeling yields the QoS measures that guide application deployment and increase management capability.
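A minimal sketch of response-time instrumentation in the spirit of the measurement infrastructure described here. It is a generic Python wrapper with assumed metric and function names, not the paper's architecture:

```python
# Illustrative sketch: lightweight response-time instrumentation for a service call.
# Metric and function names are assumptions; this is not the paper's infrastructure.
import functools
import statistics
import time

samples: dict[str, list[float]] = {}

def measured(name):
    """Record the latency of each call under the given metric name."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                samples.setdefault(name, []).append(time.perf_counter() - start)
        return wrapper
    return decorator

@measured("lookup_customer")
def lookup_customer(customer_id):      # stand-in for a remote service call
    time.sleep(0.01)
    return {"id": customer_id}

for i in range(20):
    lookup_customer(i)

lat = samples["lookup_customer"]
print(f"mean {statistics.mean(lat)*1000:.1f} ms, "
      f"p95 {statistics.quantiles(lat, n=20)[-1]*1000:.1f} ms")
```

Aggregated percentiles like these are the raw material for the QoS measures and model inputs the paper discusses.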
Archive | 1995
Joseph J. Martinka; Richard J. Friedrich; Tracy Sienknecht
This position paper highlights a daunting challenge facing the deployment of open distributed applications: the performance management component of the transparency functions. Applications operating in an ODP environment require distribution transparencies possessing comprehensive performance management capabilities, including monitoring and modeling. The transparency functions are controlled by adaptive management agents that react dynamically to meet client QoS requirements given a current set of server and channel QoS capabilities. This technical challenge must work in an open environment with multiple autonomous administrative domains. For this goal to be realized, the ODP architecture must be enhanced. Distributed performance management of “operational” communications has been neglected in favor of the trendy multimedia “streams” communication, in spite of the dominance of the former in current and future applications.
Proceedings of the International DCE Workshop on DCE - The OSF Distributed Computing Environment, Client/Server Model and Beyond | 1993
Joseph J. Martinka; Richard J. Friedrich; Peter M. Friedenbach; Traci F. Sienknecht
This paper summarizes performance results of a systematic evaluation of the Open Software Foundation (OSF) Distributed Computing Environment (DCE) Cell Directory Service (CDS). The CDS is a distributed name database which is used to locate servers and objects within a DCE cell. We designed and built a systematic CDS performance test system and then characterized and projected the performance of important CDS operations with the primary focus on the RPC Name Service Independent (NSI) interface. These results should assist customer application modeling as well as CDS porting and performance tuning by developers using DCE. We believe CDS in its present form has performance tuning opportunities and we provide several recommendations for users of the CDS.
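A small sketch of a name-lookup microbenchmark in the spirit of the CDS evaluation. The in-memory dictionary merely stands in for the directory service; the actual DCE RPC NSI interface and CDS entry names are not reproduced here:

```python
# Illustrative sketch: a generic name-lookup microbenchmark. The dictionary is a
# stand-in for the directory service, not the DCE CDS or its NSI interface.
import random
import time

def benchmark_lookups(directory, names, iterations=100_000):
    """Time repeated lookups and report mean latency and throughput."""
    start = time.perf_counter()
    for _ in range(iterations):
        directory.get(random.choice(names))
    elapsed = time.perf_counter() - start
    return elapsed / iterations, iterations / elapsed

# Hypothetical entries mapping service names to (host, port) bindings.
directory = {f"/.:/servers/app{i}": (f"host{i % 16}", 9000 + i) for i in range(1_000)}
names = list(directory)
mean_s, per_s = benchmark_lookups(directory, names)
print(f"mean lookup {mean_s * 1e6:.2f} us, {per_s:,.0f} lookups/s")
```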
workshop on software and performance | 2007
Jerome Rolia; Ludmila Cherkasova; Richard J. Friedrich
Software Performance Engineering (SPE) methods have been in use for over two decades as an approach to manage the risks of developing systems that fail to satisfy their performance requirements. In general, SPE advocates the use of performance-oriented design principles to guide design decisions and predictive performance models to assess the performance impact of design alternatives. SPE methods have been used successfully to identify and overcome system design blunders early in the Information Technology (IT) project lifecycle, before the blunders are built into a system and become expensive and time consuming to correct. While the methods have been used successfully in some IT project domains, they are not widely applied in the important domain of Enterprise Application (EA) systems. This experience paper considers the reasons for this and explores the role of SPE as new EA platform and data centre technologies become available. We find that many risks traditionally addressed by SPE have been mitigated by the nature of existing EA platforms, the nature of today's IT projects for EA, and an attention to business process modeling. Furthermore, the design and implementation of future EA systems will see some performance risks reduced even further by new EA and IT system management platforms for Next Generation Data Centres. However, we expect that the EA systems to be built will become more complex. As a result, some familiar performance risks will re-emerge along with new runtime risks. We believe that SPE methods can help to mitigate such risks and describe research challenges that must be addressed to make this a reality.
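A minimal example of the kind of predictive performance model SPE applies before a system is built. An open M/M/1 queue is a textbook illustration; the arrival and service rates below are assumptions, and this is not the model used in the paper:

```python
# Illustrative sketch: a textbook M/M/1 queueing model used to compare two design
# alternatives before implementation. Rates are assumed values, not paper data.
def mm1_response_time(arrival_rate, service_rate):
    """Mean response time R = 1 / (mu - lambda) for an M/M/1 queue (requires lambda < mu)."""
    if arrival_rate >= service_rate:
        raise ValueError("system is unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Compare two design alternatives for a service expected to see 80 requests/s.
for label, service_rate in [("baseline node (100 req/s capacity)", 100.0),
                            ("after 20% capacity upgrade (120 req/s)", 120.0)]:
    r = mm1_response_time(arrival_rate=80.0, service_rate=service_rate)
    print(f"{label}: mean response time {r * 1000:.0f} ms")
```

Even this crude model makes the nonlinear payoff of extra capacity visible early, which is the essence of the risk-reduction argument for SPE.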
Archive | 1999
Martin F. Arlitt; Richard J. Friedrich; Tai Y. Jin
Archive | 1997
Richard J. Friedrich; Joseph J. Martinka; Tracy Sienknecht
Archive | 1997
Richard J. Friedrich; Jerome Rolia