Publications
Featured research published by Thomas Keith Clark.
IBM Journal of Research and Development | 2014
Alfredo Alba; Gabriel Alatorre; Christian Bolik; Ann Corrao; Thomas Keith Clark; Sandeep Gopisetty; Robert Haas; Ronen I. Kat; Bryan Langston; Nagapramod Mandagere; Dietmar Noll; Sumant Padbidri; Ramani R. Routray; Yang Song; Chung-Hao Tan; Avishay Traeger
The IT industry is experiencing a disruptive trend in which the entire data center infrastructure is becoming software defined and programmable. IT resources are provisioned and optimized continuously according to a declarative and expressive specification of the workload requirements. Software defined environments facilitate agile IT deployment and responsive data center configurations, enabling rapid creation and optimization of value-added services for clients. However, this fundamental shift introduces new challenges for existing data center management solutions. In this paper, we focus on the storage aspect of the IT infrastructure and investigate its unique challenges as well as opportunities in the emerging software defined environments. Current state-of-the-art software defined storage (SDS) solutions are discussed, followed by our novel framework for advancing existing SDS solutions. In addition, we study the interactions among SDS, software defined compute (SDC), and software defined networking (SDN) to demonstrate the necessity of holistic orchestration and to show that joint optimization can significantly improve the effectiveness and efficiency of the overall software defined environment.
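The core idea of the abstract above, provisioning storage from a declarative specification of workload requirements rather than by manual configuration, can be illustrated with a minimal sketch. All names, fields, and pool definitions below are hypothetical illustrations, not from the paper or any IBM product:

```python
# Hypothetical sketch of declarative provisioning in a software defined
# storage (SDS) controller: the workload states *what* it needs, and the
# controller decides *which* pool satisfies those requirements.

WORKLOAD_SPEC = {
    "name": "analytics-db",
    "min_iops": 5000,          # required I/O operations per second
    "capacity_gb": 2000,       # required capacity
    "durability": "replicated" # required protection level
}

STORAGE_POOLS = [
    {"name": "ssd-pool", "iops": 20000, "free_gb": 5000,  "durability": "replicated"},
    {"name": "hdd-pool", "iops": 2000,  "free_gb": 50000, "durability": "replicated"},
]

def provision(spec, pools):
    """Return the name of the first pool meeting every declared requirement."""
    for pool in pools:
        if (pool["iops"] >= spec["min_iops"]
                and pool["free_gb"] >= spec["capacity_gb"]
                and pool["durability"] == spec["durability"]):
            return pool["name"]
    return None  # no pool can satisfy the specification

print(provision(WORKLOAD_SPEC, STORAGE_POOLS))  # -> ssd-pool
```

A real SDS controller would re-evaluate such specifications continuously and migrate data as pools fill or workloads change; this sketch shows only the declarative matching step.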
IBM Journal of Research and Development | 2008
Paul L. Bradshaw; Karen W. Brannon; Thomas Keith Clark; Kirby Grant Dahman; Sangeeta T. Doraiswamy; Linda Marie Duyanovich; Bruce Light Hillsberg; Wayne C. Hineman; Michael Allen Kaczmarski; Bernhard Julius Klingenberg; Xiaonan Ma; Robert M. Rees
A dramatic shift is underway in how organizations use computer storage. This shift will have a profound impact on storage system design. The requirement for storage of traditional transactional data is being supplemented by the necessity to store information for long periods. In 2005, a total of 2,700 petabytes of storage was allocated worldwide for information that required long-term retention, and this amount is expected to grow to an estimated 27,200 petabytes by 2010. In this paper, we review the requirements for long-term storage of data and describe an innovative approach for developing a highly scalable and flexible archive storage system using commercial off-the-shelf (COTS) components. Such a system is expected to be capable of preserving data for decades, providing efficient policy-based management of the data, and allowing efficient search and access to data regardless of data content or location.
IBM Journal of Research and Development | 2008
David D. Chambliss; Prashant Pandey; Tarun Thakur; Aki Fleshler; Thomas Keith Clark; James Alan Ruddy; Kevin D. Gougherty; Matt Kalos; Lyle LeRoy Merithew; John Glenn Thompson; Harry M. Yudenfriend
The very rapid growth of data-intensive computing makes it attractive to perform computations locally, where data is stored. Large storage systems based on standard system technologies with server virtualization capabilities make it feasible to deploy application-specific processing onto the storage system, without jeopardizing the availability of the core storage service or degrading performance. Moreover, price and capacity differences between mainframes and these storage systems make this deployment attractive. We describe the design of a prototype system by which the IBM DS8000™ storage system can host application extensions, called adjuncts, that improve the operation of IBM z/OS® (mainframe) applications. These extensions process large amounts of data in operations such as searching, sorting, and indexing so that the host application need not even access most of the data. The benefits of application extensions result from applying system resources more efficiently. Application processing at the storage system magnifies the total throughput that can be achieved by the host application. Furthermore, by avoiding the transmission of large volumes of data through multiple hardware and software layers, processing often takes a shorter time at a lower cost.
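The benefit the abstract describes, pushing operations such as searching to the storage system so the host never sees most of the data, can be sketched in a few lines. This is an illustrative toy, not the DS8000 adjunct interface; the record layout and function names are assumptions:

```python
# Hypothetical sketch of storage-side search ("adjunct" style processing):
# the predicate runs near the data, and only matching records cross back
# to the host, avoiding bulk data transfer through the I/O stack.

RECORDS = [
    {"id": i, "status": "active" if i % 10 == 0 else "closed"}
    for i in range(1000)
]

def storage_side_search(records, predicate):
    """Runs at the storage system; returns only the matching records."""
    return [r for r in records if predicate(r)]

# The host asks for active records and receives 100 instead of 1000.
matches = storage_side_search(RECORDS, lambda r: r["status"] == "active")
print(len(matches))  # -> 100
```

The throughput gain in the paper comes from exactly this asymmetry: the filtered result set is far smaller than the data scanned, so the host's channels and software layers handle a fraction of the volume.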
Archive | 2007
David Maxwell Cannon; Thomas Keith Clark; Stephen F. Correl; Toby Lyn Marek; James John Seeger; David M. Wolf; Jason Christopher Young; Michael W. Young
Archive | 2004
Thomas Keith Clark; Ramakrishna Dwivedula; Roger C. Raphael; Robert M. Rees
Archive | 2004
Thomas Keith Clark; Austin F. D'costa; Sudhir Gurunandan Rao; James John Seeger
Archive | 2004
Thomas Keith Clark
Archive | 2004
Thomas Keith Clark; Sudhir Gurunandan Rao
Archive | 2008
Thomas Keith Clark; Jason Christopher Young; Stephen F. Correl; James John Seeger
Archive | 2007
James John Seeger; Thomas Keith Clark; Andreas J. Moran; Jason Christopher Young