Toni Cortes
IBM
Publications
Featured research published by Toni Cortes.
Archive | 2002
Rajkumar Buyya; Toni Cortes; Hai Jin
We propose a new data structure called Scalable Distributed Log Structured Array (LSA*) that generalizes the Log Structured Array (LSA) structure. LSA is intended to provide high availability at a disk-storage server without the small-write penalty of RAID schemes, and is at the heart of the RAMAC Virtual Array (RVA), an IBM product line. LSA* is intended for a Storage Area Network (SAN) of RVAs or of similar TB-scale storage devices, and is also a general-purpose data structure. An LSA* file can scale to sizes orders of magnitude larger than an LSA file. It can also support parallel scans, reducing the time to scan a file in its entirety, as is often required for decision support. Finally, it can preserve high availability even in the presence of entirely unavailable storage nodes, a feature that goes beyond the capabilities of LSA.
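The log-structured write path described above can be sketched in a few lines. This is a minimal illustration of the general LSA idea, not the RVA or LSA* implementation: blocks are never updated in place; every write is appended to the current log segment, and a directory records each logical block's latest physical location, so a small write never triggers a read-modify-write of parity.

```python
class LogStructuredArray:
    """Illustrative sketch: logical blocks are appended to log segments
    and a directory maps each logical address to its newest copy."""

    def __init__(self, segment_size=4):
        self.segment_size = segment_size   # blocks per segment
        self.segments = [[]]               # append-only log segments
        self.directory = {}                # logical block -> (segment, offset)

    def write(self, logical_block, data):
        seg = self.segments[-1]
        if len(seg) == self.segment_size:  # segment full: open a new one
            seg = []
            self.segments.append(seg)
        # One sequential append; the stale copy (if any) is left behind
        # in an older segment for a garbage collector to reclaim later.
        self.directory[logical_block] = (len(self.segments) - 1, len(seg))
        seg.append(data)

    def read(self, logical_block):
        s, off = self.directory[logical_block]
        return self.segments[s][off]
```

Rewriting a block simply redirects the directory entry to the fresh copy; reclaiming the dead space in old segments is the garbage-collection cost that replaces the RAID small-write penalty.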
Archive | 2002
Rajkumar Buyya; Toni Cortes; Hai Jin
Applications executing on massively parallel processors (MPPs) often require a high aggregate bandwidth of low-latency I/O to secondary storage. In many current MPPs, this requirement has been met by supplying internal parallel I/O subsystems that serve as staging areas for data. Typically, the parallel I/O subsystem is composed of I/O nodes that are linked to the same interconnection network that connects the compute nodes, with each I/O node managing its own set of disks. The option of increasing the number of I/O nodes together with the number of compute nodes and with the interconnection network allows for a balanced architecture. We explore the issues motivating the selection of this architecture for secondary storage in MPPs. We survey the designs of some recent and current parallel I/O subsystems, and discuss issues of system configuration, reliability, and file systems.
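The "balanced architecture" point above can be made concrete with a back-of-envelope calculation. All figures below are hypothetical; the helper simply shows that if I/O nodes scale with compute nodes, the compute-to-I/O ratio, and hence the balance, stays fixed.

```python
def io_nodes_needed(compute_nodes, demand_per_compute_mb, bw_per_io_node_mb):
    """Smallest I/O-node count whose aggregate bandwidth covers the
    aggregate demand of the compute nodes (all figures hypothetical)."""
    total_demand = compute_nodes * demand_per_compute_mb
    return -(-total_demand // bw_per_io_node_mb)  # ceiling division

# Doubling the compute partition doubles the required I/O partition,
# so the machine can grow without the I/O subsystem becoming the
# bottleneck -- the balance argument made in the text.
```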
Archive | 2002
Rajkumar Buyya; Toni Cortes; Hai Jin
Parallel I/O is the support of a single parallel application run on many nodes; application data is distributed among the nodes and is read from or written to a single logical file, itself spread across nodes and disks. Parallel I/O is thus a mapping problem from the data layout in node memory to the file layout on disks. Since the mapping can be quite complicated and involve significant data movement, optimizing it is critical for performance. We discuss our general model of the problem, describe four Collective Buffering algorithms we designed, and report experiments testing their performance on an Intel Paragon and an IBM SP2, both housed at NASA Ames Research Center. Our experiments show improvements of up to two orders of magnitude over standard techniques and the potential to deliver peak performance with minimal hardware support.
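The memory-to-disk mapping problem can be sketched with a small rearrangement example. This is an illustration of the collective-buffering idea in the two-phase style, not the paper's four algorithms: each process holds a round-robin (cyclic) slice of a logical array, and before writing, the data is reshuffled so each process owns one contiguous file region and can issue a single large sequential write instead of many small strided ones.

```python
def collective_buffer(per_process_data, num_procs):
    """per_process_data[p][i] holds element p + i*num_procs of the
    logical array (cyclic layout in memory). Return contiguous file
    regions, one per process. Illustrative sketch only."""
    # Phase 1: reconstruct the logical (file) order from the cyclic layout.
    n = sum(len(d) for d in per_process_data)
    logical = [None] * n
    for p, chunk in enumerate(per_process_data):
        for i, x in enumerate(chunk):
            logical[p + i * num_procs] = x
    # Phase 2: split into contiguous regions; each process then issues
    # one large write for its region instead of many small strided writes.
    region = n // num_procs
    return [logical[p * region:(p + 1) * region] for p in range(num_procs)]
```

In a real implementation the rearrangement is done with interprocess communication (message exchange among the compute nodes), trading network traffic for large sequential disk accesses; the lists here only stand in for that exchange.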
IOPADS | 2002
Rajkumar Buyya; Toni Cortes; Hai Jin