Suzanne Rivoire
Sonoma State University
Publications
Featured research published by Suzanne Rivoire.
international conference on management of data | 2007
Suzanne Rivoire; Mehul A. Shah; Parthasarathy Ranganathan; Christos Kozyrakis
The energy efficiency of computer systems is an important concern in a variety of contexts. In data centers, reducing energy use improves operating cost, scalability, reliability, and other factors. For mobile devices, energy consumption directly affects functionality and usability. We propose and motivate JouleSort, an external sort benchmark, for evaluating the energy efficiency of a wide range of computer systems from clusters to handhelds. We list the criteria, challenges, and pitfalls from our experience in creating a fair energy-efficiency benchmark. Using a commercial sort, we demonstrate a JouleSort system that is over 3.5x as energy-efficient as last year's estimated winner. This system is quite different from those currently used in data centers. It consists of a commodity mobile CPU and 13 laptop drives connected by server-style I/O interfaces.
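JouleSort scores systems by the number of records sorted per joule of energy consumed. A minimal sketch of that metric follows; the run figures below are hypothetical, not from the paper.

```python
def joulesort_score(records_sorted, avg_power_watts, elapsed_seconds):
    """JouleSort-style efficiency: records sorted per joule (higher is better)."""
    energy_joules = avg_power_watts * elapsed_seconds
    return records_sorted / energy_joules

# Hypothetical run: 10^8 records sorted in 200 s at 100 W average power,
# i.e. 20,000 J consumed.
score = joulesort_score(1e8, 100.0, 200.0)
print(score)  # 5000.0 records per joule
```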
IEEE Computer | 2007
Suzanne Rivoire; Mehul A. Shah; P. Ranganathan; Christos Kozyrakis; J. Meza
Power consumption and energy efficiency are important factors in the initial design and day-to-day management of computer systems. Researchers and system designers need benchmarks that characterize energy efficiency to evaluate systems and identify promising new technologies. To predict the effects of new designs and configurations, they also need accurate methods of modeling power consumption.
international conference on parallel processing | 2006
Suzanne Rivoire; Rebecca Schultz; Tomofumi Okuda; Christos Kozyrakis
Multi-lane vector processors achieve excellent computational throughput for programs with high data-level parallelism (DLP). However, application phases without significant DLP are unable to fully utilize the datapaths in the vector lanes. In this paper, we propose vector lane threading (VLT), an architectural enhancement that allows idle vector lanes to run short-vector or scalar threads. VLT-enhanced vector hardware can exploit both data-level and thread-level parallelism to achieve higher performance. We investigate implementation alternatives for VLT, focusing mostly on the instruction issue bandwidth requirements. We demonstrate that VLT's area overhead is small. For applications with short vectors, VLT leads to additional speedups of 1.4 to 2.3 over the base vector design. For scalar threads, VLT outperforms a 2-way CMP design by a factor of two. Overall, VLT allows vector processors to reach high computational throughput for a wider range of parallel programs and become a competitive alternative to CMP systems.
international symposium on computer architecture | 2010
Laura Keys; Suzanne Rivoire; John D. Davis
This paper conducts a survey of several small clusters of machines in search of the most energy-efficient data center building block targeting data-intensive computing. We first evaluate the performance and power of single machines from the embedded, mobile, desktop, and server spaces. From this group, we narrow our choices to three system types. We build five-node homogeneous clusters of each type and run Dryad, a distributed execution engine, with a collection of data-intensive workloads to measure the energy consumption per task on each cluster. For this collection of data-intensive workloads, our high-end mobile-class system was, on average, 80% more energy-efficient than a cluster with embedded processors and at least 300% more energy-efficient than a cluster with low-power server processors.
technical symposium on computer science education | 2010
Suzanne Rivoire
The technique of scaling hardware performance through increasing the number of cores on a chip requires programmers to learn to write parallel code that can exploit this hardware. In order to expose students to a variety of multicore programming models, our university offered a breadth-first introduction to multicore and manycore programming for upper-level undergraduates. Our students gained programming experience with three different parallel programming models, two of which are less than five years old and targeted specifically to multicore and manycore computing. Assessments throughout the semester showed that the course gave students a broad base of experience from which they will be able to understand ongoing developments in the field.
ieee international conference on high performance computing data and analytics | 2015
Tom Scogland; Jonathan J. Azose; D. Rohr; Suzanne Rivoire; Natalie J. Bates; Daniel Hackenberg
The last decade has seen power consumption move from an afterthought to the foremost design constraint of new supercomputers. Measuring the power of a supercomputer can be a daunting proposition, and as a result, many published measurements are extrapolated. This paper explores the validity of these extrapolations in the context of inter-node power variability and power variations over time within a run. We characterize power variability across nodes in systems at eight supercomputer centers across the globe. This characterization shows that the current requirement for measurements submitted to the Green500 and others is insufficient, allowing variations of up to 20% due to measurement timing and a further 10--15% due to insufficient sample sizes. This paper proposes new power and energy measurement requirements for supercomputers, some of which have been accepted for use by the Green500 and Top500, to ensure consistent accuracy.
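To see why extrapolated measurements are risky, consider a sketch with hypothetical per-node readings: scaling any single node's power up to the cluster misses the node-to-node spread entirely, and with the ~20% variation reported above the error can be substantial.

```python
# Hypothetical per-node power readings (watts) for five "identical" nodes.
node_powers = [95.0, 100.0, 105.0, 110.0, 115.0]

true_total = sum(node_powers)            # measuring every node: 525 W
extrapolate = lambda p: p * len(node_powers)  # scale one node to the cluster

low = extrapolate(min(node_powers))      # 475 W, about 10% under the truth
high = extrapolate(max(node_powers))     # 575 W, about 10% over the truth
```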
Advances in Computers | 2009
Parthasarathy Ranganathan; Suzanne Rivoire; Justin Douglas Moore
Over the past decade, power and energy have begun to severely constrain component, system, and data center designs. When a data center reaches its maximum provisioned power, it must be replaced or augmented at great expense. In desktops, power consumption and heat contribute to electricity costs as well as noise. Better equipment design and better energy management policies are needed to address these concerns. This chapter details current efforts in energy-efficiency metrics and in power and thermal modeling, delving into specific case studies for each. Various benchmarks are explained, and their effectiveness at measuring power requirements is discussed.
international workshop on energy efficient supercomputing | 2014
Jacob Combs; Jolie Nazor; Rachelle Thysell; Fabian Santiago; Matthew Hardwick; Lowell Olson; Suzanne Rivoire; Chung-Hsing Hsu; Stephen W. Poole
Workload-aware power management and scheduling techniques have the potential to save energy while minimizing negative impact on performance. The effectiveness of these techniques depends on the stability of a workload's power consumption pattern across different input data, resource allocations (e.g. number of cores), and hardware platforms. In this paper, we show that the power consumption behavior of HPC workloads can be accurately captured by concise signatures built from their power traces. We validate this approach using 255 traces collected from 13 high-performance computing workloads on 4 different hardware platforms. First, we use both feature-based and time-series-based distance metrics to cluster our traces, and we quantitatively show that feature-based clusterings segregate traces by workload just as effectively as the more compute- and space-intensive time-series-based clusterings. Second, we demonstrate that unlabeled traces can be classified by workload with over 85% accuracy, based only on these concise statistical signatures.
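The feature-based approach can be sketched as reducing each power trace to a short statistical signature and comparing signatures directly. The feature set and traces below are illustrative assumptions, not the paper's exact choices.

```python
import statistics

def power_signature(trace):
    """Concise statistical signature of a power trace (watts sampled over time)."""
    return (statistics.mean(trace), statistics.pstdev(trace),
            min(trace), max(trace))

def signature_distance(a, b):
    """Euclidean distance between two signatures."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical traces: two runs of one workload, plus an idle trace.
run1 = [95, 120, 118, 121, 119, 96]
run2 = [94, 119, 121, 120, 118, 95]
idle = [60, 61, 60, 62, 61, 60]

# Signatures of the same workload land closer together than to the idle trace.
assert signature_distance(power_signature(run1), power_signature(run2)) < \
       signature_distance(power_signature(run1), power_signature(idle))
```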
international conference on parallel processing | 2014
John D. Davis; Suzanne Rivoire; Moises Goldszmidt
Star-Cap is a high-fidelity, real-time power management system for clusters. Star-Cap's cluster power management module uses software-only power models to implement a proactive power capping mechanism with the ability to distribute a cluster's power budget non-uniformly across nodes. We evaluated Star-Cap on a variety of MapReduce-style workloads running on a low-power cluster. Depending on application and platform, Star-Cap improves cluster throughput by 14-43% compared to using a uniform power cap distribution policy with a purely reactive power capping mechanism, even one based on real physical measurements. Furthermore, Star-Cap maintains cluster throughput while reducing the overall cluster power budget by 14% with no additional capital or operating cost. Star-Cap's proactive power capping mechanism also improves response time to power cap violations by a factor of 2 to 10. Star-Cap's overhead for collecting, computing, and reporting data is less than 1% CPU utilization.
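A non-uniform cap distribution can be sketched as splitting the cluster budget in proportion to each node's predicted demand. This is an illustrative policy with hypothetical numbers, not Star-Cap's actual algorithm.

```python
def distribute_caps(cluster_budget_watts, predicted_demand_watts):
    """Split a cluster power budget across nodes proportionally to predicted demand."""
    total = sum(predicted_demand_watts)
    return [cluster_budget_watts * d / total for d in predicted_demand_watts]

# Hypothetical 4-node cluster with a 400 W budget: a uniform policy gives
# every node 100 W, while a demand-proportional split favors the busy node.
caps = distribute_caps(400.0, [120.0, 80.0, 100.0, 100.0])
print(caps)  # [120.0, 80.0, 100.0, 100.0]
```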
IEEE Computer Architecture Letters | 2012
John D. Davis; Suzanne Rivoire; Moises Goldszmidt; Ehsan K. Ardestani
Studying the energy efficiency of large-scale computer systems requires models of the relationship between resource utilization and power consumption. Prior work on power modeling assumes that models built for a single node will scale to larger groups of machines. However, we find that inter-node variability in homogeneous clusters leads to substantially different models for different nodes. Moreover, ignoring this variability will result in significant prediction errors when scaled to the cluster level. We report on inter-node variation for four homogeneous five-node clusters using embedded, laptop, desktop, and server processors. The variation is manifested quantitatively in the prediction error and qualitatively on the resource utilization variables (features) that are deemed relevant for the models. These results demonstrate the need to sample multiple machines in order to produce accurate cluster models.
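The per-node models in question can be sketched as least-squares linear fits of power against a resource-utilization feature; even with hypothetical data, two nominally identical nodes can produce visibly different coefficients. The model form and measurements below are assumptions for illustration only.

```python
def fit_linear_power_model(utilization, power):
    """Ordinary least squares fit of power = slope * utilization + intercept."""
    n = len(utilization)
    mean_u = sum(utilization) / n
    mean_p = sum(power) / n
    cov = sum((u - mean_u) * (p - mean_p) for u, p in zip(utilization, power))
    var = sum((u - mean_u) ** 2 for u in utilization)
    slope = cov / var
    return slope, mean_p - slope * mean_u

# Hypothetical measurements from two nominally identical nodes.
node_a = fit_linear_power_model([0.1, 0.5, 0.9], [62.0, 78.0, 94.0])
node_b = fit_linear_power_model([0.1, 0.5, 0.9], [60.0, 74.0, 88.0])
# node_a = (40.0, 58.0), node_b = (35.0, 56.5): same hardware, different models.
```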