
Publication


Featured research published by Carla Schlatter Ellis.


Architectural Support for Programming Languages and Operating Systems | 2000

Power aware page allocation

Alvin R. Lebeck; Xiaobo Fan; Heng Zeng; Carla Schlatter Ellis

One of the major challenges of post-PC computing is the need to reduce energy consumption, thereby extending the lifetime of the batteries that power these mobile devices. Memory is a particularly important target for efforts to improve energy efficiency. Memory technology is becoming available that offers power management features such as the ability to put individual chips in any one of several different power modes. In this paper we explore the interaction of page placement with static and dynamic hardware policies to exploit these emerging hardware features. In particular, we consider page allocation policies that can be employed by an informed operating system to complement the hardware power management strategies. We perform experiments using two complementary simulation environments: a trace-driven simulator with workload traces that are representative of mobile computing and an execution-driven simulator with a detailed processor/memory model and a more memory-intensive set of benchmarks (SPEC2000). Our results make a compelling case for a cooperative hardware/software approach for exploiting power-aware memory, reducing Energy·Delay to as little as 45% of that of the best static policy and 1% to 20% of that of a traditional full-power memory.
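A minimal sketch of the flavor of policy discussed above, not the authors' implementation: a page allocator that packs newly touched pages onto as few memory chips as possible so that untouched chips can sit in a low-power mode. Chip count, chip capacity, and the power-state names are assumptions made for illustration.

```python
# Sketch of a power-aware page allocator (illustrative assumptions throughout).
PAGES_PER_CHIP = 4096          # assumed chip capacity, in pages
NUM_CHIPS = 8                  # assumed number of independently managed chips

class PowerAwareAllocator:
    def __init__(self):
        self.free = [PAGES_PER_CHIP] * NUM_CHIPS   # free pages per chip
        self.state = ["nap"] * NUM_CHIPS           # chips start in a low-power mode

    def allocate_page(self):
        # Prefer a chip that is already active and has room, keeping the
        # working set clustered on as few chips as possible.
        for chip in range(NUM_CHIPS):
            if self.state[chip] == "active" and self.free[chip] > 0:
                self.free[chip] -= 1
                return chip
        # Otherwise wake the first chip that still has free pages.
        for chip in range(NUM_CHIPS):
            if self.free[chip] > 0:
                self.state[chip] = "active"        # hardware would pay a wake-up cost here
                self.free[chip] -= 1
                return chip
        raise MemoryError("out of physical pages")

alloc = PowerAwareAllocator()
pages = [alloc.allocate_page() for _ in range(5000)]
print("chips in use:", sorted(set(pages)), "chip states:", alloc.state)
```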


Architectural Support for Programming Languages and Operating Systems | 2002

ECOSystem: managing energy as a first class operating system resource

Heng Zeng; Carla Schlatter Ellis; Alvin R. Lebeck; Amin Vahdat

Energy consumption has recently been widely recognized as a major challenge of computer systems design. This paper explores how to support energy as a first-class operating system resource. Energy, because of its global system nature, presents challenges beyond those of conventional resource management. To meet these challenges we propose the Currentcy Model that unifies energy accounting over diverse hardware components and enables fair allocation of available energy among applications. Our particular goal is to extend battery lifetime by limiting the average discharge rate and to share this limited resource among competing tasks according to user preferences. To demonstrate how our framework supports explicit control over the battery resource we implemented ECOSystem, a modified Linux that incorporates our currentcy model. Experimental results show that ECOSystem accurately accounts for the energy consumed by asynchronous device operation, can achieve a target battery lifetime, and proportionally shares the limited energy resource among competing tasks.
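A minimal sketch of the currentcy idea described above, not the ECOSystem code: each epoch, energy credit is allotted to tasks in proportion to their shares, device operations debit the requesting task's account, and a task with an empty account would be throttled. The epoch budget, unit conversion, and task names are assumptions.

```python
# Sketch of per-epoch currentcy allocation and charging (illustrative numbers).
TARGET_DISCHARGE_MJ_PER_EPOCH = 100.0   # assumed energy budget per epoch (millijoules)

class Task:
    def __init__(self, name, share):
        self.name, self.share, self.currentcy = name, share, 0.0

def allocate_epoch(tasks):
    total_share = sum(t.share for t in tasks)
    for t in tasks:
        # unspent currentcy may carry over, capped at one epoch's budget
        t.currentcy = min(t.currentcy + TARGET_DISCHARGE_MJ_PER_EPOCH * t.share / total_share,
                          TARGET_DISCHARGE_MJ_PER_EPOCH)

def charge(task, device, energy_mj):
    """Debit a device operation's energy to the task that requested it."""
    if task.currentcy < energy_mj:
        return False                      # out of currentcy: the scheduler would throttle the task
    task.currentcy -= energy_mj
    return True

tasks = [Task("browser", share=3), Task("sync-daemon", share=1)]
allocate_epoch(tasks)
print(charge(tasks[0], "wifi", 20.0), charge(tasks[1], "disk", 40.0))
print({t.name: round(t.currentcy, 1) for t in tasks})
```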


International Symposium on Low Power Electronics and Design | 2001

Memory controller policies for DRAM power management

Xiaobo Fan; Carla Schlatter Ellis; Alvin R. Lebeck

The increasing importance of energy efficiency has produced a multitude of hardware devices with various power management features. This paper investigates memory controller policies for manipulating DRAM power states in cache-based systems. We develop an analytic model that approximates the idle time of DRAM chips using an exponential distribution, and validate our model against trace-driven simulations. Our results show that, for our benchmarks, the simple policy of immediately transitioning a DRAM chip to a lower power state when it becomes idle is superior to more sophisticated policies that try to predict DRAM chip idle time.
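A small worked example of the kind of analysis described above, assuming idle gaps really are exponentially distributed; the power and resynchronization numbers are illustrative, not the paper's parameters. It compares the expected energy per idle gap of transitioning immediately against waiting for a threshold before transitioning.

```python
# Expected energy per idle gap under a threshold policy, with exponential idle
# times. All power and resync costs below are assumptions for illustration.
import math

def expected_energy(lam, T, p_high=300e-3, p_low=30e-3, resync_j=10e-6):
    """Expected energy (joules) spent during one idle gap under a threshold-T policy.
    T = 0 is the 'immediately transition when idle' policy."""
    e_before = (1 - math.exp(-lam * T)) / lam     # E[min(idle, T)] spent at high power
    e_after = math.exp(-lam * T) / lam            # E[(idle - T)+] spent at low power
    p_drop = math.exp(-lam * T)                   # probability the transition actually happened
    return p_high * e_before + p_low * e_after + p_drop * resync_j

lam = 1.0 / 50e-6   # assumed mean idle gap of 50 microseconds
for T in (0.0, 10e-6, 100e-6):
    print(f"threshold {T * 1e6:5.1f} us -> {expected_energy(lam, T) * 1e6:.2f} uJ per gap")
```

With these (assumed) constants the immediate-transition policy (T = 0) comes out cheapest, which is the qualitative conclusion stated in the abstract.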


Workshop on Hot Topics in Operating Systems | 1999

The case for higher-level power management

Carla Schlatter Ellis

Reducing the energy consumed in the use of computing devices is becoming a major design challenge. While the problem obviously must be addressed with improved low level technology, we claim there is potential value in a higher level perspective, as well. In our approach, the needs of applications serve as the driving force for the development of power management functions in the operating system and of a power based API that allows a partnership between applications and the system in setting energy policy. The development of a PalmPilot application is used as an illustration. We advocate that reducing energy consumption should be raised to first class status among performance goals when software is being designed. In support of this objective, new programming models, measurement tools, and system support mechanisms must be developed. These needs motivate our Milly Watt Project.
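The paper argues for a power-based API rather than defining one, so the following is a purely hypothetical sketch of what such an application/system partnership could look like: the application declares the fidelities it can run at, and the system picks one that fits the current energy budget. All names and numbers are invented for illustration.

```python
# Hypothetical power-based API sketch; the paper does not define this interface.

class PowerManager:
    def __init__(self, battery_mj, target_lifetime_s):
        self.budget_mw = battery_mj / target_lifetime_s   # average draw we can afford

    def choose_fidelity(self, fidelities):
        """Pick the highest-quality mode whose declared draw fits the budget."""
        affordable = [f for f in fidelities if f["draw_mw"] <= self.budget_mw]
        best = max(affordable, key=lambda f: f["quality"],
                   default=min(fidelities, key=lambda f: f["draw_mw"]))
        return best["name"]

pm = PowerManager(battery_mj=2_000_000, target_lifetime_s=8 * 3600)
modes = [
    {"name": "color-30fps", "quality": 3, "draw_mw": 120},
    {"name": "gray-15fps",  "quality": 2, "draw_mw": 60},
    {"name": "text-only",   "quality": 1, "draw_mw": 25},
]
print(f"budget {pm.budget_mw:.0f} mW -> {pm.choose_fidelity(modes)}")
```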


International Conference on Data Engineering | 2006

A Sampling-Based Approach to Optimizing Top-k Queries in Sensor Networks

Adam Silberstein; Rebecca Braynard; Carla Schlatter Ellis; Kamesh Munagala; Jun Yang

Wireless sensor networks generate a vast amount of data. This data, however, must be sparingly extracted to conserve energy, usually the most precious resource in battery-powered sensors. When approximation is acceptable, a model-driven approach to query processing is effective in saving energy by avoiding contacting nodes whose values can be predicted or are unlikely to be in the result set. To optimize queries such as top-k, however, reasoning directly with models of joint probability distributions can be prohibitively expensive. Instead of using models explicitly, we propose to use samples of past sensor readings. Not only are such samples simple to maintain, but they are also computationally efficient to use in query optimization. With these samples, we can formulate the problem of optimizing approximate top-k queries under an energy constraint as a linear program. We demonstrate the power and flexibility of our sampling-based approach by developing a series of top-k query planning algorithms with linear programming, which are capable of efficiently producing plans with better performance and novel features. We show that our approach is both theoretically sound and practically effective on simulated and real-world datasets.
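A minimal sketch of planning from stored samples as described above; for brevity it enumerates candidate probe sets and picks the cheapest one whose estimated top-k recall meets a target, rather than solving the paper's linear program. The readings, per-node probe costs, and recall target are all assumptions.

```python
# Sample-based planning of an approximate top-k query (illustrative data).
from itertools import combinations

samples = {                      # past joint readings, one entry per sampled round
    "n1": [70, 68, 72, 71],
    "n2": [55, 64, 58, 73],
    "n3": [40, 41, 39, 42],
    "n4": [66, 50, 69, 47],
}
probe_cost = {"n1": 3.0, "n2": 2.0, "n3": 1.0, "n4": 2.5}   # assumed energy per probe

def estimated_recall(probe_set, k=2):
    """Fraction of sampled rounds in which probing only `probe_set` covers the true top-k."""
    rounds = len(next(iter(samples.values())))
    hits = 0
    for r in range(rounds):
        ranking = sorted(samples, key=lambda n: samples[n][r], reverse=True)
        if set(ranking[:k]) <= probe_set:
            hits += 1
    return hits / rounds

best = None
for size in range(1, len(samples) + 1):
    for combo in combinations(samples, size):
        plan = set(combo)
        if estimated_recall(plan) >= 0.75:
            cost = sum(probe_cost[n] for n in plan)
            if best is None or cost < best[0]:
                best = (cost, plan)
print("cheapest plan meeting recall target:", best)
```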


International Conference on Computer Communications | 2000

Differentiated multimedia Web services using quality aware transcoding

Surendar Chandra; Carla Schlatter Ellis; Amin Vahdat

The ability of a Web service to provide low-latency access to its contents is constrained by available network bandwidth. It is important for the service to manage available bandwidth wisely. While providing differentiated quality of service (QoS) is typically enforced through network mechanisms, in this paper we introduce a robust mechanism for managing network resources at the application level. We use transcoding to allow Web servers to customize the size of objects constituting a Web page, and hence the bandwidth consumed by that page, by dynamically varying the size of multimedia objects on a per-client basis. We leverage earlier work on characterizing quality versus size tradeoffs in transcoding JPEG images to dynamically determine the quality and size of the object to transmit. We evaluate the performance benefits of incorporating this information in a series of bandwidth management policies. We develop metrics to measure the performance of our system. We use realistic workloads and access scenarios to drive our system. The principal contribution of this work is the demonstration that it is possible to use informed transcoding techniques to provide differentiated service and to dynamically allocate available bandwidth among different client classes, while delivering a high degree of information content (quality factor) for all clients.
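A minimal sketch of the per-client transcoding decision described above, assuming a previously characterized quality-factor-to-size curve for one image and fixed byte budgets per client class; none of these numbers come from the paper.

```python
# Pick the highest JPEG quality factor that fits a client class's byte budget.
# The quality/size curve and class budgets are illustrative assumptions.

size_at_quality = {90: 48_000, 75: 30_000, 50: 18_000, 25: 9_000}   # bytes
class_budget_bytes = {"premium": 60_000, "basic": 25_000, "modem": 10_000}

def pick_quality(client_class, original_bytes):
    budget = class_budget_bytes[client_class]
    if original_bytes <= budget:
        return None                      # no transcoding needed: send the original
    for q in sorted(size_at_quality, reverse=True):
        if size_at_quality[q] <= budget:
            return q
    return min(size_at_quality)          # fall back to the smallest version

for cls in class_budget_bytes:
    print(cls, "->", pick_quality(cls, original_bytes=48_000))
```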


PACS'03: Proceedings of the Third International Conference on Power-Aware Computer Systems | 2003

The synergy between power-aware memory systems and processor voltage scaling

Xiaobo Fan; Carla Schlatter Ellis; Alvin R. Lebeck

Energy consumption is becoming a limiting factor in the development of computer systems for a range of application domains. Since processor performance comes with a high power cost, there is increased interest in scaling the CPU voltage and clock frequency. Dynamic Voltage Scaling (DVS) is the technique for exploiting hardware capabilities to select an appropriate clock rate and voltage to meet application requirements at the lowest energy cost. Unfortunately, the power and performance contributions of other system components, in particular memory, complicate some of the simple assumptions upon which most DVS algorithms are based. We show that there is a positive synergistic effect between DVS and power-aware memories that can transition into lower power states. This combination can offer greater energy savings than either technique alone (89% vs. 39% and 54%). We argue that memory-based criteria (information that is available in commonly provided hardware counters) are important factors for effective speed-setting in DVS algorithms, and we develop a technique to estimate overall energy consumption based on them.
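A small illustrative calculation, not the paper's estimator: total energy is estimated at each candidate frequency/voltage pair by combining a CPU term with a memory term that grows with run time, which is why the lowest frequency is not automatically the most energy-efficient. Cycle and stall counts stand in for the hardware-counter inputs; every constant is an assumption.

```python
# Memory-aware speed setting: estimate CPU + memory energy at each operating
# point and pick the minimum. All constants below are illustrative assumptions.

op_points = [(1.0, 1.3), (0.8, 1.1), (0.6, 1.0), (0.4, 0.9)]   # (GHz, volts)

cpu_cycles = 2.0e9          # "busy" cycles, as read from a counter (assumed)
mem_stall_seconds = 0.5     # time stalled on DRAM, roughly frequency-independent (assumed)
P_MEM_ACTIVE = 0.3          # watts while memory must stay powered up (assumed)
C_EFF = 1.0e-9              # effective switched capacitance, J per cycle per V^2 (assumed)

def estimated_energy(freq_ghz, volt):
    run_time = cpu_cycles / (freq_ghz * 1e9) + mem_stall_seconds
    e_cpu = C_EFF * cpu_cycles * volt ** 2      # dynamic CPU energy ~ C * V^2 per cycle
    e_mem = P_MEM_ACTIVE * run_time             # memory stays active for the whole run
    return e_cpu + e_mem, run_time

for f, v in op_points:
    e, t = estimated_energy(f, v)
    print(f"{f:.1f} GHz @ {v:.1f} V: {e:.2f} J over {t:.2f} s")
```

With these assumed constants the middle frequency minimizes total energy: slowing the CPU cuts its own energy but stretches the time the memory system stays powered, which is the tension the abstract describes.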


IEEE Journal on Selected Areas in Communications | 2000

Application-level differentiated multimedia Web services using quality aware transcoding

Surendar Chandra; Carla Schlatter Ellis; Amin Vahdat

The ability of a Web service to provide low-latency access to its content is constrained by available network bandwidth. While providing differentiated quality of service (QoS) is typically enforced through network mechanisms, in this paper we introduce a robust mechanism for managing network resources using application-specific characteristics of Web services. We use transcoding to allow Web servers to customize the size of objects constituting a Web page, and hence the bandwidth consumed by that page, by dynamically varying the size of multimedia objects on a per-client basis. We leverage our earlier work on characterizing quality versus size tradeoffs in transcoding JPEG images to supply more information for determining the quality and size of the object to transmit. We evaluate the performance benefits of incorporating this information in a series of bandwidth management policies using realistic workloads and access scenarios to drive our system. The principal contribution of this paper is the demonstration that it is possible to use informed transcoding techniques to provide differentiated service and to dynamically allocate available bandwidth among different client classes, while delivering good quality of information content for all clients. We also show that it is possible to customize multimedia objects to the highly variable network conditions experienced by mobile clients in order to provide acceptable quality and latency depending on the networks used in accessing the service. We show that policies that aggressively transcode the larger images can produce images with quality factor values that closely follow the untranscoded base case while still saving as much as 150 kB. A transcoding policy that has knowledge of the characteristics of the link to the client can avoid as many as 40% of (unnecessary) transcodings.
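A minimal sketch of the link-aware aspect mentioned above: transcode only when the bytes saved outweigh the transcoding delay on the client's link, so clients on fast links receive the original untouched. The transcoding time, sizes, and link rates are illustrative assumptions.

```python
# Decide whether transcoding actually lowers a client's expected latency.
# All sizes, delays, and link rates below are assumptions for illustration.

TRANSCODE_SECONDS = 0.15      # assumed server-side cost to transcode one image

def should_transcode(original_bytes, transcoded_bytes, link_bytes_per_s):
    latency_original = original_bytes / link_bytes_per_s
    latency_transcoded = TRANSCODE_SECONDS + transcoded_bytes / link_bytes_per_s
    return latency_transcoded < latency_original

for label, rate in [("modem", 4_000), ("wireless LAN", 500_000)]:
    print(label, "-> transcode?", should_transcode(48_000, 18_000, rate))
```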


Modeling, Analysis and Simulation of Wireless and Mobile Systems | 2000

Energy estimation tools for the Palm

Todd L. Cignetti; Kirill Komarov; Carla Schlatter Ellis

Reducing the energy consumed in the use of mobile and wireless devices is becoming a major design challenge. While the problem obviously must be addressed with improved low-level technology, we have advocated also considering a higher-level view in which energy management becomes an explicit design goal of the software developer who can be more aware of the needs of applications. In support of this objective, new programming models, measurement tools, and simulation environments must be developed to provide the developer with feedback on the energy implications of various design decisions. In this paper, we describe an energy model and an execution-driven simulator incorporating this model for the PalmOS™ family of devices.
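A minimal sketch of a state-based energy model of the kind described above: total energy is the sum over device states of measured power times time spent in that state, with the residencies supplied by the simulator. The states and power values below are invented for illustration, not the paper's measurements.

```python
# State-based energy model: energy = sum over states of power * residency.
POWER_MW = {                 # assumed average power draw per state, in milliwatts
    "cpu_busy": 104.0,
    "cpu_idle": 33.0,
    "cpu_sleep": 0.5,
    "lcd_on": 25.0,
    "backlight_on": 95.0,
}

def total_energy_mj(residency_seconds):
    """Energy in millijoules given time spent in each state (states may overlap)."""
    return sum(POWER_MW[state] * seconds for state, seconds in residency_seconds.items())

trace = {"cpu_busy": 12.0, "cpu_idle": 40.0, "cpu_sleep": 8.0, "lcd_on": 52.0, "backlight_on": 5.0}
print(f"estimated energy: {total_energy_mj(trace) / 1000:.2f} J")
```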


IEEE Transactions on Parallel and Distributed Systems | 1990

Prefetching in file systems for MIMD multiprocessors

David Kotz; Carla Schlatter Ellis

The question of whether prefetching blocks of the file into the block cache can effectively reduce overall execution time of a parallel computation, even under favorable assumptions, is considered. Experiments have been conducted with an interleaved file system testbed on the Butterfly Plus multiprocessor. Results of these experiments suggest that (1) the hit ratio, the accepted measure in traditional caching studies, may not be an adequate measure of performance when the workload consists of parallel computations and parallel file access patterns, (2) caching with prefetching can significantly improve the hit ratio and the average time to perform an I/O (input/output) operation, and (3) an improvement in overall execution time has been observed in most cases. In spite of these gains, prefetching sometimes results in increased execution times (a negative result, given the optimistic nature of the study). The authors explore why it is not trivial to translate savings on individual I/O requests into consistently better overall performance and identify the key problems that need to be addressed in order to improve the potential of prefetching techniques in the environment.
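A minimal sketch of prefetching into a block cache as studied above, using a one-block sequential lookahead; the testbed's actual policies and parameters differ. On a purely sequential access pattern, the prefetch turns every second miss into a hit.

```python
# Block cache with one-block sequential prefetch (illustrative parameters).
from collections import OrderedDict

class PrefetchingCache:
    def __init__(self, capacity=64):
        self.cache = OrderedDict()          # block number -> data, in LRU order
        self.capacity = capacity
        self.hits = self.misses = 0

    def _install(self, block):
        self.cache[block] = f"data-{block}"
        self.cache.move_to_end(block)
        while len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)
        else:
            self.misses += 1                # demand fetch ...
            self._install(block)
            self._install(block + 1)        # ... plus a one-block sequential prefetch
        return self.cache[block]

cache = PrefetchingCache()
for b in range(100):                         # a purely sequential reader
    cache.read(b)
print("hit ratio:", cache.hits / (cache.hits + cache.misses))
```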

Collaboration


Dive into Carla Schlatter Ellis's collaborations.

Top Co-Authors

Richard P. LaRowe Jr.

Worcester Polytechnic Institute


Angela Dalton

University of Texas at Austin
