
Publication


Featured research published by Meikel Poess.


Very Large Data Bases | 2008

Energy cost, the key challenge of today's data centers: a power consumption analysis of TPC-C results

Meikel Poess; Raghunath Othayoth Nambiar

Historically, performance and price-performance of computer systems have been the key purchasing arguments for customers. With rising energy costs and increasing power use due to the ever-growing demand for computing power (servers, storage, networks), electricity bills have become a significant expense for today's data centers. In the very near future, energy efficiency is expected to be one of the key purchasing arguments. Some performance organizations, such as SPEC, have developed power benchmarks for single servers (SPECpower_ssj2008), but so far, no benchmark exists that measures the power consumption of transaction processing systems. In this paper, we develop a power consumption model based on data readily available in the TPC-C full disclosure report of published benchmarks. We verify our model with measurements taken from three fully scaled and optimized TPC-C configurations including client (middle-tier) systems, database server, and storage subsystem. By applying this model to a subset of 7 years of TPC-C results, we identify the most power-intensive components and demonstrate the existing power consumption trends over time. Assuming similar trends in the future, hardware enhancements alone will not be able to satisfy the demand for energy efficiency. In its outlook, this paper looks at potential hardware and software enhancements to meet the energy efficiency demands of future systems. Realizing the importance of energy efficiency, the Transaction Processing Performance Council (TPC) has formed a working group to look into adding energy efficiency metrics to all its benchmarks. This paper is expected to complement this initiative.
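The model in this paper estimates system power from component counts that appear in TPC-C full disclosure reports. As a rough illustration of that idea (not the paper's actual model or coefficients), a minimal Python sketch with invented per-component wattages:

    # Nameplate-style power model in the spirit of the paper: estimate
    # total system power from component counts listed in a TPC-C full
    # disclosure report. The wattages below are illustrative placeholders,
    # not the coefficients derived in the paper.
    COMPONENT_WATTS = {
        "cpu": 95.0,          # hypothetical average draw per unit, in watts
        "dimm": 5.0,
        "disk_drive": 12.0,
        "hba": 10.0,
    }

    def estimate_power(inventory):
        """inventory maps component name -> unit count."""
        return sum(COMPONENT_WATTS[name] * count
                   for name, count in inventory.items())

    # A small database server as it might appear in a disclosure report.
    server = {"cpu": 4, "dimm": 32, "disk_drive": 90, "hba": 4}
    watts = estimate_power(server)
    print(f"estimated draw: {watts:.0f} W")
    print(f"energy cost per year at $0.10/kWh: "
          f"${watts / 1000 * 24 * 365 * 0.10:,.2f}")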


Very Large Data Bases | 2003

Data compression in Oracle

Meikel Poess; Dmitry Potapov

The Oracle RDBMS recently introduced an innovative compression technique for reducing the size of relational tables. By using a compression algorithm specifically designed for relational data, Oracle is able to compress data much more effectively than standard compression techniques. More significantly, unlike other compression techniques, Oracle incurs virtually no performance penalty for SQL queries accessing compressed tables. In fact, Oracle's compression may provide performance gains for queries accessing large amounts of data, as well as for certain data management operations like backup and recovery. Oracle's compression algorithm is particularly well suited for data warehouses: environments that contain large volumes of historical data and face heavy query workloads. Compression can enable a data warehouse to store several times more raw data without increasing the total disk storage or impacting query performance.
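The technique described here is compression designed around the redundancy of relational data. A toy Python sketch of the general idea of per-block dictionary encoding, where repeated column values are stored once in a symbol table and referenced by index; this is an illustration only, not Oracle's actual on-disk format:

    # Toy block-level dictionary compression: repeated column values in a
    # block are stored once in a per-block symbol table and referenced by
    # small integer indexes. Not Oracle's actual format.
    def compress_block(rows):
        symbols, table, encoded = {}, [], []
        for row in rows:
            enc_row = []
            for value in row:
                if value not in symbols:        # first occurrence: add to table
                    symbols[value] = len(table)
                    table.append(value)
                enc_row.append(symbols[value])  # later occurrences: index only
            encoded.append(enc_row)
        return table, encoded

    def decompress_block(table, encoded):
        return [[table[i] for i in row] for row in encoded]

    block = [("US", "CA", "2003"), ("US", "CA", "2003"), ("US", "NY", "2003")]
    table, enc = compress_block(block)
    assert decompress_block(table, enc) == [list(r) for r in block]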


International Workshop on Testing Database Systems | 2011

The mixed workload CH-benCHmark

Richard L. Cole; Florian Funke; Leo Giakoumakis; Wey Guy; Alfons Kemper; Stefan Krompass; Harumi A. Kuno; Raghunath Nambiar; Thomas Neumann; Meikel Poess; Kai-Uwe Sattler; Michael Seibold; Eric Simon; Florian Waas

While standardized and widely used benchmarks address either operational or real-time Business Intelligence (BI) workloads, the lack of a hybrid benchmark led us to the definition of a new, complex, mixed workload benchmark, called the mixed workload CH-benCHmark. This benchmark bridges the gap between the established single-workload suites of TPC-C for OLTP and TPC-H for OLAP, and executes a complex mixed workload: a transactional workload based on the order entry processing of TPC-C and a corresponding TPC-H-equivalent OLAP query suite run in parallel on the same tables in a single database system. As it is derived from these two most widely used TPC benchmarks, the CH-benCHmark produces results highly relevant to both hybrid and classic single-workload systems.
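The defining property of the benchmark is concurrency: transactional and analytical work run in parallel on the same tables. A minimal driver sketch of that pattern, using SQLite and a trivial one-table schema as stand-ins for a real DBMS, the TPC-C schema, and the TPC-H-equivalent query suite:

    # Minimal mixed-workload driver sketch: one thread issues short
    # OLTP-style transactions while another repeatedly runs an analytical
    # query against the same table.
    import random
    import sqlite3
    import threading
    import time

    DB = "ch_sketch.db"
    setup = sqlite3.connect(DB)
    setup.execute("CREATE TABLE IF NOT EXISTS orders"
                  "(id INTEGER PRIMARY KEY, amount REAL)")
    setup.commit()
    setup.close()

    stop = threading.Event()

    def oltp_worker():
        con = sqlite3.connect(DB, timeout=5)
        while not stop.is_set():                    # new-order-style inserts
            con.execute("INSERT INTO orders(amount) VALUES (?)",
                        (random.uniform(1, 100),))
            con.commit()
        con.close()

    def olap_worker():
        con = sqlite3.connect(DB, timeout=5)
        while not stop.is_set():                    # reporting query, same table
            (total,) = con.execute(
                "SELECT COALESCE(SUM(amount), 0) FROM orders").fetchone()
            print(f"running total: {total:.2f}")
            time.sleep(0.5)
        con.close()

    threads = [threading.Thread(target=f) for f in (oltp_worker, olap_worker)]
    for t in threads:
        t.start()
    time.sleep(2)                                   # measurement interval
    stop.set()
    for t in threads:
        t.join()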


International Conference on Management of Data | 2002

TPC-DS, taking decision support benchmarking to the next level

Meikel Poess; Bryan Frederick Smith; Lubor J. Kollar; Paul Larson

TPC-DS is a new decision support benchmark currently under development by the Transaction Processing Performance Council (TPC). This paper provides a brief overview of the new benchmark. The benchmark models the decision support functions of a retail product supplier, including data loading, multiple types of queries and data maintenance. The database consists of multiple snowflake schemas with shared dimension tables; data is skewed; and the query set is large. Overall, the benchmark is considerably more realistic than previous decision support benchmarks.
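Two of the data characteristics called out here, shared dimension tables and skew, are easy to picture. A small sketch that generates a skewed fact-to-dimension foreign-key column; the weights are illustrative only, not the distributions the benchmark specifies:

    # Sketch: a fact table's foreign-key references into a shared
    # dimension, drawn from a skewed (Zipf-like) distribution so that a
    # few dimension rows dominate.
    import collections
    import random

    dim_keys = list(range(1, 101))          # 100 rows in a shared dimension
    weights = [1 / k for k in dim_keys]     # Zipf-like skew

    fk_column = random.choices(dim_keys, weights=weights, k=10_000)

    top = collections.Counter(fk_column).most_common(3)
    print("most referenced dimension keys:", top)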


Workshop on Software and Performance | 2004

MUDD: a multi-dimensional data generator

John M. Stephens; Meikel Poess

Today's business intelligence systems consist of hundreds of processors with disk subsystems able to handle multiple gigabytes of I/O bandwidth. These systems usually contain terabytes of data. Evaluating database system performance of such systems often requires generating synthetic data with well-defined statistical properties. To simulate different scenarios, it is important to vary statistical properties, including row counts of tables. Foremost, in order to analyze large-scale systems, data generators need to be able to produce hundreds of terabytes of data in a timely fashion. In this paper we present MUDD, a multi-dimensional data generator. Originally designed for TPC-DS, a decision support benchmark being developed by the TPC, MUDD is able to generate up to 100 terabytes of flat-file data in hours, utilizing modern multiprocessor architectures, including clusters. Its novel design separates data generation algorithms from data distribution definitions, enabling users to adjust their workload to individual needs and different scenarios.
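MUDD's two design points, chunked parallel generation and pluggable distribution definitions, can be sketched as follows. This is a toy stand-in, not MUDD's implementation:

    # Toy stand-in for MUDD's two design ideas: rows are produced in
    # independent, seeded chunks (so generation parallelizes across
    # processes or cluster nodes), and the statistical distribution is a
    # pluggable parameter rather than baked into the generator.
    import random
    from multiprocessing import Pool

    def uniform_dist(rng):                  # one pluggable distribution
        return rng.randint(1, 1000)

    def skewed_dist(rng):                   # another: min of two uniforms skews low
        return min(rng.randint(1, 1000), rng.randint(1, 1000))

    def generate_chunk(args):
        chunk_id, rows, dist = args
        rng = random.Random(chunk_id)       # per-chunk seed: reproducible
        return [(chunk_id * rows + i, dist(rng)) for i in range(rows)]

    if __name__ == "__main__":
        tasks = [(c, 1_000, skewed_dist) for c in range(8)]
        with Pool(4) as pool:               # 4 workers stand in for a cluster
            chunks = pool.map(generate_chunk, tasks)
        print(sum(len(c) for c in chunks), "rows generated")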


Energy-Efficient Computing and Networking | 2010

Energy benchmarks: a detailed analysis

Meikel Poess; Raghunath Othayoth Nambiar; Kushagra Vaid; John M. Stephens; Karl Huppler; Evan Haines

In light of increasing energy costs and energy consciousness, industry standards organizations such as the Transaction Processing Performance Council (TPC), the Standard Performance Evaluation Corporation (SPEC), and the Storage Performance Council (SPC), as well as the U.S. Environmental Protection Agency, have developed tests to measure the energy consumption of computer systems. Although all of these consortia aim to standardize power consumption measurement using benchmarks, ultimately to reduce overall power consumption and to aid purchase decisions, their methodologies differ slightly. For instance, some organizations developed specialized benchmarks while others added energy metrics to existing benchmarks. In this paper we give a comprehensive overview of the currently available energy benchmarks, followed by an in-depth analysis of their commonalities and differences.
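Despite their methodological differences, the surveyed benchmarks all reduce to some form of work completed per unit of energy over measured intervals. A minimal sketch of that shared computation, with invented numbers; each standard defines its own intervals, load levels, and reporting rules:

    # The common core of the surveyed metrics: work completed divided by
    # energy consumed over measured intervals. All numbers are invented.
    intervals = [
        # (operations completed, average watts, seconds measured)
        (1_200_000, 310.0, 240),   # full load
        (600_000, 220.0, 240),     # half load
        (0, 140.0, 240),           # idle
    ]

    total_ops = sum(ops for ops, _, _ in intervals)
    total_joules = sum(watts * secs for _, watts, secs in intervals)
    total_watt_hours = total_joules / 3600

    print(f"overall: {total_ops / total_watt_hours:,.0f} operations per watt-hour")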


Lecture Notes in Computer Science | 2011

Performance Evaluation, Measurement and Characterization of Complex Systems

Raghunath Nambiar; Meikel Poess

Graph Database Management Systems (GDBs) are gaining popularity. They are used to analyze huge graph datasets that appear naturally in many application areas to model interrelated data. The objective of this paper is to raise a new topic of discussion in the benchmarking community and to provide practitioners with a set of basic guidelines for GDB benchmarking. We strongly believe that GDBs will become an important player in the field of data analysis, and with that, their performance and capabilities will also become important. For this reason, we discuss those aspects that are important from our perspective, i.e., the characteristics of the graphs to be included in the benchmark, the characteristics of the queries that are important in graph analysis applications, and the evaluation workbench.


Very Large Data Bases | 2004

Generating thousand benchmark queries in seconds

Meikel Poess; John M. Stephens

The combination of an exponential growth in the amount of data managed by a typical business intelligence system and the increased competitiveness of a global economy has propelled decision support systems (DSS) from the role of exploratory tools employed by a few visionary companies to a core requirement for a competitive enterprise. That same maturation has often resulted in a selection process that requires an ever more critical system evaluation and selection to be completed in an increasingly short period of time. While there have been some advances in the generation of data sets for system evaluation (see [3]), the quantification of query performance has often relied on models and methodologies that were developed for systems that were more simplistic, less dynamic, and less central to a successful business. In this paper we present QGEN, a flexible, high-level query generator optimized for decision support system evaluation. QGEN is able to generate arbitrary query sets that conform to a selected statistical profile without requiring that the queries be statically defined or disclosed prior to testing. Its novel design links query syntax with abstracted data distributions, enabling users to parameterize their query workload to match an emerging access pattern or data set modification. This results in query sets that retain comparability for system comparisons while reflecting the inherent dynamism of operational systems, and which provide a broad range of syntactic and semantic coverage, while remaining focused on appropriate commonalities within a particular evaluation process or business segment.
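QGEN's central idea, fixing query syntax in templates while drawing substitution values from abstracted data distributions, can be illustrated in a few lines. A toy sketch under those assumptions, not QGEN's actual design; the table and column names are invented:

    # Toy template-driven query generation: syntax is fixed by a template,
    # substitution values come from pluggable distributions, and a seed
    # makes the generated set reproducible without being statically defined.
    import random

    TEMPLATE = ("SELECT SUM(sales) FROM store_sales "
                "WHERE region = '{region}' AND year BETWEEN {lo} AND {hi}")

    def region_dist(rng):                   # skewed categorical distribution
        return rng.choices(["WEST", "EAST", "NORTH", "SOUTH"],
                           weights=[4, 3, 2, 1])[0]

    def generate_queries(n, seed=0):
        rng = random.Random(seed)
        for _ in range(n):
            lo = rng.randint(1998, 2002)
            yield TEMPLATE.format(region=region_dist(rng),
                                  lo=lo, hi=lo + rng.randint(0, 3))

    for q in generate_queries(3):
        print(q)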


Technology Conference on Performance Evaluation and Benchmarking | 2012

Setting the Direction for Big Data Benchmark Standards

Chaitanya K. Baru; Milind Bhandarkar; Raghunath Nambiar; Meikel Poess; Tilmann Rabl

The Workshop on Big Data Benchmarking (WBDB2012), held on May 8-9, 2012 in San Jose, CA, served as an incubator for several promising approaches to define a big data benchmark standard for industry. Through an open forum for discussions on a number of issues related to big data benchmarking, including definitions of big data terms, benchmark processes, and auditing, the attendees were able to extend their own view of big data benchmarking as well as communicate their own ideas, which ultimately led to the formation of small working groups to continue collaborative work in this area. In this paper, we summarize the discussions and outcomes from this first workshop, which was attended by about 60 invitees representing 45 different organizations, including industry and academia. Workshop attendees were selected based on their experience and expertise in the areas of management of big data, database systems, performance benchmarking, and big data applications. There was consensus among participants about both the need and the opportunity for defining benchmarks to capture the end-to-end aspects of big data applications. Following the model of TPC benchmarks, it was felt that big data benchmarks should not only include metrics for performance, but also price/performance, along with a sound foundation for fair comparison through audit mechanisms. Additionally, the benchmarks should consider several costs relevant to big data systems, including total cost of acquisition, setup cost, and the total cost of ownership, including energy cost. The second Workshop on Big Data Benchmarking will be held in December 2012 in Pune, India, and the third meeting is being planned for July 2013 in Xi'an, China.


Archive | 2013

Selected Topics in Performance Evaluation and Benchmarking

Raghunath Nambiar; Meikel Poess

The TPC has played, and continues to play, a crucial role in providing the computer industry with relevant standards for total system performance, price-performance and energy efficiency comparisons. Historically known for database-centric standards, the TPC is now developing standards for consolidation using virtualization technologies and multi-source data integration, and exploring new ideas such as Big Data and Big Data Analytics to keep pace with rapidly changing industry demands. This paper gives a high level overview of the current state of the TPC in terms of existing standards, standards under development and future outlook.

Collaboration


Dive into Meikel Poess's collaborations.

Top Co-Authors

Tilmann Rabl
Technical University of Berlin

Anne Koziolek
Karlsruhe Institute of Technology