Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Raghunath Nambiar is active.

Publication


Featured research published by Raghunath Nambiar.


International Workshop on Testing Database Systems | 2011

The mixed workload CH-benCHmark

Richard L. Cole; Florian Funke; Leo Giakoumakis; Wey Guy; Alfons Kemper; Stefan Krompass; Harumi A. Kuno; Raghunath Nambiar; Thomas Neumann; Meikel Poess; Kai-Uwe Sattler; Michael Seibold; Eric Simon; Florian Waas

While standardized and widely used benchmarks address either operational or real-time Business Intelligence (BI) workloads, the lack of a hybrid benchmark led us to the definition of a new, complex, mixed workload benchmark, called the mixed workload CH-benCHmark. This benchmark bridges the gap between the established single-workload suites of TPC-C for OLTP and TPC-H for OLAP, and executes a complex mixed workload: a transactional workload based on the order entry processing of TPC-C and a corresponding TPC-H-equivalent OLAP query suite run in parallel on the same tables in a single database system. As it is derived from these two most widely used TPC benchmarks, the CH-benCHmark produces results highly relevant to both hybrid and classic single-workload systems.
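The core idea, a TPC-C-style transactional stream and TPC-H-style analytical queries running concurrently against the same tables, can be sketched roughly as follows; the schema, statements, and run length are simplified stand-ins, not the actual CH-benCHmark definitions.

```python
# Minimal sketch of a CH-benCHmark-style mixed workload: an OLTP thread and an
# OLAP thread operate concurrently on the SAME table. The schema and statements
# are simplified placeholders, not the actual CH-benCHmark definitions.
import sqlite3, threading, time

DB = "chbench_demo.db"

def setup():
    con = sqlite3.connect(DB)
    con.execute("PRAGMA journal_mode=WAL")  # reduce reader/writer contention
    con.execute("CREATE TABLE IF NOT EXISTS orders "
                "(id INTEGER PRIMARY KEY, district INTEGER, amount REAL)")
    con.execute("DELETE FROM orders")
    con.commit()
    con.close()

def oltp_worker(stop, counters):
    # Order-entry-style transactions: short inserts, committed individually.
    con = sqlite3.connect(DB, timeout=30)
    i = 0
    while not stop.is_set():
        con.execute("INSERT INTO orders (district, amount) VALUES (?, ?)",
                    (i % 10, float(i % 100)))
        con.commit()
        i += 1
    counters["transactions"] = i
    con.close()

def olap_worker(stop, counters):
    # TPC-H-like analytical queries scanning the same table in parallel.
    con = sqlite3.connect(DB, timeout=30)
    q = 0
    while not stop.is_set():
        con.execute("SELECT district, COUNT(*), SUM(amount) "
                    "FROM orders GROUP BY district").fetchall()
        q += 1
    counters["queries"] = q
    con.close()

if __name__ == "__main__":
    setup()
    stop, counters = threading.Event(), {}
    threads = [threading.Thread(target=oltp_worker, args=(stop, counters)),
               threading.Thread(target=olap_worker, args=(stop, counters))]
    for t in threads:
        t.start()
    time.sleep(5)  # run the mixed workload for a few seconds
    stop.set()
    for t in threads:
        t.join()
    print("OLTP transactions:", counters.get("transactions"),
          "| OLAP queries:", counters.get("queries"))
```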


International Conference on Big Data | 2013

A look at challenges and opportunities of Big Data analytics in healthcare

Raghunath Nambiar; Ruchie Bhardwaj; Adhiraaj Sethi; Rajesh Vargheese

Big Data analytics can revolutionize the healthcare industry. It can improve operational efficiencies, help predict and plan responses to disease epidemics, improve the quality of monitoring of clinical trials, and optimize healthcare spending at all levels, from patients to hospital systems to governments. This paper provides an overview of Big Data, its applicability in healthcare, some of the work in progress, and a future outlook on how Big Data analytics can improve overall quality in healthcare systems.


Lecture Notes in Computer Science | 2011

Performance Evaluation, Measurement and Characterization of Complex Systems

Raghunath Nambiar; Meikel Poess

Graph Database Management systems (GDBs) are gaining popularity. They are used to analyze huge graph datasets that arise naturally in many application areas to model interrelated data. The objective of this paper is to raise a new topic of discussion in the benchmarking community and to give practitioners a set of basic guidelines for GDB benchmarking. We strongly believe that GDBs will become an important player in the data analysis market, and with that, their performance and capabilities will also become important. For this reason, we discuss those aspects that are important from our perspective, i.e., the characteristics of the graphs to be included in the benchmark, the characteristics of the queries that are important in graph analysis applications, and the evaluation workbench.
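To make the phrase "characteristics of the queries that are important in graph analysis applications" concrete, here is a minimal example of one common query shape, a k-hop neighbourhood expansion over an adjacency list; the graph and the query are illustrative only and are not part of the paper's proposed benchmark.

```python
# Minimal example of a common graph-analysis query shape: a k-hop
# neighbourhood expansion (breadth-first) over an adjacency list.
# The graph below is a made-up illustration, not a benchmark dataset.
from collections import deque

def k_hop_neighbourhood(adj, start, k):
    """Return all vertices reachable from `start` within k edges."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen - {start}

if __name__ == "__main__":
    graph = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": ["f"], "e": []}
    print(k_hop_neighbourhood(graph, "a", 2))  # expected: {'b', 'c', 'd', 'e'}
```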


Technology Conference on Performance Evaluation and Benchmarking | 2012

Setting the Direction for Big Data Benchmark Standards

Chaitanya K. Baru; Milind Bhandarkar; Raghunath Nambiar; Meikel Poess; Tilmann Rabl

The Workshop on Big Data Benchmarking (WBDB2012), held on May 8-9, 2012 in San Jose, CA, served as an incubator for several promising approaches to define a big data benchmark standard for industry. Through an open forum for discussions on a number of issues related to big data benchmarking, including definitions of big data terms, benchmark processes, and auditing, the attendees were able to extend their own view of big data benchmarking as well as communicate their own ideas, which ultimately led to the formation of small working groups to continue collaborative work in this area. In this paper, we summarize the discussions and outcomes from this first workshop, which was attended by about 60 invitees representing 45 different organizations, including industry and academia. Workshop attendees were selected based on their experience and expertise in the areas of management of big data, database systems, performance benchmarking, and big data applications. There was consensus among participants about both the need and the opportunity for defining benchmarks to capture the end-to-end aspects of big data applications. Following the model of TPC benchmarks, it was felt that big data benchmarks should not only include metrics for performance, but also price/performance, along with a sound foundation for fair comparison through audit mechanisms. Additionally, the benchmarks should consider several costs relevant to big data systems including total cost of acquisition, setup cost, and the total cost of ownership, including energy cost. The second Workshop on Big Data Benchmarking will be held in December 2012 in Pune, India, and the third meeting is being planned for July 2013 in Xi’an, China.
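As a back-of-the-envelope illustration of the price/performance and cost considerations mentioned above, the figures below are entirely made up; the point is only that total cost divided by a performance metric yields a single comparable number.

```python
# Illustrative arithmetic only: hypothetical costs and a hypothetical
# performance number, combined in the TPC-style price/performance ratio
# discussed above (total cost divided by the performance metric).
acquisition_cost = 250_000.0   # hardware and software (USD, made up)
setup_cost       =  15_000.0   # installation and tuning (made up)
energy_cost      =  35_000.0   # multi-year energy estimate (made up)
total_cost = acquisition_cost + setup_cost + energy_cost

performance = 1_000.0          # e.g. queries per hour (made up)

print(f"price/performance = {total_cost / performance:.2f} USD per unit of work")
```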


Archive | 2013

Selected Topics in Performance Evaluation and Benchmarking

Raghunath Nambiar; Meikel Poess



International Conference on Data Engineering | 2014

YCSB+T: Benchmarking web-scale transactional databases

Akon Dey; Alan Fekete; Raghunath Nambiar; Uwe Röhm

Database system benchmarks like TPC-C and TPC-E focus on emulating database applications to compare different DBMS implementations. These benchmarks use carefully constructed queries executed within the context of transactions to exercise specific RDBMS features, and measure the throughput achieved. Cloud services benchmark frameworks like YCSB, on the other hand, are designed for performance evaluation of distributed NoSQL key-value stores, early examples of which did not support transactions, and so the benchmarks use single operations that are not inside transactions. Recent implementations of web-scale distributed NoSQL systems, such as Spanner and Percolator, offer transaction features to cater to new web-scale applications. This has exposed a gap in standard benchmarks. We identify the issues that need to be addressed when evaluating transaction support in NoSQL databases. We describe YCSB+T, an extension of YCSB that wraps database operations within transactions. In this framework, we include a validation stage to detect and quantify database anomalies resulting from any workload, and we gather metrics that measure transactional overhead. We have designed a specific workload called the Closed Economy Workload (CEW), which can run within the YCSB+T framework. We share our experience with using CEW to evaluate some NoSQL systems.
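A rough sketch of the two ideas described above, wrapping key-value operations in transactions and validating a closed-economy invariant afterwards, might look like the following; the toy store, the transfer operation, and the invariant check are hypothetical illustrations, not the actual YCSB+T or CEW code.

```python
# Sketch of a YCSB+T-style idea: wrap read-modify-write operations in a
# transaction and run a validation phase that checks a closed-economy
# invariant (total balance must stay constant). The store and operation
# names are illustrative only, not the actual YCSB+T implementation.
import random

class TxKVStore:
    """Toy transactional key-value store (single-threaded, for illustration)."""
    def __init__(self, data):
        self.data = dict(data)

    def transaction(self, ops):
        # Apply all operations to a private copy, then install it atomically.
        snapshot = dict(self.data)
        ops(snapshot)
        self.data = snapshot

def transfer(store, src, dst, amount):
    # One closed-economy operation: move money between two accounts
    # inside a single transaction.
    def ops(view):
        if view[src] >= amount:
            view[src] -= amount
            view[dst] += amount
    store.transaction(ops)

def validate(store, expected_total):
    # Validation phase: any deviation from the initial total is an anomaly.
    total = sum(store.data.values())
    return total == expected_total, total

if __name__ == "__main__":
    accounts = {f"acct{i}": 100 for i in range(10)}
    expected_total = sum(accounts.values())
    store = TxKVStore(accounts)
    for _ in range(1000):
        a, b = random.sample(list(store.data), 2)
        transfer(store, a, b, random.randint(1, 20))
    ok, total = validate(store, expected_total)
    print("invariant holds:", ok, "| total balance:", total)
```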


Technology Conference on Performance Evaluation and Benchmarking | 2014

Discussion of BigBench: A Proposed Industry Standard Performance Benchmark for Big Data

Chaitanya K. Baru; Milind Bhandarkar; Carlo Curino; Manuel Danisch; Michael Frank; Bhaskar Gowda; Hans-Arno Jacobsen; Huang Jie; Dileep Kumar; Raghunath Nambiar; Meikel Poess; Francois Raab; Tilmann Rabl; Nishkam Ravi; Kai Sachs; Saptak Sen; Lan Yi; Choonhan Youn

Enterprises perceive a huge opportunity in mining information that can be found in big data. New storage systems and processing paradigms are allowing ever larger data sets to be collected and analyzed. The high demand for data analytics and the rapid development of technologies have led to a sizable ecosystem of big data processing systems. However, the lack of established, standardized benchmarks makes it difficult for users to choose the appropriate systems that suit their requirements. To address this problem, we have developed the BigBench benchmark specification. BigBench is the first end-to-end big data analytics benchmark suite. In this paper, we present the BigBench benchmark and analyze the workload from a technical as well as a business point of view. We characterize the queries in the workload along different dimensions, according to their functional characteristics, and also analyze their runtime behavior. Finally, we evaluate the suitability and relevance of the workload from the point of view of enterprise applications, and discuss potential extensions to the proposed specification in order to cover typical big data processing use cases.


Archive | 2012

Topics in Performance Evaluation, Measurement and Characterization

Raghunath Nambiar; Meikel Poess

Established in 1988, the Transaction Processing Performance Council (TPC) has had a significant impact on the computing industry’s use of industry-standard benchmarks. These benchmarks are widely adopted by systems and software vendors to illustrate performance competitiveness for their existing products, and to improve and monitor the performance of their products under development. Many buyers use TPC benchmark results as points of comparison when purchasing new computing systems and evaluating new technologies. In this paper, the authors look at the contributions of the Transaction Processing Performance Council in shaping the landscape of industry standard benchmarks, from defining fundamentals like performance, price/performance, and energy efficiency, to creating standards for independently auditing and reporting various aspects of the systems under test.


Technology Conference on Performance Evaluation and Benchmarking | 2014

Introducing TPCx-HS: The First Industry Standard for Benchmarking Big Data Systems

Raghunath Nambiar; Meikel Poess; Akon Dey; Paul Cao; Tariq Magdon-Ismail; Da Qi Ren; Andrew Bond

The designation Big Data has become a mainstream buzz phrase across many industries as well as research circles. Today, many companies are making performance claims that are not easily verifiable and comparable in the absence of a neutral industry benchmark. Instead, one of the test suites used to compare the performance of Hadoop-based Big Data systems is TeraSort. While it nicely defines the data set and tasks to measure Big Data Hadoop systems, it lacks a formal specification and enforcement rules that enable the comparison of results across systems. In this paper, we introduce TPCx-HS, the first industry standard Big Data benchmark, designed to stress both hardware and software based on Apache HDFS API-compatible distributions. TPCx-HS extends the workload defined in TeraSort with formal rules for implementation, execution, metric, result verification, publication, and pricing. It can be used to assess a broad range of system topologies and implementation methodologies of Big Data Hadoop systems in a technically rigorous, directly comparable, and vendor-neutral manner.
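As a rough illustration of the kind of metric such a specification defines, the sketch below divides the scale factor by the elapsed hours of an end-to-end run; the phase names and the formula HSph@SF = SF / (T / 3600) are recalled from the TPCx-HS specification and should be treated as assumptions rather than an authoritative restatement.

```python
# Rough sketch of computing a TPCx-HS-style throughput metric from an
# end-to-end run. The phase names and the formula HSph@SF = SF / (T / 3600)
# are recalled from the specification and should be treated as assumptions,
# not as an authoritative restatement of the standard.

def hsph(scale_factor_tb: float, phase_seconds: dict) -> float:
    """Scale factor (TB of data processed) divided by elapsed hours."""
    total_seconds = sum(phase_seconds.values())
    return scale_factor_tb / (total_seconds / 3600.0)

if __name__ == "__main__":
    # Hypothetical timings for one performance run (seconds per phase).
    run = {"HSGen": 1200.0, "HSSort": 5400.0, "HSValidate": 900.0}
    print(f"HSph@1TB = {hsph(1.0, run):.3f}")
```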


Technology Conference on Performance Evaluation and Benchmarking | 2012

TPC Benchmark Roadmap 2012

Raghunath Nambiar; Meikel Poess; Andrew Masland; H. Reza Taheri; Matthew Emmerton; Forrest Carman; Michael Majdalany

The TPC has played, and continues to play, a crucial role in providing the computer industry with relevant standards for total system performance, price-performance, and energy efficiency comparisons. Historically known for database-centric standards, the TPC is now developing standards for consolidation using virtualization technologies and multi-source data integration, and exploring new ideas such as Big Data and Big Data Analytics to keep pace with rapidly changing industry demands. This paper gives a high-level overview of the current state of the TPC in terms of existing standards, standards under development, and the future outlook.

Collaboration


Dive into Raghunath Nambiar's collaborations.

Top Co-Authors

Tilmann Rabl

Technical University of Berlin

Andrew Masland

NEC Corporation of America