
Publication


Featured research published by Mihnea Andrei.


Very Large Data Bases | 2017

SAP HANA adoption of non-volatile memory

Mihnea Andrei; Christian Lemke; Günter Radestock; Robert Schulze; Carsten Thiel; Rolando Blanco; Akanksha Meghlan; Muhammad Sharique; Sebastian Seifert; Surendra Vishnoi; Daniel Booss; Thomas Peh; Ivan Schreter; Werner Thesing; Mehul Wagle; Thomas Willhalm

Non-Volatile RAM (NVRAM) is a novel class of hardware technology which is an interesting blend of two storage paradigms: byte-addressable DRAM and block-addressable storage (e.g. HDD/SSD). Most of the existing enterprise relational data management systems such as SAP HANA have their internal architecture based on the inherent assumption that memory is volatile and base their persistence on explicit handling of block-oriented storage devices. In this paper, we present the early adoption of Non-Volatile Memory within the SAP HANA Database, from the architectural and technical angles. We discuss our architectural choices, dive deeper into a few challenges of the NVRAM integration and their solutions, and share our experimental results. As we present our solutions for the NVRAM integration, we also give, as a basis, a detailed description of the relevant HANA internals.
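The key property NVRAM brings is byte addressability combined with durability: column data can be read and written through ordinary pointers into a mapped region instead of being staged through block I/O. The sketch below illustrates that idea only; it is not SAP HANA code. It assumes a Linux system with a DAX-capable filesystem mounted at the hypothetical path /mnt/pmem, and uses msync as a portable stand-in for the cache-line flush instructions a real NVRAM engine would issue.

```cpp
// Minimal sketch: persisting a column fragment on byte-addressable NVRAM.
// Assumes a DAX-capable filesystem mounted at /mnt/pmem (hypothetical path);
// illustrative only, not SAP HANA's actual implementation.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    const size_t kSize = 1 << 20;  // 1 MiB column fragment
    int fd = open("/mnt/pmem/column.dat", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, kSize) != 0) { perror("ftruncate"); return 1; }

    // Map the file; on NVRAM with DAX, loads and stores go directly to the
    // persistent medium instead of the page cache.
    void* base = mmap(nullptr, kSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    // Write the column data through ordinary pointers (byte addressability).
    auto* values = static_cast<uint32_t*>(base);
    for (uint32_t i = 0; i < kSize / sizeof(uint32_t); ++i) values[i] = i;

    // Make the stores durable. A real NVRAM engine would use cache-line
    // flushes (e.g. CLWB) plus fences; msync is the portable stand-in here.
    if (msync(base, kSize, MS_SYNC) != 0) { perror("msync"); return 1; }

    munmap(base, kSize);
    close(fd);
    std::puts("column fragment persisted");
    return 0;
}
```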


International Conference on Management of Data | 2016

Page As You Go: Piecewise Columnar Access In SAP HANA

Reza Sherkat; Colin Florendo; Mihnea Andrei; Anil Kumar Goel; Anisoara Nica; Peter Bumbulis; Ivan Schreter; Günter Radestock; Christian Bensberg; Daniel Booss; Heiko Gerwens

In-memory columnar databases such as SAP HANA achieve extreme performance by means of vector processing over logical units of main-memory-resident columns. The core in-memory algorithms can be challenged when the working set of an application does not fit into main memory. To deal with memory pressure, most in-memory columnar databases evict candidate columns (or tables) using a set of heuristics gleaned from the recent workload. As an alternative approach, we propose to reduce the unit of load and eviction from a column to a contiguous portion of the in-memory columnar representation, which we call a page. In this paper, we adapt the core algorithms to operate with partially loaded columns while preserving the performance benefits of vector processing. Our approach has two key advantages. First, partial column loading reduces the mandatory memory footprint of each column, making more memory available for other purposes. Second, partial eviction extends the in-memory lifetime of a partially loaded column. We present a new in-memory columnar implementation for our approach, which we term the page loadable column. We design a new persistency layout and access algorithms for the encoded data vector of the column, the order-preserving dictionary, and the inverted index. We compare the performance attributes of page loadable columns with those of regular in-memory columns and present a use case for page loadable columns for cold data in data aging scenarios. Page loadable columns are completely integrated in SAP HANA, and we present extensive experimental results that quantify the performance overhead and the resource consumption when these columns are deployed.
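To make the footprint argument concrete, here is a minimal sketch of a column scanned one page at a time, so that only a single page of the encoded data vector is resident during the scan. The PageLoadableColumn class, its kPageValues page size, and the file-backed layout are illustrative assumptions, not SAP HANA's internal API; HANA manages such pages through its buffer pool rather than rereading them from a stream, but the memory-footprint reasoning is the same.

```cpp
// Minimal sketch of the "page loadable column" idea: a scan touches one
// contiguous page of the encoded data vector at a time, so only the page
// currently needed has to be memory resident. Names are illustrative.
#include <algorithm>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

class PageLoadableColumn {
public:
    static constexpr size_t kPageValues = 4096;  // values per page

    PageLoadableColumn(std::string path, size_t num_values)
        : path_(std::move(path)), num_values_(num_values) {}

    // Count values equal to `key`, loading (then dropping) one page at a time.
    size_t count_equal(uint32_t key) const {
        size_t hits = 0;
        std::ifstream in(path_, std::ios::binary);
        std::vector<uint32_t> page(kPageValues);
        for (size_t off = 0; off < num_values_; off += kPageValues) {
            size_t n = std::min(kPageValues, num_values_ - off);
            in.seekg(static_cast<std::streamoff>(off * sizeof(uint32_t)));
            in.read(reinterpret_cast<char*>(page.data()), n * sizeof(uint32_t));
            for (size_t i = 0; i < n; ++i)  // tight, vectorizable inner loop
                hits += (page[i] == key);
            // `page` is reused on the next iteration: the previous page is
            // effectively evicted, keeping the footprint at one page.
        }
        return hits;
    }

private:
    std::string path_;
    size_t num_values_;
};
```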


Very Large Data Bases | 2017

Statisticum: data statistics management in SAP HANA

Anisoara Nica; Reza Sherkat; Mihnea Andrei; Xun Cheng; Martin Heidel; Christian Bensberg; Heiko Gerwens

We introduce a new concept of leveraging traditional data statistics as dynamic data integrity constraints. These data statistics produce transient database constraints, which are valid as long as they can be proven to be consistent with the current data. We denote this type of data statistics as constraint data statistics, the properties needed for consistency checking as consistency metadata, and their implied integrity constraints as implied data statistics constraints (implied constraints for short). Implied constraints are valid integrity constraints that serve as powerful query optimization tools, employed just like traditional database constraints in semantic query transformation (aka query reformulation), partition pruning, runtime optimization, and semi-join reduction, to name a few. To our knowledge, this is the first work introducing this novel and powerful concept of deriving implied integrity constraints from data statistics. We discuss theoretical aspects of the constraint data statistics concept and their integration into query processing. We present the current architecture of data statistics management in SAP HANA and detail how constraint data statistics are designed and integrated into this architecture. As an instantiation of this framework, we consider dynamic partition pruning for data aging scenarios. We discuss our current implementation of constraint data statistics objects in SAP HANA that can be used for dynamic partition pruning. We enumerate their properties and show how consistency checking for implied integrity constraints is supported in the data statistics architecture. Our experimental evaluations on the TPC-H benchmark and a real customer application confirm the effectiveness of the implied integrity constraints: (1) for 59% of TPC-H queries, constraint data statistics utilization results in pruning cold partitions and reducing memory consumption, and (2) we observe up to 3 orders of magnitude speed-up in query processing time for a real customer running an S/4HANA application.
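A concrete way to picture an implied constraint is a per-partition min/max statistic: as long as the consistency metadata proves the statistic still reflects the current data, the optimizer may treat "all keys lie in [min, max]" as a valid constraint and skip partitions the predicate cannot match. The sketch below illustrates only that pruning step; PartitionStats, the consistent flag, and the date encoding are illustrative assumptions, not the actual Statisticum or SAP HANA interfaces.

```cpp
// Minimal sketch of a min/max data statistic acting as an implied integrity
// constraint for partition pruning. The `consistent` flag stands in for the
// paper's consistency metadata; all names here are illustrative.
#include <cstdint>
#include <iostream>
#include <vector>

struct PartitionStats {
    int64_t min_key;   // e.g. minimum order date in the partition (YYYYMMDD)
    int64_t max_key;   // e.g. maximum order date in the partition (YYYYMMDD)
    bool consistent;   // proven consistent with the current data?
};

// Return the partitions that must be scanned for the predicate key >= lower_bound.
// A partition may be skipped only if its statistic is consistent AND its implied
// constraint (all keys <= max_key) rules the predicate out.
std::vector<size_t> prune(const std::vector<PartitionStats>& parts, int64_t lower_bound) {
    std::vector<size_t> to_scan;
    for (size_t i = 0; i < parts.size(); ++i) {
        bool can_prune = parts[i].consistent && parts[i].max_key < lower_bound;
        if (!can_prune) to_scan.push_back(i);
    }
    return to_scan;
}

int main() {
    // Three partitions in a data-aging layout: two cold, one hot.
    std::vector<PartitionStats> parts = {
        {20150101, 20151231, true},   // cold 2015
        {20160101, 20161231, true},   // cold 2016
        {20170101, 20171231, true},   // hot  2017
    };
    for (size_t i : prune(parts, 20170101))          // query: order date >= 2017-01-01
        std::cout << "scan partition " << i << "\n"; // only the hot partition survives
    return 0;
}
```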


International Conference on Management of Data | 2009

Ordering, distinctness, aggregation, partitioning and DQP optimization in Sybase ASE 15

Mihnea Andrei; Xun Cheng; Sudipto Rai Chowdhuri; Curtis Johnson; Edwin Anthony Seputis

The Sybase ASE RDBMS version 15 was subject to major enhancements, including semantic partitions and a full QP rewrite. The new ASE QP supports horizontal and vertical parallel processing over semantically partitioned tables, as well as many other modern QP techniques, such as cost-based eager aggregation and cost-based join relocation for DQP. In the new query optimizer, the ordering, distinctness, aggregation, partitioning, and DQP optimizations are based on a common framework: plan fragment equivalence classes and logical properties. Our main outcomes are (a) an eager enforcement policy for ordering, partitioning, and DQP location; (b) a distinctness and aggregation optimization policy, opportunistically based on the eager ordering enforcement, with an optimization-time computational complexity similar to that of join processing; and (c) support for the user to force all of the above optimizer decisions, while still guaranteeing a valid plan, based on the Abstract Plan technology. We describe the implementation of this solution in the ASE 15 optimizer. Finally, we give our experimental results: the generation of such complex plans comes with a small increase in the optimizer's search space size, hence within an acceptable optimization time; at execution, we have obtained performance improvements of orders of magnitude for some queries.
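The eager enforcement idea can be pictured as a toy cost comparison between two members of one plan-fragment equivalence class: enforce a required ordering eagerly below the join (enabling a merge join) or lazily above it (sorting the join result). The cost formulas, factors, and cardinalities below are made-up illustrations under that assumption, not ASE 15's actual cost model.

```cpp
// Minimal sketch of eager vs. lazy enforcement of an ordering property.
// Costs and constants are hypothetical stand-ins, not the ASE optimizer.
#include <cstdio>

struct Plan { double cost; bool ordered_on_join_key; };

// Eager: sort both inputs below the join, then use a merge join.
Plan eager(double left_rows, double right_rows) {
    double sort_cost = 1.2 * (left_rows + right_rows);       // hypothetical sort factor
    double merge_join_cost = left_rows + right_rows;
    return {sort_cost + merge_join_cost, true};
}

// Lazy: hash join first, then sort the (often smaller) result on top.
Plan lazy(double left_rows, double right_rows, double result_rows) {
    double hash_join_cost = 1.5 * (left_rows + right_rows);  // hypothetical hash factor
    double top_sort_cost = 1.2 * result_rows;
    return {hash_join_cost + top_sort_cost, true};
}

int main() {
    double l = 1e6, r = 1e6, out = 1e5;                       // made-up cardinalities
    Plan e = eager(l, r), z = lazy(l, r, out);
    // Both plans deliver the required ordering; the optimizer keeps the
    // cheaper member of the equivalence class.
    std::printf("eager: %.0f  lazy: %.0f  -> pick %s\n",
                e.cost, z.cost, e.cost < z.cost ? "eager" : "lazy");
    return 0;
}
```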


Archive | 2005

System and Methodology for Parallel Query Optimization Using Semantic-Based Partitioning

Sudipto Rai Chowdhuri; Mihnea Andrei


Archive | 2000

Database system with methodology for reusing cost-based optimization decisions

Mihnea Andrei


Archive | 2005

Database system with methodology for generating bushy nested loop join trees

Mihnea Andrei


Archive | 2003

Database system providing methodology for eager and opportunistic property enforcement

Mihnea Andrei


Archive | 2007

System And Methodology For Automatic Tuning Of Database Query Optimizer

Mihnea Andrei; Xun Cheng; Edwin Anthony Seputis; Xiao Ming Zhou


Archive | 2002

Database system providing methodology for property enforcement

Mihnea Andrei
