
Publication


Featured research published by Jens Lechtenbörger.


Data Warehousing and OLAP | 2006

Research in data warehouse modeling and design: dead or alive?

Stefano Rizzi; Alberto Abelló; Jens Lechtenbörger; Juan Trujillo

Multidimensional modeling requires specialized design techniques. Though a lot has been written about how a data warehouse should be designed, there is no consensus on a design method yet. This paper follows from a wide discussion that took place in Dagstuhl, during the Perspectives Workshop Data Warehousing at the Crossroads, and is aimed at outlining some open issues in modeling and design of data warehouses. More precisely, issues regarding conceptual models, logical models, methods for design, interoperability, and design for new architectures and applications are considered.


Information Systems | 2003

Multidimensional normal forms for data warehouse design

Jens Lechtenbörger; Gottfried Vossen

A data warehouse is an integrated and time-varying collection of data derived from operational data and primarily used in strategic decision making by means of OLAP techniques. Although it is generally agreed that warehouse design is a non-trivial problem and that multidimensional data models as well as star or snowflake schemata are relevant in this context, there exist neither methods for deriving such a schema from an operational database nor measures for evaluating a warehouse schema. In this paper, a sequence of multidimensional normal forms is established that allows reasoning about the quality of conceptual data warehouse schemata in a rigorous manner. These normal forms address traditional database design objectives such as faithfulness, completeness, and freedom from redundancies, as well as the notion of summarizability, which is specific to multidimensional database schemata.
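
To make the summarizability notion concrete, here is a minimal sketch (hypothetical tables and data, not taken from the paper) that builds a tiny star schema in SQLite and contrasts an additive sales measure, which may be summed along the time dimension, with a semi-additive stock level, for which the same roll-up yields a meaningless total.

```python
import sqlite3

# Minimal star schema: one fact table with two measures and a time dimension.
# Hypothetical tables and data, used only to illustrate summarizability.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, month TEXT);
CREATE TABLE fact_inventory (
    date_id   INTEGER REFERENCES dim_date(date_id),
    product   TEXT,
    sales_qty INTEGER,   -- additive: may be summed along every dimension
    stock_qty INTEGER    -- semi-additive: summing over time is meaningless
);
INSERT INTO dim_date VALUES (1, '2003-01'), (2, '2003-01');
INSERT INTO fact_inventory VALUES
    (1, 'widget', 10, 100),
    (2, 'widget',  5,  95);
""")

# Rolling sales up to the month is fine; "summing" stock over two days is not.
for row in con.execute("""
    SELECT d.month,
           SUM(f.sales_qty) AS total_sales,   -- correct roll-up
           SUM(f.stock_qty) AS bogus_stock    -- summarizability violation
    FROM fact_inventory f JOIN dim_date d USING (date_id)
    GROUP BY d.month"""):
    print(row)   # ('2003-01', 15, 195) -- 195 items were never in stock at once
```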


Data and Knowledge Engineering | 2009

A survey on summarizability issues in multidimensional modeling

Jose-Norberto Mazón; Jens Lechtenbörger; Juan Trujillo

The development of a data warehouse (DW) system is based on a conceptual multidimensional model, which provides a high level of abstraction in accurately and expressively describing real-world situations. Once this model is designed, the corresponding logical representation must be obtained as the basis of the implementation of the DW according to one specific technology. However, even when a DW is based on a well-designed conceptual multidimensional model, there is a semantic gap between this model and its logical representation. In particular, this gap complicates an adequate treatment of summarizability issues, which in turn may lead to erroneous results of data analysis tools. Research addressing this topic has produced only partial solutions, and individual terminology used by different parties hinders further progress. Consequently, based on a unifying vocabulary, this survey sheds light on (i) the weak and strong points of current approaches for modeling complex multidimensional structures that reflect real-world situations in a conceptual multidimensional model and (ii) existing mechanisms to avoid summarizability problems when conceptual multidimensional models are being implemented.
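
One summarizability problem covered by such surveys is a many-to-many relationship between facts and a dimension. The hypothetical sketch below (names and figures invented for illustration) shows how rolling sales up through a product-category bridge table silently double-counts revenue.

```python
import sqlite3

# Hypothetical schema: a product may belong to several categories (many-to-many),
# which breaks the strictness condition usually assumed for safe roll-ups.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales  (product TEXT, amount INTEGER);
CREATE TABLE bridge (product TEXT, category TEXT);
INSERT INTO sales  VALUES ('phone', 100);
INSERT INTO bridge VALUES ('phone', 'electronics'), ('phone', 'gadgets');
""")

total  = con.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
by_cat = con.execute("""
    SELECT SUM(s.amount)
    FROM sales s JOIN bridge b USING (product)""").fetchone()[0]

print(total)   # 100 -- actual revenue
print(by_cat)  # 200 -- the same fact counted once per category
```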


Data and Knowledge Engineering | 2007

Reconciling requirement-driven data warehouses with data sources via multidimensional normal forms

Jose-Norberto Mazón; Juan Trujillo; Jens Lechtenbörger

Successful data warehouse (DW) design needs to be based upon a requirement analysis phase in order to adequately represent the information needs of DW users. Moreover, since the DW integrates the information provided by data sources, it is also crucial to take these sources into account throughout the development process to obtain a consistent reconciliation of data sources and information needs. In this paper, we start by summarizing our approach to specify user requirements for data warehouses and to obtain a conceptual multidimensional model capturing these requirements. Then, we make use of the multidimensional normal forms to define a set of Query/View/Transformation (QVT) relations to assure that the conceptual multidimensional model obtained from user requirements agrees with the available data sources that will populate the DW. Thus, we propose a hybrid approach to develop DWs, i.e., we firstly obtain the conceptual multidimensional model of the DW from user requirements and then we verify and enforce its correctness against data sources by using a set of QVT relations based on multidimensional normal forms. Finally, we provide some snapshots of the CASE tool we have used to implement our QVT relations.
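
The paper's QVT relations operate on MOF-based models; as a loose analogy only, the sketch below performs a much simpler conformance check, verifying that every measure and dimension level of a toy conceptual model can be traced to some column of the available sources. All model and source names are hypothetical.

```python
# Toy stand-in for a conformance check between a conceptual multidimensional
# model and the source schema; a loose analogy to the paper's QVT relations,
# not their actual implementation.  All model and source names are hypothetical.

conceptual_model = {
    "measures": ["sales_amount", "discount"],
    "levels":   ["product", "month", "customer_segment"],
}

source_columns = {
    "orders":    ["order_id", "product", "order_date", "sales_amount"],
    "customers": ["customer_id", "customer_segment"],
}

available = {col for cols in source_columns.values() for col in cols}

def check(model, available_cols):
    """Report conceptual elements that no source column can populate."""
    return [e for kind in ("measures", "levels")
              for e in model[kind] if e not in available_cols]

print(check(conceptual_model, available))
# ['discount', 'month'] -- 'month' might still be derivable from order_date,
# which is exactly the kind of reconciliation the approach makes explicit.
```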


Data and Knowledge Engineering | 2006

Schema versioning in data warehouses: enabling cross-version querying via schema augmentation

Matteo Golfarelli; Jens Lechtenbörger; Stefano Rizzi; Gottfried Vossen

As several mature implementations of data warehousing systems are fully operational, a crucial role in preserving their up-to-dateness is played by the ability to manage the changes that the data warehouse (DW) schema undergoes over time in response to evolving business requirements. In this paper we propose an approach to schema versioning in DWs, where the designer may decide to undertake some actions on old data aimed at increasing the flexibility in formulating cross-version queries, i.e., queries spanning multiple schema versions. First, we introduce a representation of DW schemata as graphs of simple functional dependencies, and discuss its properties. Then, after defining an algebra of schema graph modification operations aimed at creating new schema versions, we discuss how augmented schemata can be introduced to increase flexibility in cross-version querying. Next, we show how a history of versions for DW schemata is managed and discuss the relationship between the temporal horizon spanned by a query and the schema on which it can consistently be formulated.
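
The effect of an augmented schema on cross-version querying can be illustrated with plain SQL (this hand-rolled SQLite example is not the paper's graph-based algebra): data recorded under two schema versions is exposed through a single view that fills in the attribute missing from the old version, so one query spans both versions.

```python
import sqlite3

# Two hypothetical schema versions of the same fact data: version 2 added a
# 'channel' attribute.  An augmented view maps v1 data into the v2 schema so
# a single query can span both versions (a simplified take on cross-version querying).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales_v1 (day TEXT, amount INTEGER);
CREATE TABLE sales_v2 (day TEXT, amount INTEGER, channel TEXT);
INSERT INTO sales_v1 VALUES ('2003-05-01', 80);
INSERT INTO sales_v2 VALUES ('2004-05-01', 120, 'online');

CREATE VIEW sales_all AS
    SELECT day, amount, 'unknown' AS channel FROM sales_v1  -- augmentation
    UNION ALL
    SELECT day, amount, channel FROM sales_v2;
""")

# A cross-version query over the full temporal horizon:
print(con.execute("SELECT COUNT(*), SUM(amount) FROM sales_all").fetchone())
# (2, 200)
```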


International Conference on Conceptual Modeling | 2004

Schema Versioning in Data Warehouses

Matteo Golfarelli; Jens Lechtenbörger; Stefano Rizzi; Gottfried Vossen

As several mature implementations of data warehousing systems are fully operational, a crucial role in preserving their up-to-dateness is played by the ability to manage the changes that the data warehouse (DW) schema undergoes over time in response to evolving business requirements. In this paper we propose an approach to schema versioning in DWs, where the designer may decide to undertake some actions on old data aimed at increasing the flexibility in formulating cross-version queries, i.e., queries spanning multiple schema versions. After introducing an algebra of DW schema operations, we define a history of versions for data warehouse schemata and discuss the relationship between the temporal horizon spanned by a query and the schema on which it can consistently be formulated.


ACM Transactions on Database Systems | 2003

On the computation of relational view complements

Jens Lechtenbörger; Gottfried Vossen

Views as a means to describe parts of a given data collection play an important role in many database applications. In dynamic environments where data is updated, not only information provided by views, but also information provided by data sources yet missing from views turns out to be relevant: Previously, this missing information has been characterized in terms of view complements; recently, it has been shown that view complements can be exploited in the context of data warehouses to guarantee desirable warehouse properties such as independence and self-maintainability. As the complete source information is a trivial complement for any view, a natural interest for small or even minimal complements arises. However, the computation of minimal complements is still not very well understood. In this article, it is shown how to compute reasonably small (and in special cases even minimal) complements for a large class of relational views.
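
For intuition, the simplest case is a projection view: keeping the key together with the projected-away attributes yields a complement, since the original relation can be reconstructed by joining view and complement. The toy relation and attribute split below are hypothetical and only illustrate this special case, not the article's general construction.

```python
# Toy illustration of a view and a (small) complement for a projection view.
# The relation, its key, and the attribute split are all hypothetical.

employees = [            # relation R(emp_id, name, salary); key = emp_id
    {"emp_id": 1, "name": "Ada", "salary": 70},
    {"emp_id": 2, "name": "Bob", "salary": 60},
]

view       = [{"emp_id": r["emp_id"], "name":   r["name"]}   for r in employees]
complement = [{"emp_id": r["emp_id"], "salary": r["salary"]} for r in employees]

# Reconstruct R by joining view and complement on the key: no information
# about R is lost, which is exactly what makes `complement` a complement.
by_key = {c["emp_id"]: c for c in complement}
reconstructed = [{**v, **by_key[v["emp_id"]]} for v in view]

assert sorted(reconstructed, key=lambda r: r["emp_id"]) == employees
```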


Symposium on Principles of Database Systems | 2003

The impact of the constant complement approach towards view updating

Jens Lechtenbörger

Views play an important role as a means to structure information with respect to specific users' needs. While read access through views is easy to handle, update requests through views are difficult in the sense that they have to be translated into appropriate updates on database relations. In this paper the constant complement translator approach towards view updating proposed by Bancilhon and Spyratos is revisited within the realm of SQL databases, and a novel characterization is established showing that constant complement translators exist precisely if users have a chance to undo all effects of their view updates using further view updates. Based on this characterization, view updates with and without constant complement translators are presented. As it turns out that users cannot fully understand updates on views violating the constant complement principle, the application of this principle in the context of external schema design is discussed.
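
A rough sketch of the constant complement idea, using a hypothetical projection view: a view update is accepted only if it can be translated into a base update that leaves the chosen complement untouched, and such an update can then be undone through the view again.

```python
# Hypothetical illustration of the constant complement principle for a
# projection view over R(emp_id, name, dept): the view exposes (emp_id, name),
# the complement is (emp_id, dept).  A view update is translated so that the
# complement stays constant; it can therefore be undone via another view update.

base = {1: {"name": "Ada", "dept": "IT"}}           # R keyed by emp_id

def view(r):        return {k: v["name"] for k, v in r.items()}
def complement(r):  return {k: v["dept"] for k, v in r.items()}

def update_name(r, emp_id, new_name):
    """Translate a view update into a base update keeping the complement constant."""
    updated = {k: dict(v) for k, v in r.items()}
    updated[emp_id]["name"] = new_name
    assert complement(updated) == complement(r)     # complement untouched
    return updated

before = complement(base)
base2  = update_name(base, 1, "Grace")              # view update ...
base3  = update_name(base2, 1, "Ada")               # ... undone through the view
assert base3 == base and complement(base2) == before
```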


Web Intelligence | 2011

In-memory Databases in Business Information Systems

Peter Loos; Jens Lechtenbörger; Gottfried Vossen; Alexander Zeier; Jens H. Krüger; Jürgen Müller; Wolfgang Lehner; Donald Kossmann; Benjamin Fabian; Oliver Günther; Robert Winter

In-memory databases are developed to keep the entire data in main memory. Compared to traditional database systems, read access is now much faster since no I/O access to a hard drive is required. In terms of write access, mechanisms are available which provide data persistence and thus secure transactions. In-memory databases have been available for a while and have proven to be suitable for particular use cases. With increasing storage density of DRAM modules, hardware systems capable of storing very large amounts of data have become affordable. In this context the question arises whether in-memory databases are suitable for business information system applications. Hasso Plattner, who developed the HANA in-memory database, is a trailblazer for this approach. He sees a lot of potential for novel concepts concerning the development of business information systems. One example is to conduct transactions and analytics in parallel and on the same database, i.e. a division into operational database systems and data warehouse systems is no longer necessary (Plattner and Zeier 2011). However, there are also voices against this approach. Larry Ellison described the idea of business information systems based on in-memory databases as “wacko,” without actually making a case for his statement (cf. Bube 2010). Stonebraker (2011) sees a future for in-memory databases for business information systems but considers the division of OLTP and OLAP applications as reasonable. Therefore, this discussion deals with the question of whether in-memory databases as a basic data management technology can sustainably influence the conception and development of business information systems or will remain a niche application. The contributors were invited to address the following research questions (among others): What are the potentials of in-memory databases for business information systems? What are the consequences for OLTP and OLAP applications? Will there be novel application concepts for business information systems?

The following researchers accepted the invitation (in alphabetic order):
Dr. Benjamin Fabian and Prof. Dr. Oliver Günther, Humboldt-Universität zu Berlin
Prof. Dr. Donald Kossmann, ETH Zürich
Dr. Jens Lechtenbörger and Prof. Dr. Gottfried Vossen, Münster University
Prof. Dr. Wolfgang Lehner, TU Dresden
Prof. Dr. Robert Winter, St. Gallen University
Dr. Alexander Zeier with Jens Krüger and Jürgen Müller, Potsdam University

Lechtenbörger and Vossen discuss the development and the state of the art of in-memory and column-store technology. In their evaluation they stress the potentials of in-memory technology for energy management (cf. Loos et al. 2011) and Cloud Computing. Zeier et al. argue that the main advantage of modern business information systems is their ability to integrate transactional and analytical processing. They see a general trend towards this mixed processing mode (referred to as OLXP). In-memory technology supports this integration and will render the architectural separation of transactional systems and management information systems unnecessary in the future. The new database technology also greatly facilitates the integration of simulation and optimization techniques into business information systems. Lehner assumes that the revolutionary development of system technology will have a great impact on future structuring, modeling, and programming techniques for business information systems. One consequence will be a general shift from control-flow-driven to data-flow-driven architectures. It is also likely that the requirement for ubiquitously available data will be abandoned and a “need-to-know” principle will establish itself in certain areas. Kossmann identifies two phases in which in-memory technology will influence business information systems. The first phase is a simplification phase which is caused by a separation of data and application layers of information systems. In a second phase, however, complexity will increase since the optimization of memory hierarchies, such as the interplay between memory and cache, will also have consequences for application developers. Fabian and Günther stress that in-memory databases have already proven
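
Since Lechtenbörger and Vossen's contribution revolves around in-memory column-store technology, the following hypothetical micro-benchmark contrasts a row-oriented with a column-oriented in-memory layout for an OLAP-style aggregation. It only illustrates why columnar layouts favour analytical scans and says nothing about actual engines such as HANA.

```python
import random, time

# Hypothetical micro-benchmark: the same data held row-wise and column-wise
# in main memory; an OLAP-style aggregation touches only one attribute, which
# favours the columnar layout (illustration only, not a statement about HANA).
N = 1_000_000
rows    = [(i, random.random(), "EU" if i % 2 else "US") for i in range(N)]
amounts = [r[1] for r in rows]          # the same attribute stored as a column

t0 = time.perf_counter()
row_sum = sum(r[1] for r in rows)       # row layout: touch every record
t1 = time.perf_counter()
col_sum = sum(amounts)                  # column layout: scan one dense array
t2 = time.perf_counter()

print(f"row layout: {t1 - t0:.3f}s, column layout: {t2 - t1:.3f}s")
assert abs(row_sum - col_sum) < 1e-6
```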


Very Large Data Bases | 2001

Monotonic complements for independent data warehouses

Dominique Laurent; Jens Lechtenbörger; Nicolas Spyratos; Gottfried Vossen

Views over databases have regained attention in the context of data warehouses, which are seen as materialized views. In this setting, efficient view maintenance is an important issue, for which the notion of self-maintainability has been identified as desirable. In this paper, we extend the concept of self-maintainability to (query and update) independence within a formal framework, where independence with respect to arbitrary given sets of queries and updates over the sources can be guaranteed. To this end we establish an intuitively appealing connection between warehouse independence and view complements. Moreover, we study special kinds of complements, namely monotonic complements, and show how to compute minimal ones in the presence of keys and foreign keys in the underlying databases. Taking advantage of these complements, an algorithmic approach is proposed for the specification of independent warehouses with respect to given sets of queries and updates.
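
To illustrate what self-maintainability buys (hypothetical relations, not the paper's formal framework): a warehouse that materializes a join view and additionally stores a small complement-like auxiliary relation can propagate a source insertion without querying the other source.

```python
# Hypothetical sketch of self-maintainability: the warehouse materializes the
# join of orders and customers and additionally keeps a small auxiliary copy of
# customer keys and names (a complement-like structure).  An insertion into
# `orders` can then be propagated without querying the customer source again.

customers_aux = {7: "Ada"}                       # auxiliary data kept at the warehouse
materialized  = [("o1", 7, "Ada")]               # join view: (order_id, cust_id, name)

def on_order_insert(order_id, cust_id):
    """Maintain the join view locally; no round trip to the customer source."""
    name = customers_aux.get(cust_id)
    if name is not None:                         # join partner known locally
        materialized.append((order_id, cust_id, name))

on_order_insert("o2", 7)
print(materialized)   # [('o1', 7, 'Ada'), ('o2', 7, 'Ada')]
```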

Collaboration


Dive into Jens Lechtenbörger's collaborations.

Top Co-Authors

Benjamin Fabian

Humboldt University of Berlin
