Publication


Featured research published by Robert Wrembel.


ACM Symposium on Applied Computing | 2004

Creation and management of versions in multiversion data warehouse

Bartosz Bȩbel; Johann Eder; Christian Koncilia; Tadeusz Morzy; Robert Wrembel

A data warehouse (DW) provides information for analytical processing, decision making, and data mining tools. On the one hand, the structure and content of a data warehouse reflect the real world, i.e., data stored in a DW come from real production systems. On the other hand, a DW and its tools may be used for predicting trends and simulating virtual business scenarios. This activity is often called what-if analysis. Traditional DW systems have a static structure of their schemas and of the relationships between data, and are therefore unable to support any dynamics in their structure and content. For these purposes, multiversion data warehouses seem very promising. In this paper we present a concept and an ongoing implementation of a multiversion data warehouse that is capable of handling changes in the structure of its schema as well as simulating alternative business scenarios.


Archive | 2006

Data Warehouses and OLAP: Concepts, Architectures and Solutions

Robert Wrembel; Christian Koncilia

Covering a wide range of technical, technological, and research issues, this text provides theoretical frameworks, presents challenges and their possible solutions, and examines the latest empirical research findings in the area of data warehousing.


Data Warehousing and OLAP | 2004

On querying versions of multiversion data warehouse

Tadeusz Morzy; Robert Wrembel

A data warehouse (DW) is fed with data that come from external data sources, which are production systems. External data sources, which are usually autonomous, often change not only their content but also their structure. The evolution of external data sources has to be reflected in a DW that uses them. Traditional DW systems offer limited support for handling dynamics in their structure and content. A promising approach to handling changes in DW structure and content is based on a multiversion data warehouse. In such a DW, each DW version describes the schema and data at a certain period of time or for a given business scenario created for simulation purposes. In order to appropriately analyze multiversion data, an extension to the traditional SQL language is required. In this paper we propose an approach to querying a multiversion DW. To this end, we extended SQL and built a multiversion query language interface with functionality that allows: (1) expressing queries that address several DW versions and (2) presenting their results annotated with metadata information.
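The query interface described above can be pictured with a small, non-authoritative Python model: a single predicate is evaluated against several DW versions, and every result row is annotated with the version metadata it came from. All class, field, and version names here are illustrative assumptions, not the authors' actual query language or API.

```python
# Toy model of cross-version querying in a multiversion data warehouse.
# Names (DWVersion, MultiversionDW, "R2004.1", ...) are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DWVersion:
    name: str                                  # e.g. a real or simulated version id
    valid_from: str                            # start of the version's validity period
    facts: list = field(default_factory=list)  # simplified fact rows

class MultiversionDW:
    def __init__(self):
        self.versions = {}

    def add_version(self, v):
        self.versions[v.name] = v

    def query(self, predicate, version_names):
        """Evaluate `predicate` in every requested version; tag each result
        row with version metadata so results stay distinguishable."""
        results = []
        for name in version_names:
            v = self.versions[name]
            for row in v.facts:
                if predicate(row):
                    results.append({**row,
                                    "_dw_version": name,
                                    "_valid_from": v.valid_from})
        return results

mvdw = MultiversionDW()
mvdw.add_version(DWVersion("R2004.1", "2004-01-01", [{"region": "A", "sales": 10}]))
mvdw.add_version(DWVersion("R2004.2", "2004-07-01", [{"region": "A", "sales": 12}]))
rows = mvdw.query(lambda r: r["region"] == "A", ["R2004.1", "R2004.2"])
# each row now carries _dw_version / _valid_from annotations
```

The metadata columns play the role of the annotations mentioned in point (2) of the abstract: without them, rows from different versions would be indistinguishable in a combined result set.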


On the Move to Meaningful Internet Systems | 2007

Metadata management in a multiversion data warehouse

Robert Wrembel; Bartosz Bębel

A data warehouse (DW) is supplied with data that come from external data sources (EDSs), which are production systems. EDSs, which are usually autonomous, often change not only their contents but also their structures. The evolution of external data sources has to be reflected in a DW that uses them. Traditional DW systems offer limited support for handling dynamics in their structures and contents. A promising approach to this problem is based on a multiversion data warehouse (MVDW). In such a DW, every DW version includes a schema version and data consistent with that schema version. A DW version may represent a real state at a certain period of time, after the evolution of EDSs, changed user requirements, or the evolution of the real world. A DW version may also represent a given business scenario created for simulation purposes. In order to appropriately synchronize an MVDW's content and structure with EDSs, as well as to analyze multiversion data, an MVDW has to manage metadata. Metadata describing an MVDW are much more complex than in traditional DWs. In our approach and prototype MVDW system, a metaschema provides data structures that support: (1) monitoring EDSs with respect to content and structural changes, (2) automatically generating processes that monitor EDSs, (3) applying the discovered EDS changes to a selected DW version, (4) describing the structure of every DW version, (5) querying multiple DW versions of interest at the same time, and (6) presenting and comparing multiversion query results.


Information Systems | 2009

RLH: Bitmap compression technique based on run-length and Huffman encoding

Michal Stabno; Robert Wrembel

In this paper we propose a technique for compressing bitmap indexes for application in data warehouses. This technique, called run-length Huffman (RLH), is based on run-length encoding and Huffman encoding. Additionally, we present a variant of RLH, called RLH-N, in which a bitmap is divided into N-bit words that are compressed by RLH. RLH and RLH-N were implemented and experimentally compared to the well-known word-aligned hybrid (WAH) bitmap compression technique, which has been reported to provide the shortest query execution time. The experiments discussed in this paper show that: (1) RLH-compressed bitmaps are smaller than corresponding WAH-compressed bitmaps, regardless of the cardinality of an indexed attribute, (2) RLH-N-compressed bitmaps are smaller than corresponding WAH-compressed bitmaps for a certain range of cardinalities of an indexed attribute, (3) RLH- and RLH-N-compressed bitmaps offer shorter query response times than WAH-compressed bitmaps for a certain range of cardinalities of an indexed attribute, and (4) RLH-N assures shorter update times of compressed bitmaps than RLH.
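The two building blocks named above, run-length encoding followed by Huffman coding of the run lengths, can be sketched as follows. This is a minimal illustration of the general idea only, not the authors' implementation; all function names are assumptions, and the "compressed" output is kept as a Python bit string for readability.

```python
# Sketch of the RLH idea: run-length encode a bitmap, then Huffman-code
# the run lengths so that frequent run lengths get the shortest codes.
import heapq
from collections import Counter

def run_lengths(bits):
    """Turn a bit string like '000110000' into its list of run lengths."""
    runs, current, count = [], bits[0], 0
    for b in bits:
        if b == current:
            count += 1
        else:
            runs.append(count)
            current, count = b, 1
    runs.append(count)
    return runs

def huffman_codes(symbols):
    """Build a prefix-free Huffman code (symbol -> bit string) from frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                 # degenerate case: only one run length
        return {next(iter(freq)): "0"}
    # heap entries carry a unique tiebreaker so dicts are never compared
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

def rlh_compress(bits):
    """Return (value of the first bit, code table, encoded run lengths)."""
    runs = run_lengths(bits)
    codes = huffman_codes(runs)
    payload = "".join(codes[r] for r in runs)
    return bits[0], codes, payload

first, codes, payload = rlh_compress("0000001100000011110000")
# 22 input bits encode into an 8-bit payload (plus the code table)
```

Skewed bitmaps, such as those produced by bitmap indexes on low-frequency attribute values, have highly repetitive run lengths, which is exactly the case where the Huffman stage pays off.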


Database and Expert Systems Applications | 2010

GPU-WAH: applying GPUs to compressing bitmap indexes with word aligned hybrid

Witold Andrzejewski; Robert Wrembel

Bitmap indexes are one of the basic data structures applied to query optimization in data warehouses. The size of a bitmap index strongly depends on the domain of the indexed attribute, and for wide domains it is too large to be processed efficiently. For this reason, various techniques for compressing bitmap indexes have been proposed. Typically, compressed indexes have to be decompressed before being used by a query optimizer, which incurs CPU overhead and deteriorates system performance. We therefore propose to use the additional processing power of the GPUs of modern graphics cards for compressing and decompressing bitmap indexes. In this paper we present a modification of the well-known WAH compression technique so that it can be executed and parallelized on modern GPUs.
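For readers unfamiliar with the baseline, here is a simplified, CPU-only sketch of WAH-style encoding (the technique the paper ports to GPUs): the bitmap is split into 31-bit groups, runs of homogeneous groups collapse into counted fill words, and mixed groups are stored as literal words. The representation below (tagged Python tuples instead of packed 32-bit machine words) is an illustrative assumption, not the GPU-WAH code.

```python
# Simplified word-aligned hybrid (WAH) encoding sketch.
# Real WAH packs everything into 32-bit words; we keep tagged tuples instead.
WORD = 32
GROUP = WORD - 1  # 31 payload bits per word; the top bit flags the word type

def wah_encode(bits):
    """Encode a bit string into a list of words:
    ('F', bit, n)  = fill word: n consecutive 31-bit groups of `bit`
    ('L', group)   = literal word: one mixed 31-bit group."""
    # pad to a multiple of 31 bits (a real index would track the true length)
    if len(bits) % GROUP:
        bits = bits + "0" * (GROUP - len(bits) % GROUP)
    words = []
    for i in range(0, len(bits), GROUP):
        group = bits[i:i + GROUP]
        if group == "0" * GROUP or group == "1" * GROUP:
            bit = group[0]
            if words and words[-1][0] == "F" and words[-1][1] == bit:
                # extend the previous fill word instead of emitting a new one
                words[-1] = ("F", bit, words[-1][2] + 1)
            else:
                words.append(("F", bit, 1))
        else:
            words.append(("L", group))
    return words
```

The run-dependent fill words are what makes naive parallelization hard: the output position of each word depends on all preceding groups, which is the data dependency a GPU variant has to break up (e.g. by per-block scans).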


International Journal of Data Warehousing and Mining | 2009

A Survey of Managing the Evolution of Data Warehouses

Robert Wrembel

Methods of designing a data warehouse (DW) usually assume that its structure is static. In practice, however, a DW structure changes, among other reasons, as a result of the evolution of external data sources and changes in the real world represented in the DW. The most advanced research approaches to this problem are based on temporal extensions and versioning techniques. This article surveys challenges in designing, building, and managing data warehouses whose structure and content evolve in time. The survey is based on the so-called Multiversion Data Warehouse (MVDW). In detail, this article presents the following issues: the concept of the MVDW, a language for querying the MVDW, a framework for detecting changes in data sources, a structure for sharing data in the MVDW, and index structures for indexing data in the MVDW.


International Journal of Data Warehousing and Mining | 2013

On-Demand ELT Architecture for Right-Time BI: Extending the Vision

Florian Waas; Robert Wrembel; Tobias Freudenreich; Maik Thiele; Christian Koncilia; Pedro Furtado

In a typical BI infrastructure, data extracted from operational data sources is transformed, cleansed, and loaded into a data warehouse by a periodic ETL process, typically executed on a nightly basis, i.e., a full day's worth of data is processed and loaded during off-hours. However, it is desirable to have fresher data for business insights in near real time. To this end, the authors propose to leverage a data warehouse's capability to directly import raw, unprocessed records and to defer the transformation and data cleaning until needed by pending reports. At that time, the database's own processing mechanisms can be deployed to process the data on demand. Event-processing capabilities are seamlessly woven into the proposed architecture. Besides outlining an overall architecture, the authors also developed a roadmap for implementing a complete prototype using conventional database technology in the form of hierarchical materialized views.
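The deferred-transformation idea can be caricatured in a few lines of Python: raw records are loaded untouched, and cleansing runs only when a report first reads the data, with the result cached in the spirit of a materialized view. The class and method names are invented for illustration and do not reflect the authors' prototype.

```python
# Toy sketch of on-demand ELT: load raw, transform lazily on first read.
class OnDemandTable:
    def __init__(self, transform):
        self.raw = []           # unprocessed records, ingested immediately
        self._clean = None      # cached transformed view, built lazily
        self.transform = transform

    def load(self, records):
        """Fast load path: append raw records, invalidate the cached view."""
        self.raw.extend(records)
        self._clean = None

    def read(self):
        """Report path: transform on demand, then serve the cached result."""
        if self._clean is None:
            self._clean = [self.transform(r) for r in self.raw]
        return self._clean

# cleansing rule: strip whitespace and parse amounts as floats
t = OnDemandTable(lambda r: {"amount": float(r["amount"].strip())})
t.load([{"amount": " 10.5 "}, {"amount": "3"}])
report = t.read()   # transformation happens here, not at load time
```

The point of the split is latency: ingestion stays cheap and continuous, while the (possibly expensive) cleansing cost is paid only for data a report actually touches.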


Extending Database Technology | 2006

Managing and Querying Versions of Multiversion Data Warehouse

Robert Wrembel; Tadeusz Morzy

A data warehouse (DW) is a database that integrates external data sources (EDSs) for the purpose of advanced data analysis. The methods of designing a DW usually assume that a DW has a static schema and static dimension structures. In practice, the schema and dimension structures often change as a result of the evolution of EDSs, changes in the real world represented in the DW, new user requirements, new versions of software being installed, and system tuning activities. Examples of various change scenarios can be found in [1,8].


Archive | 2015

Computer Science and Its Applications: 5th IFIP TC 5 International Conference, CIIA 2015, Saida, Algeria, May 20-21, 2015: Proceedings

Abdelmalek Amine; Ladjel Bellatreche; Zakaria Elberrichi; Erich J. Neuhold; Robert Wrembel

Interoperability is a qualitative property of computing infrastructures that denotes the ability of sending and receiving systems to exchange and properly interpret information objects across system boundaries. Since this property is not given by default, the interoperability problem involves the representation of meaning and has been an active research topic for approximately four decades. Database models used schemas to express semantics and implicitly aimed at achieving interoperability by providing programming independence of data storage and access. After a number of intermediate steps, such as Hypertext and XML document models, the notions of semantics and interoperability became what they have been over the last ten years in the context of the World Wide Web and, more recently, the concept of Linked Open Data. The talk will investigate the (recurring) problem of interoperability as it can be found in the massive data collections around the Big Data and Linked Open Data concepts. We investigate semantics and interoperability research from the point of view of information systems. It gives an overview of old and new interoperability techniques and points out future research directions, especially for concepts found in Linked Open Data, the Semantic Web, and Big Data.

Brain-Computer-Brain Interfaces for Sensing and Subsequent Treatment. Mohamad Sawan, Professor and Canada Research Chair, Polystim Neurotechnologies Laboratory, Polytechnique Montreal, [email protected]. Abstract: Implantable Brain-Computer-Brain Interfaces (BCIs) for diagnosis and recovery of vital neural functions are a promising alternative for studying the neural activities underlying cognitive functions and pathologies. This keynote address covers the architecture of a typical BCI intended for wireless neurorecording and neurostimulation. Massively parallel multichannel spike recording through large arrays of microelectrodes will be introduced. Attention will be paid to low-power mixed-signal circuit design optimization. Advanced signal-processing implementations such as adaptive thresholding, spike detection, data compression, and transmission will be described. The talk also covers lab-on-chip technologies intended to build biosensors, as well as wireless data links and power harvesting for implants. Tests and validation of devices (electrical, mechanical, packaging, heat, reliability) will be summarized. Case studies will include research activities dedicated to vision recovery through an implant that applies direct electrical microstimulation to present the environment as phosphenes in the visual field of the blind. We will also summarize the latest activities on locating epileptic seizures using multimodal fNIRS/EEG processing, and will show seizure onset detection and techniques to stop seizures using a bioelectronic implant.

Collaborative and Social Web Search

Collaboration


Dive into Robert Wrembel's collaborations.

Top Co-Authors


Tadeusz Morzy

Poznań University of Technology


Bartosz Bębel

Poznań University of Technology


Esteban Zimanyi

Université libre de Bruxelles


Mariusz Masewicz

Poznań University of Technology


Zbyszko Królikowski

Poznań University of Technology


Christian Koncilia

Alpen-Adria-Universität Klagenfurt


Johann Eder

Alpen-Adria-Universität Klagenfurt


Alberto Abelló

Polytechnic University of Catalonia


Besim Bilalli

Polytechnic University of Catalonia


Tomàs Aluja-Banet

Polytechnic University of Catalonia
