
Publication


Featured research published by Ilia Petrov.


Enterprise Distributed Object Computing | 2013

Towards Service-Oriented Enterprise Architectures for Big Data Applications in the Cloud

Alfred Zimmermann; Michael Pretz; Gertrud Zimmermann; Donald Firesmith; Ilia Petrov; Eman El-Sheikh

Applications with Service-oriented Enterprise Architectures in the Cloud are emerging and will shape future trends in technology and communication. The development of such applications integrates Enterprise Architecture and Management with Architectures for Services & Cloud Computing, Web Services, Semantics and Knowledge-based Systems, and Big Data Management, among other Architecture Frameworks and Software Engineering Methods. In this work-in-progress research, we explore Service-oriented Enterprise Architectures and application systems in the context of Big Data applications in cloud settings. Using a Big Data scenario, we investigate the integration of Services and Cloud Computing architectures with new capabilities of Enterprise Architectures and Management. The underlying architecture reference model can be used to support semantic analysis and program comprehension of service-oriented Big Data applications. Enterprise Services Computing is the current trend for powerful large-scale information systems, which increasingly converge with Cloud Computing environments. In this paper we combine architectures for services with cloud computing. We propose a new integration model for service-oriented Enterprise Architectures on the basis of ESARC (Enterprise Services Architecture Reference Cube), our previously developed service-oriented enterprise architecture classification framework, combined with MFESA (Method Framework for Engineering System Architectures) for the design of service-oriented enterprise architectures and the systematic development, diagnostics and optimization of architecture artifacts of service-oriented, cloud-based enterprise systems for Big Data applications.


From Active Data Management to Event-Based Systems and More | 2010

Aspects of data-intensive cloud computing

Sebastian Frischbier; Ilia Petrov

The concept of Cloud Computing is by now at the peak of public attention and adoption. Driven by several economic and technological enablers, Cloud Computing is going to change the way we design, maintain and optimise large-scale data-intensive software systems in the future. Moving large-scale, data-intensive systems into the Cloud may not always be possible, but would solve many of today's typical problems. In this paper we focus on the opportunities and restrictions of current Cloud solutions regarding the data model of such software systems. We identify the technological issues that come along with this new paradigm and discuss the requirements Cloud solutions must meet in order to provide a meaningful alternative to on-premise configurations.


Data Management on New Hardware | 2012

Making cost-based query optimization asymmetry-aware

Daniel Bausch; Ilia Petrov; Alejandro P. Buchmann

The architecture and algorithms of database systems have been built around the properties of existing hardware technologies. Many such elementary design assumptions are 20--30 years old. Over the last five years we have witnessed multiple new I/O technologies (e.g. Flash SSDs, NV-Memories) that have the potential of changing these assumptions. Some of the key technological differences from traditional spinning-disk storage are: (i) asymmetric read/write performance; (ii) low latencies; (iii) fast random reads; (iv) endurance issues. Cost functions used by traditional database query optimizers are directly influenced by these properties. Most cost functions estimate the cost of algorithms based on metrics such as sequential and random I/O costs, besides CPU and memory consumption. They do not account for the asymmetry between fast random reads and inferior random write performance, which represents a significant mismatch. In this paper we present a new asymmetry-aware cost model for Flash SSDs with adapted cost functions for algorithms such as external sort, hash-join, sequential scan, index scan, etc. It has been implemented in PostgreSQL and tested with TPC-H. Additionally we describe a tool that automatically finds good settings for the base coefficients of cost models. After tuning the configuration of both the original and the asymmetry-aware cost model with that tool, the optimizer with the asymmetry-aware cost model selects faster execution plans for 14 out of the 22 TPC-H queries (the rest being the same or negligibly worse). We achieve an overall performance improvement of 48% on SSD.
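The core idea can be illustrated with a toy asymmetry-aware cost function (a hypothetical sketch, not the paper's actual PostgreSQL cost model; all coefficient values are made up for illustration): reads and writes are priced separately, so a write-heavy plan can lose to a read-heavy one on Flash.

```python
# Illustrative, asymmetric per-page I/O coefficients (made-up values):
SEQ_READ_COST = 1.0    # sequential page read (baseline)
RAND_READ_COST = 1.1   # random reads are nearly as cheap on Flash
SEQ_WRITE_COST = 2.5   # writes are more expensive ...
RAND_WRITE_COST = 6.0  # ... random writes especially so

def io_cost(seq_reads, rand_reads, seq_writes, rand_writes):
    """Estimated I/O cost of a plan operator under an asymmetric device model."""
    return (seq_reads * SEQ_READ_COST + rand_reads * RAND_READ_COST
            + seq_writes * SEQ_WRITE_COST + rand_writes * RAND_WRITE_COST)

# A spilling external sort is write-heavy; with asymmetric coefficients the
# optimizer may now prefer a read-heavy alternative such as an index scan,
# even though the index scan touches more pages in total.
sort_cost = io_cost(seq_reads=1000, rand_reads=0, seq_writes=1000, rand_writes=0)
index_scan_cost = io_cost(seq_reads=0, rand_reads=2500, seq_writes=0, rand_writes=0)
```

Under a symmetric disk model (all coefficients equal) the sort would win here; the asymmetric coefficients flip the decision.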


Very Large Data Bases | 2013

NoFTL: database systems on FTL-less flash storage

Sergey Hardock; Ilia Petrov; Robert Gottstein; Alejandro P. Buchmann

The database architecture and workhorse algorithms have been designed to compensate for hard disk properties. The I/O characteristics of Flash memories have a significant impact on database systems, and many algorithms and approaches taking advantage of them have been proposed recently. Nonetheless, at the system level Flash storage devices are still treated as HDD-compatible block devices: black boxes and fast HDD replacements. This backwards compatibility (both software and hardware) masks the native behaviour, incurs significant complexity and decreases I/O performance, making it non-robust and unpredictable. Database systems have a long tradition of operating natively on RAW storage, utilising the physical characteristics of storage media to improve performance. In this paper we demonstrate an approach called NoFTL that goes a step further. We show that allowing native Flash access and integrating parts of the FTL functionality into the database system yields a significant performance increase and a simplification of the I/O stack. We created a real-time data-driven Flash emulator and integrated it accordingly into Shore-MT. We demonstrate a performance improvement of up to 3.7× compared to Shore-MT on RAW block-device Flash storage under various TPC workloads.
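The shift NoFTL describes can be sketched in miniature (a hypothetical toy, not the paper's implementation): instead of a black-box FTL inside the device translating logical to physical pages, the DBMS owns the mapping and performs the out-of-place writes Flash requires itself.

```python
# Toy model of DBMS-managed native Flash access (illustrative only).
# Flash forbids in-place page rewrites, so every update goes to a fresh
# physical page and the DBMS-side mapping is redirected.
class NativeFlashStore:
    def __init__(self):
        self.mapping = {}     # logical page -> physical page (DBMS-managed)
        self.flash = {}       # physical pages currently holding valid data
        self.next_free = 0    # next erased physical page

    def write(self, logical, data):
        phys = self.next_free          # out-of-place write to a fresh page
        self.next_free += 1
        self.flash[phys] = data
        old = self.mapping.get(logical)
        self.mapping[logical] = phys   # redirect the logical page
        if old is not None:
            del self.flash[old]        # toy stand-in for erase / garbage collection

    def read(self, logical):
        return self.flash[self.mapping[logical]]
```

Because the DBMS sees the mapping, it can co-design address translation with its own buffer and recovery managers instead of fighting a hidden FTL.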


International Conference on Service-Oriented Computing | 2010

Event-driven services: integrating production, logistics and transportation

Alejandro P. Buchmann; H.-Chr. Pfohl; Stefan Appel; Tobias Freudenreich; Sebastian Frischbier; Ilia Petrov; Christian Zuber

Today's production processes are characterized by global supply chains, short lifecycles, and an increasing personalization of goods. To satisfy the demands for agility we must integrate production with logistics processes and with knowledge about the underlying transportation services and infrastructure. This requires continuous monitoring of and reacting to events. Service-oriented architectures have provided a platform for structuring services within and across enterprises. However, for effective monitoring and timely reaction to emerging situations it is crucial to integrate event processing and service orientation. In this position paper we show how event processing and service orientation can be combined into an effective delivery platform for the integrated coordination of the flow of goods. We show how simple events, e.g. RFID tag detections or simple sensor readings, can be composed into abstract events that are meaningful enough to invoke logistics services and improve the timeliness of responses. We propose filtering, aggregating, and on-the-fly analysis of the continuous flow of events, and making events persistent in an event warehouse for auditability and as input to future planning processes.
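The filter-and-aggregate step can be sketched as follows (an illustrative toy, not the paper's system; the event fields and the "shipment complete" event are assumptions): simple RFID readings for one shipment are filtered, and once every expected tag has been seen, an abstract event is emitted that could trigger a logistics service.

```python
# Compose simple events (RFID readings) into one abstract event (illustrative).
def aggregate_shipment(readings, shipment_id, expected_tags):
    """Return an abstract 'shipment_complete' event once all expected tags
    for the given shipment have been read; otherwise return None."""
    seen = {r["tag"] for r in readings
            if r["type"] == "rfid" and r["shipment"] == shipment_id}  # filter
    if expected_tags <= seen:                                         # aggregate
        return {"event": "shipment_complete", "shipment": shipment_id}
    return None

readings = [
    {"type": "rfid", "shipment": "S1", "tag": "T1"},
    {"type": "temp", "shipment": "S1", "value": 7.2},   # ignored by the filter
    {"type": "rfid", "shipment": "S1", "tag": "T2"},
]
evt = aggregate_shipment(readings, "S1", {"T1", "T2"})
```

In a real event-driven platform this aggregation would run continuously over a stream, with the raw events also persisted to an event warehouse as the abstract describes.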


International Conference on Conceptual Modeling | 2004

iRM: An OMG MOF Based Repository System with Querying Capabilities

Ilia Petrov; Stefan Jablonski; Marc Holze; Gabor Nemes; Marcus Schneider

In this work we present iRM, an OMG MOF-compliant repository system that acts as a custom-defined application or system catalogue. iRM enforces structural integrity using a novel approach and provides declarative querying support. iRM finds use in evolving data-intensive applications and in fields where the integration of heterogeneous models is needed.


British National Conference on Databases | 2013

Append storage in multi-version databases on flash

Robert Gottstein; Ilia Petrov; Alejandro P. Buchmann

Append/Log-based Storage and Multi-Version Database Management Systems (MV-DBMS) are gaining significant importance on new storage hardware technologies such as Flash and Non-Volatile Memories. Any modification of a data item in an MV-DBMS results in the creation of a new version. Traditional implementations physically stamp old versions as invalidated, causing in-place updates that result in random writes and ultimately in mixed loads, all of which are suboptimal for new storage technologies. Log-/Append-based Storage Managers (LbSM) insert new or modified data at the logical end of log-organised storage, converting in-place updates into small sequential appends. We claim that the combination of multi-versioning and append storage effectively addresses the characteristics of modern storage technologies. We explore to what extent multi-versioning approaches such as Snapshot Isolation (SI) can benefit from append-based storage, and how a Flash-optimised approach called SIAS (Snapshot Isolation Append Storage) can improve performance. While traditional LbSM use coarse-grain page append granularity, SIAS performs appends at tuple-version granularity and manages versions as singly linked lists, thus avoiding in-place invalidations. Our experimental results instrumenting an SSD with TPC-C generated OLTP load patterns show that: a) traditional LbSM approaches are up to 73% faster than their in-place update counterparts; b) SIAS tuple-version granularity append is up to 2.99x faster (IOPS and runtime) than in-place update storage managers; c) SIAS reduces the write overhead by up to 52 times; d) in SIAS, using exclusive append regions per relation is up to 5% faster than using one append region for all relations; e) SIAS I/O performance scales with growing parallelism, whereas traditional approaches reach early saturation.
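A minimal sketch of the append-only, linked-version idea (hypothetical code in the spirit of SIAS; the names and in-memory layout are illustrative, not the paper's data structures): each new tuple version is appended at the logical end of storage and linked to its predecessor, so old versions are never invalidated in place.

```python
# Append-only tuple versioning with per-tuple version chains (illustrative).
log = []    # append-only storage: list of version records
head = {}   # tuple id -> index of newest version in the log

def append_version(tid, value):
    """Append a new version of tuple `tid`, linking it to the previous one.
    This is a sequential append; no existing record is modified."""
    rec = {"tid": tid, "value": value, "prev": head.get(tid)}
    log.append(rec)
    head[tid] = len(log) - 1

def versions(tid):
    """Walk the version chain of `tid`, newest first."""
    i = head.get(tid)
    while i is not None:
        yield log[i]["value"]
        i = log[i]["prev"]

append_version("t1", "v1")
append_version("t1", "v2")   # supersedes v1 without touching it in place
append_version("t2", "a")
```

Old versions stay physically untouched until garbage collection, which is exactly what converts the random-write invalidation traffic into sequential appends on Flash.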


Information Integration and Web-based Applications & Services | 2008

Architecture of OMG MOF-based repository systems

Ilia Petrov; Alejandro P. Buchmann

Metadata repository systems store metadata in the form of models and meta-models. In this paper we introduce a general architecture of a MOF repository system and describe its modules. In addition, we examine the architectures of several existing MOF repositories such as MDR, EMF, dMOF and iRM, and illustrate how these relate to the proposed general architecture.


TPC Technology Conference | 2011

SI-CV: snapshot isolation with co-located versions

Robert Gottstein; Ilia Petrov; Alejandro P. Buchmann

Snapshot Isolation is an established concurrency control algorithm, where each transaction executes against its own version/snapshot of the database. Version management may produce unnecessary random writes. Compared to magnetic disks, Flash storage offers fundamentally different I/O characteristics: excellent random read performance, low random write performance, and strong read/write asymmetry. The performance of snapshot isolation can therefore be improved by minimizing random writes. We propose a variant of snapshot isolation (called SI-CV) that co-locates tuple versions created by a transaction in adjacent blocks and therefore minimizes random writes at the cost of random reads. Relative to the original algorithm, its performance under heavy transactional loads in TPC-C scenarios on Flash SSD storage increases significantly. At high loads that bring the original system into overload, the transactional throughput of SI-CV increases further, while maintaining response times that are multiple factors lower.
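The co-location idea can be sketched with a toy writer (hypothetical code, not the paper's storage manager; class and parameter names are invented): versions created by the same transaction are buffered and flushed together, turning scattered random writes into one contiguous write per block.

```python
# Co-locate each transaction's tuple versions into adjacent blocks (illustrative).
class CoLocatingWriter:
    def __init__(self, block_size=4):
        self.block_size = block_size
        self.buffers = {}   # transaction id -> pending versions
        self.blocks = []    # storage: each block holds one transaction's versions

    def write(self, txn, version):
        """Buffer a version; flush once a full block's worth has accumulated."""
        self.buffers.setdefault(txn, []).append(version)
        if len(self.buffers[txn]) == self.block_size:
            self.flush(txn)

    def flush(self, txn):
        """Write all of `txn`'s pending versions as one contiguous block
        (e.g. at commit), instead of one random write per version."""
        pending = self.buffers.pop(txn, [])
        if pending:
            self.blocks.append(pending)
```

The trade-off in the abstract is visible here: reading one tuple's version history may now touch blocks from several transactions (more random reads), in exchange for strictly sequential writes.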


Database and Expert Systems Applications | 2011

On the Performance of Database Query Processing Algorithms on Flash Solid State Disks

Daniel Bausch; Ilia Petrov; Alejandro P. Buchmann

Flash Solid State Disks induce a drastic change in storage technology that impacts database systems. Flash memories exhibit low latency (especially for small block sizes), very high random read and low random write throughput, and significant asymmetry between read and write performance. These properties influence the performance of database join algorithms and ultimately the cost assumptions in the query optimizer. In this paper we examine the performance of the different join algorithms available in PostgreSQL on SSDs and magnetic drives. We observe that (a) point queries exhibit the best performance improvement, of up to fifty times; (b) range queries benefit less from the properties of SSDs; (c) join algorithms behave differently depending on how well they match the properties of solid state disks or magnetic drives.

Collaboration


Dive into Ilia Petrov's collaborations.

Top Co-Authors

Alejandro P. Buchmann (Technische Universität Darmstadt)
Stefan Jablonski (University of Erlangen-Nuremberg)
Robert Gottstein (Technische Universität Darmstadt)
Christian Meiler (University of Erlangen-Nuremberg)
Udo Mayer (University of Erlangen-Nuremberg)
Sergey Hardock (Technische Universität Darmstadt)
Pablo Ezequiel Guerrero (Technische Universität Darmstadt)
Daniel Bausch (Technische Universität Darmstadt)