
Publication


Featured research published by Boštjan Slivnik.


Parallel Computing | 2005

The complexity of static data replication in data grids

Uroš Čibej; Boštjan Slivnik; Borut Robič

Data replication is a well-known technique used in distributed computing to improve access to data and/or system fault tolerance. Recently, studies of its applications to grid computing have also been initiated. In this article we describe data replication on data grids as a static optimization problem. We show that this problem is NP-hard and non-approximable. We discuss two approaches to solving it, namely integer programming and problem simplifications.
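To make the optimization problem concrete, here is a minimal sketch of static replica placement: choose k replica sites that minimise total client access cost. The brute-force search over all k-subsets is exponential in general, consistent with the NP-hardness result; the toy line topology and all names here are illustrative, not taken from the paper.

```python
from itertools import combinations

def access_cost(clients, replicas, dist):
    """Total cost when every client reads from its nearest replica."""
    return sum(min(dist[c][r] for r in replicas) for c in clients)

def best_placement(sites, clients, k, dist):
    """Exhaustively try every k-subset of sites (exponential in general)."""
    return min(combinations(sites, k),
               key=lambda reps: access_cost(clients, reps, dist))

# Toy instance: 4 sites on a line, distance = hop count |i - j|,
# every site is also a client; place 2 replicas.
sites = [0, 1, 2, 3]
dist = {i: {j: abs(i - j) for j in sites} for i in sites}
placement = best_placement(sites, sites, 2, dist)
print(placement, access_cost(sites, placement, dist))   # → (0, 2) 2
```

An integer-programming formulation would replace the exhaustive search with 0/1 placement variables and a solver, but the cost function being minimised is the same.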


Computers in Biology and Medicine | 1998

Computer simulation and spatial modelling in heart surgery

Roman Trobec; Boštjan Slivnik; Borut Gersak; Tone Gabrijelčič

In this work, three-dimensional modelling and computer simulation of heat transfer in generally shaped nonhomogeneous bodies are proposed and described. The complexity of the calculation is estimated and the potential use of high-performance parallel computers is discussed. The method is focused on applications in medicine. As an example, a numerical algorithm for the parallel computer simulation of heart cooling procedures during surgery is presented. On the basis of the simulated results, two different methods of cooling are compared.
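The core numerical step behind such simulations can be illustrated with an explicit finite-difference update for the 1-D heat equation u_t = α·u_xx, a drastically simplified stand-in for the paper's 3-D nonhomogeneous model; grid size, coefficients, and temperatures below are invented for illustration.

```python
def heat_step(u, alpha, dx, dt):
    """One explicit Euler step; interior points only, fixed boundaries."""
    r = alpha * dt / dx**2          # stability requires r <= 0.5
    return [u[0]] + [
        u[i] + r * (u[i-1] - 2*u[i] + u[i+1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

# Warm interior cooled through cold boundaries (a crude analogue of
# topical cooling during surgery).
u = [0.0] + [37.0] * 8 + [0.0]
for _ in range(100):
    u = heat_step(u, alpha=1.0, dx=1.0, dt=0.25)
```

In 3-D each grid point has six neighbours instead of two, and parallelisation partitions the grid among processors, but the per-point update has the same shape.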


Information Processing Letters | 2005

Producing the left parse during bottom-up parsing

Boštjan Slivnik; Boštjan Vilfan

Schmeiser and Barnard described a method for producing the left parse at the end of the bottom-up parsing process. We improve their method in the sense that the left parse is actually produced during the bottom-up parsing process (i.e., with considerably less delay).


International Convention on Information and Communication Technology, Electronics and Microelectronics | 2014

Analysis of elective courses selection in post-Bologna programmes

Igor Rozanc; Boštjan Slivnik

One of the most important results of the Bologna reform at the Faculty of Computer and Information Science at the University of Ljubljana is a much larger number of elective courses. The elective courses of the academic study programme were grouped into modules, while the professional study programme permits the selection of individual elective courses under minor restrictions. Both approaches fulfilled the reform requirements. However, the academic study programme is more rigid and thus easier to carry out than the professional one, but students prefer the flexibility of the latter. In this paper we analyse the actual data on elective course selection by students of both programmes. First, we examine the link between different module selections in order to optimize the distribution of elective courses into modules. This examination is possible because a number of students, i.e., the most successful ones, were allowed to select courses regardless of modules. Second, we investigate whether some commonly selected groups of elective courses of the professional study programme could be identified in order to simplify the programme and its implementation. The results show that some important improvements could be introduced.
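Identifying commonly co-selected courses, as in the second analysis above, amounts to counting which course pairs occur together across student enrolments. A minimal sketch, with invented course names and enrolment data:

```python
from collections import Counter
from itertools import combinations

# Each set is one (hypothetical) student's elective selection.
enrolments = [
    {"Compilers", "Parallel Systems", "Graphics"},
    {"Compilers", "Parallel Systems"},
    {"Graphics", "HCI"},
    {"Compilers", "Parallel Systems", "HCI"},
]

pair_counts = Counter()
for courses in enrolments:
    for pair in combinations(sorted(courses), 2):
        pair_counts[pair] += 1

# The most frequent pair suggests a candidate course grouping.
print(pair_counts.most_common(1))
```

Pairs chosen together far more often than chance would suggest are natural candidates for being offered as a fixed group.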


ACM Symposium on Applied Computing | 2013

LLLR parsing

Boštjan Slivnik

The idea of LLLR parsing is presented. An LLLR(k) parser can be constructed for any LR(k) grammar, yet it produces the left parse of the input string in linear time (with respect to the length of the derivation) without backtracking. If used as a basis for a syntax-directed translation, it triggers semantic actions using the top-down strategy, just like the canonical LL(k) parser. Hence, from a compiler writer's point of view, it acts as an LL(k) parser. The backbone of the LLLR(k) parser is the LL(k) parser, which triggers the embedded left LR(k) parser whenever an LL(k) conflict appears during parsing. Once the embedded LR(k) parser resolves the conflict, it passes control back to the backbone LL(k) parser together with the left parse of the part of the input string scanned by the embedded LR parser, and LL parsing continues. Hence, LLLR parsing is similar to LL(*) parsing except that (a) it uses LR(k) parsers instead of finite automata to resolve the LL(k) conflicts and (b) it does not need backtracking. An LLLR(k) parser is most appropriate for grammars where the LL(k)-conflicting nonterminals appear relatively close to the leaves of the derivation trees.
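The notion of a left parse can be illustrated with a tiny table-driven LL(1) parser that emits the sequence of production numbers in top-down order. LLLR parsing keeps exactly this top-down behaviour and falls back to embedded LR parsers only at LL conflicts; the grammar below is a conflict-free toy, so no fallback is needed.

```python
GRAMMAR = {0: ("S", ["a", "S", "b"]),   # production 0: S -> a S b
           1: ("S", ["c"])}             # production 1: S -> c
TABLE = {("S", "a"): 0, ("S", "c"): 1}  # LL(1) parse table

def ll_parse(tokens):
    """Parse tokens and return the left parse (production numbers used)."""
    stack, parse, pos = ["S"], [], 0
    tokens = tokens + ["$"]             # end-of-input marker
    while stack:
        top = stack.pop()
        if top == "S":                  # nonterminal: expand via the table
            rule = TABLE[(top, tokens[pos])]
            parse.append(rule)
            stack.extend(reversed(GRAMMAR[rule][1]))
        else:                           # terminal: match the input
            assert top == tokens[pos]
            pos += 1
    return parse

print(ll_parse(list("aacbb")))   # → [0, 0, 1]
```

An LLLR parser would interleave such expansions with runs of an embedded LR parser, splicing the LR parser's left parse of its substring into this output stream.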


Computer Languages, Systems & Structures | 2017

On different LL and LR parsers used in LLLR parsing

Boštjan Slivnik

As described in Slivnik (2016), LLLR parsing is a method that parses as much of its input string as possible using the backbone SLL(k) parser and uses small embedded canonical left LR(k) parsers to resolve LL conflicts. Once the LL conflict is resolved, the embedded parser produces the left parse of the substring it has just parsed and passes control back to the backbone parser, together with the information about how the backbone parser should realign its stack, since a part of the input has been read by the embedded parser. The LLLR parser produces the left parse of the input string without any backtracking and, if used for a syntax-directed translation, it evaluates semantic actions using the same top-down strategy as the canonical LL(k) parser. In this paper, a more general approach towards LLLR parsing is presented: it is described how any kind of canonical LL(k) or LA(k)LL(k′) parser can be used as the backbone parser and how different kinds of embedded canonical left LR(k) or left LA(k)LR(k′) parsers can be used for LL conflict resolution.


Software Quality Journal | 2016

Measuring the complexity of domain-specific languages developed using MDD

Boštjan Slivnik

The standard ISO/IEC 25010 (SQuaRE) defines appropriateness as one of the three components of functional suitability, the other two components being completeness and correctness. As the users of a domain-specific language (DSL) are quite often domain experts with limited programming skills, a DSL might be considered appropriate if the resulting domain-specific programs do not contain an excessive amount of non-domain-related programming elements. This paper describes a metric for measuring the appropriateness of DSLs that are developed using model-driven development (MDD), its evaluation and its use. The metric measures the depth of the deepest domain-specific command within abstract syntax trees generated by a DSL. It is aimed at being used during the development of a new DSL and for comparing different DSLs defined over the same domain. It is assumed that during MDD, the metamodel describes the domain-independent part of the DSL, while the model supplies the domain-specific part. This resembles the implementation of DSLs using existing metaprogramming tools that provide off-the-shelf implementations of programming constructs but require manual implementation of the domain-specific language elements.
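The depth idea behind the metric can be sketched as a small tree walk: find how deep the deepest domain-specific construct sits beneath non-domain scaffolding in an abstract syntax tree. The node labels and the `domain_specific` flag below are invented for illustration, not the paper's actual model.

```python
class Node:
    """A toy AST node; domain_specific marks DSL-level constructs."""
    def __init__(self, label, children=(), domain_specific=False):
        self.label = label
        self.children = list(children)
        self.domain_specific = domain_specific

def deepest_domain_depth(node, depth=0):
    """Depth of the deepest domain-specific node; -1 if none exists."""
    best = depth if node.domain_specific else -1
    for child in node.children:
        best = max(best, deepest_domain_depth(child, depth + 1))
    return best

# program -> loop -> if -> send_telegram: the domain-specific command is
# buried at depth 3 under generic programming constructs, which in this
# reading signals non-domain boilerplate around the DSL proper.
tree = Node("program", [Node("loop", [Node("if", [
    Node("send_telegram", domain_specific=True)])])])
print(deepest_domain_depth(tree))   # → 3
```

Comparing this depth across DSLs for the same domain then gives a rough, automatable proxy for appropriateness.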


ACM Symposium on Applied Computing | 2014

Linter: a tool for finding bugs and potential problems in Scala code

Matic Potočnik; Uroš Čibej; Boštjan Slivnik

Linter is a static analysis tool for Scala. To check for possible bugs, inefficient code, and coding style problems, it combines the simple pattern matching used in many similar static analysis tools for other programming languages with abstract interpretation of some built-in types such as integers and strings. Taking advantage of the Scala compiler plugin interface, it relies on the Scala compiler (a) to parse the source code and (b) to provide the abstract syntax tree with all the needed information. This paper provides an overview of Linter and its implementation. Using a case study, the performance of Linter is evaluated in terms of time consumption and code issues detected.
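The pattern-matching half of such a tool can be sketched in a few lines. This is not Linter's Scala implementation; it is a toy lint pass in the same spirit, shown on Python's standard `ast` module: walk the compiler-provided syntax tree and flag a known-suspicious pattern (comparing a value to itself).

```python
import ast

def lint(source):
    """Flag self-comparisons such as `x == x` in the given source."""
    issues = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Compare)
                and len(node.comparators) == 1
                and ast.dump(node.left) == ast.dump(node.comparators[0])):
            issues.append(f"line {node.lineno}: comparison of a value to itself")
    return issues

print(lint("if x == x:\n    pass\n"))
```

Linter layers abstract interpretation on top of such structural patterns, so it can also flag, for example, comparisons whose operands are provably always equal.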


International Conference on Numerical Analysis and Its Applications | 1996

Coarse-Grain Parallelisation of Multi-Implicit Runge-Kutta Methods

Roman Trobec; Bojan Orel; Boštjan Slivnik

A parallel implementation of a multi-implicit Runge-Kutta method (MIRK) with real eigenvalues is described. The parallel method is analysed and the algorithm is devised. For a problem with d domains, the amount of work within the s-stage MIRK method, associated with the solution of the system, is proportional to (sd)^3, in contrast to the simple implicit finite difference method (IFD), where the amount of work is proportional to d^3. However, it is shown that s-stage MIRK admits much larger time steps for the same order of error. Additionally, the proposed parallelisation transforms the system of dimension sd into s independent subsystems of dimension d. The amount of work for the sequential solution of such systems is proportional to sd^3. The described parallel algorithm enables each of the s subsystems to be solved on a separate processor; finally, the amount of work is again d^3, but the benefit of a larger time step remains. To test the theory, a comparative example of 3-D heat transfer in a human heart with 64^3 domains is shown and numerically calculated by 3-stage MIRK.
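The work estimates in the abstract can be checked with simple arithmetic: a direct solve of the coupled system of size sd costs on the order of (sd)^3 operations, decoupling yields s systems of size d at s·d^3 sequential cost, and solving them on s processors brings the wall-clock cost back to d^3. The figures below just restate those proportionalities for the paper's example sizes.

```python
s, d = 3, 64**3   # 3-stage MIRK, 64^3 spatial domains (as in the example)

coupled   = (s * d) ** 3   # direct solve of the full coupled system
separable = s * d ** 3     # sequential solve of s independent subsystems
parallel  = d ** 3         # each subsystem on its own processor

# (sd)^3 = s^3 * d^3, so decoupling alone saves a factor of s^2 ...
assert coupled == s**3 * d**3 and coupled // separable == s**2
# ... and parallelism over the s subsystems saves a further factor of s.
print(f"parallel speedup over the coupled solve: {coupled // parallel}x")
```

These are asymptotic operation counts, not measured runtimes; the larger admissible time step of MIRK comes on top of them.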


computing in cardiology conference | 1995

The model of topical heart cooling during induced hypothermic cardiac arrest in open heart surgery

Borut Gersak; Roman Trobec; Tone Gabrijelčič; Boštjan Slivnik

Presents a new method of parallel computer simulation in cardiac surgery. The heat equation is transformed into a set of difference equations which are solved numerically by the finite difference method. The authors consider the level of moderate hypothermia with an esophageal temperature of 28°C. The average initial heart temperature after cardioplegia is assumed to be 11°C. Two types of topical cooling are considered: first, the temperature of the cooling liquid (TCL) is held constant at 0.2°C with a heat exchanger; and second, the initial TCL is set to 0.2°C and no heat exchanger is used. The heart modelling method is based on a series of axial-slice images obtained by a human heart CT scan. After definition of the different substances, the images are digitised at the requested resolution and composed into a three-dimensional body. Using a parallel computer with 16 high-speed processors, the resulting calculation is approximately ten times slower than the real cooling process in nature.

Collaboration

Top co-authors of Boštjan Slivnik, all at the University of Ljubljana:

Roman Trobec
Borut Robič
Igor Rozanc
Borut Gersak
Bojan Orel
Uroš Čibej