Klemo Vladimir
University of Zagreb
Publication
Featured research published by Klemo Vladimir.
Expert Systems With Applications | 2015
Klemo Vladimir; Ivan Budiselic; Sinisa Srbljic
We present a peer-tutor recommender system for service composition. The peer-tutoring process is based on socially-intelligent computing. Expert peer tutors are identified by analysis of existing service compositions. The peer-tutoring system was implemented and evaluated through a consumer study. Results show significant improvements in terms of consumer performance and QoE. With continued development towards the Internet of Things, services are making their way from enterprise solutions to our offices and homes. This process is a major driving force in the consumerization of IT, because sustainable application development at this scale will not be possible without direct involvement and innovation from consumers themselves. In this paper, we present our work on the consumerization of service composition tools. First, we describe how consumer-facing services can be presented in a usable and intuitive way. Then, combining social computing with machine intelligence, we define a recommender system that supports consumers in sharing their knowledge and creativity in peer-tutored service composition, thus empowering consumers to create their own applications. This system recommends consumers who have the required service composition knowledge, identified by mining procedural knowledge stored in previously defined compositions. Once such a group of consumers is identified, social computing tools are used to allow them to share this knowledge with their peers. To demonstrate the effectiveness of this peer-tutored service composition model, we performed consumer satisfaction studies on our consumerized service composition tool Geppeto, which we extended with the described recommender system. Results show significant improvements in service composition in terms of performance and quality of experience.
Expert Systems With Applications | 2015
Ivan Budiselic; Klemo Vladimir; Sinisa Srbljic
We address the problem of component discovery in composition environments. A general method for component recommendation is presented. Component recommendation is based on structural analysis of compositions. A graph-based component recommender algorithm is proposed and evaluated. Results show advantages in recommender quality compared to a CF recommender. Support for component discovery has been identified as a key challenge in various forms of composite application development. In this paper, we describe a general method for component recommendation based on structural similarity of compositions. The method dynamically ranks and recommends components as a composition is incrementally developed. Recommendations are based on structural comparison of the partial composition being developed with a database of previously completed compositions. Using this method, we define a probabilistic graph edit distance algorithm for component recommendation. We evaluate the accuracy, catalog coverage and response time of the presented algorithm and compare it to a neighborhood-based collaborative filtering approach and two simple statistical algorithms. The evaluation is performed on a Yahoo Pipes dataset and a synthetic dataset that models more complex composite applications. The results show that the proposed algorithm is competitive with the collaborative filtering algorithm in accuracy and outperforms it significantly in coverage. The results on the synthetic dataset suggest that the presented approach can be applied successfully to other composition environments where there is regularity in how components are connected.
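The paper's probabilistic graph edit distance algorithm is considerably more involved, but the core idea of ranking candidate components by the structural similarity of the partial composition to previously completed ones can be sketched roughly. The sketch below is an assumption-laden stand-in: it uses Jaccard similarity over edge sets in place of the paper's graph edit distance, and compositions are modeled simply as sets of directed component connections.

```python
from collections import defaultdict

def recommend(partial_edges, corpus, top_k=3):
    """Rank candidate components for an incrementally developed composition.

    partial_edges: set of (src, dst) component connections built so far.
    corpus: list of completed compositions, each a set of (src, dst) edges.
    Similarity is Jaccard over edge sets (a simplification of the
    paper's probabilistic graph edit distance).
    """
    partial = set(partial_edges)
    partial_nodes = {n for edge in partial for n in edge}
    scores = defaultdict(float)
    for comp in corpus:
        edges = set(comp)
        union = partial | edges
        sim = len(partial & edges) / len(union) if union else 0.0
        # Components not yet in the partial composition inherit the
        # similarity score of every completed composition they appear in.
        for src, dst in edges:
            for node in (src, dst):
                if node not in partial_nodes:
                    scores[node] += sim
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

A composition sharing more edges with the partial one contributes more weight to its remaining components, which is the intuition behind structural recommendation.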
international conference on web services | 2018
Adrian Satja Kurdija; Marin Silic; Goran Delac; Klemo Vladimir; Sinisa Srbljic
Modern service selection in a cloud has to consider multiple requests to various service classes by multiple users. Taking into account quality-of-service requirements such as response time, throughput, and reliability, as well as the processing capacities of the service instances, we devise an efficient algorithm for minimum-cost mapping of mutually independent requests to the corresponding service instances. The solution is based on reduction to transportation problems, for which we compare the optimal and a suboptimal but faster solution, investigating the tradeoff. In comparison to alternative service selection models, the evaluation results confirm the efficiency and scalability of the proposed approaches.
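The optimal-versus-suboptimal tradeoff can be illustrated with the classic least-cost heuristic for the transportation problem. This is a minimal sketch under assumed inputs (not the paper's algorithm): each request class has a demand, each service instance a capacity, and each (class, instance) pair a per-request cost such as response time.

```python
def least_cost_mapping(demand, capacity, cost):
    """Greedy least-cost method for the transportation problem:
    repeatedly route as many requests as possible through the
    cheapest remaining (request class, service instance) cell.
    Fast, but not always optimal, mirroring the suboptimal/optimal
    tradeoff discussed in the paper.

    demand[i]   -- number of pending requests of class i
    capacity[j] -- how many requests instance j can still absorb
    cost[i][j]  -- per-request cost of serving class i on instance j
    """
    demand, capacity = list(demand), list(capacity)
    plan = [[0] * len(capacity) for _ in demand]
    cells = sorted((cost[i][j], i, j)
                   for i in range(len(demand))
                   for j in range(len(capacity)))
    for _, i, j in cells:
        move = min(demand[i], capacity[j])
        if move:
            plan[i][j] = move
            demand[i] -= move
            capacity[j] -= move
    return plan
```

With total capacity at least equal to total demand, every request is mapped; an exact solver (e.g. linear programming) would be used where optimality matters more than speed.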
Knowledge Based Systems | 2018
Adrian Satja Kurdija; Marin Silic; Klemo Vladimir; Goran Delac
Recommender systems based on collaborative filtering (CF) rely on datasets containing users’ taste preferences for various items. Accuracy of various prediction approaches depends on the amount of similarity between users and items in a dataset. As a heuristic estimate of this data quality aspect, which could serve as an indicator of the prediction ability, we define the Global User Correlation Measure (GUCM) and the Global Item Correlation Measure (GICM) of a dataset containing known user–item ratings. The proposed measures range from 0 to 1 and describe the quality of the dataset regarding the user–user and item–item similarities: a higher measure indicates more similar pairs and better prediction ability. The experiments show a correlation between the proposed measures and the accuracy of standard prediction models. The measures can be used to quickly estimate whether a dataset is suitable for collaborative filtering and whether we can expect high prediction accuracy of user-based or item-based CF approaches.
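The exact GUCM/GICM definitions are given in the paper; as a rough illustration only, one way to obtain a [0, 1] global user-correlation score is to average pairwise Pearson correlations between user rating vectors and rescale. The function below is an assumed stand-in for GUCM, shown on a dense ratings matrix for simplicity.

```python
from itertools import combinations
from math import sqrt

def pearson(u, v):
    """Pearson correlation between two equal-length rating vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    du = [x - mu for x in u]
    dv = [x - mv for x in v]
    num = sum(a * b for a, b in zip(du, dv))
    den = sqrt(sum(a * a for a in du)) * sqrt(sum(b * b for b in dv))
    return num / den if den else 0.0

def global_user_correlation(ratings):
    """Illustrative global user-correlation score in [0, 1]: the mean
    pairwise Pearson correlation between user rows, rescaled from
    [-1, 1] to [0, 1]. (An assumed simplification, not the paper's
    GUCM formula.) Higher values suggest user-based CF should work well.
    """
    pairs = list(combinations(ratings, 2))
    mean_r = sum(pearson(u, v) for u, v in pairs) / len(pairs)
    return (mean_r + 1) / 2
```

An item-side analogue would apply the same computation to the transposed matrix.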
Automatika | 2018
Miroslav Popović; Klemo Vladimir; Marin Silic
Mutual exclusion mechanisms, like the semaphore and the monitor, are fundamental tools used by software engineers to solve the race condition problem, ensure barriers, and achieve other workflow patterns. Introductory teaching on how parallel and concurrent processes compete for shared resources takes the underlying working principles of the operating system and computer architecture as a starting point for learning mutual exclusion concepts. The conventional teaching method focuses on lectures and on solving the race condition problem with a counting semaphore in the C programming language. Before applying the conventional teaching method, we advocate introducing a social game scenario for teaching the basic concepts of workers concurrently competing for a shared resource. We also introduce a simplified mutual exclusion assignment in which implementation complexity is reduced by applying a specially designed graphical mechanism for mutual exclusion. Compared to the conventional method, the proposed experimental teaching method has a 15% higher success rate in solving the race condition problem in the C programming language. Even with the additional steps introduced to familiarize students with the concepts of mutual exclusion, the experimental method retains a slight advantage when median time-on-task results are compared.
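The course assignments use counting semaphores in C; the same race-free pattern, a counting semaphore initialized to 1 guarding a shared counter, can be sketched in Python's threading module:

```python
import threading

counter = 0
sem = threading.Semaphore(1)  # counting semaphore used as a mutex

def deposit(times):
    """Increment the shared counter; the semaphore makes the
    read-modify-write a critical section, preventing lost updates."""
    global counter
    for _ in range(times):
        sem.acquire()   # P operation: enter critical section
        counter += 1    # shared-resource update, now race-free
        sem.release()   # V operation: leave critical section

threads = [threading.Thread(target=deposit, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 with the semaphore; without it, updates can be lost
```

Removing the acquire/release pair reintroduces the race condition the assignment asks students to fix.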
international convention on information and communication technology electronics and microelectronics | 2017
Andrea Drmic; Marin Silic; Goran Delac; Klemo Vladimir; Adrian Satja Kurdija
In this paper we evaluate the robustness of perceptual image hashing algorithms. Image hashing algorithms are often used for various objectives, such as image search and retrieval, finding similar images, and finding duplicates and near-duplicates in a large collection of images. In our research, we examine image hashing algorithms for image identification on the Internet. Hence, our goal is to evaluate the most popular perceptual image hashing algorithms with respect to their ability to track and identify images on the Internet and on popular social network sites. Our basic criterion for evaluating hashing algorithms is robustness. We consider a hashing algorithm robust if it can identify the original image after visible modifications are performed, such as resizing, color and contrast changes, text insertion, swirl, etc. We also want a robust hashing algorithm to identify and track images once they are uploaded to popular social network sites such as Instagram, Facebook or Google+. To evaluate the robustness of the perceptual hashing algorithms, we prepared an image database and performed various image modifications. To compare the robustness of the hashing algorithms, we computed Precision, Recall and F1 score for each competing algorithm. The obtained evaluation results strongly confirm that P-hash is the most robust perceptual hashing algorithm.
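A minimal sketch of why perceptual hashes tolerate modifications that break cryptographic hashes: the simple average hash (aHash) thresholds each pixel against the mean intensity, so a uniform brightness change leaves the hash untouched. (P-hash, the paper's winner, applies a DCT before thresholding; the input here is assumed to be an already downscaled 8x8 grayscale image.)

```python
def average_hash(pixels):
    """Average hash (aHash) of an 8x8 grayscale image: one bit per
    pixel, set when the pixel is brighter than the mean intensity.
    Thresholding against the mean makes the hash invariant to
    uniform brightness shifts."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Bit distance between two hashes; small distances mean
    perceptually similar images."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 8x8 gradient image and a uniformly brightened copy.
img = [[i * 8 + j for j in range(8)] for i in range(8)]
brighter = [[p + 50 for p in row] for row in img]
inverted = [[255 - p for p in row] for row in img]
```

Identification then reduces to comparing Hamming distance against a threshold, exactly the kind of matching the robustness evaluation exercises.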
sighum workshop on language technology for cultural heritage social sciences and humanities | 2015
Klemo Vladimir; Marin Silic; Nenad Romic; Goran Delac; Sinisa Srbljic
Due to the proliferation of digital publishing, e-book catalogs are abundant but noisy and unstructured. Tools for the digital librarian rely on ISBNs, metadata embedded into digital files (without an accepted standard), and cryptographic hash functions for the identification of coderivative or near-duplicate content. However, the unreliability of metadata and the sensitivity of hashing to even the smallest changes prevent efficient detection of coderivative or similar digital books. The focus of the study is books with many versions that differ in a certain amount of OCR errors and have a number of sentence-length variations. Identification of similar books is performed using small-sized fingerprints that can be easily shared and compared. We created synthetic datasets to evaluate fingerprinting accuracy while providing standard precision and recall measurements.
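A generic sketch of the fingerprinting idea, under assumptions of my own (a bottom-k sketch over character k-grams, not necessarily the scheme used in the study): because the fingerprint keeps only a fixed number of the smallest k-gram hashes, a few OCR errors perturb only a few entries, while cryptographic hashing of the whole file would change completely.

```python
import hashlib

def fingerprint(text, k=5, size=20):
    """Bottom-k sketch: hash every character k-gram of the
    normalized text and keep the `size` smallest hashes as a
    compact, easily shared fingerprint."""
    norm = " ".join(text.lower().split())
    grams = {norm[i:i + k] for i in range(len(norm) - k + 1)}
    hashes = sorted(int(hashlib.md5(g.encode()).hexdigest(), 16)
                    for g in grams)
    return set(hashes[:size])

def resemblance(fp1, fp2):
    """Jaccard estimate between two fingerprints; near 1 for
    coderivative texts, near 0 for unrelated ones."""
    return len(fp1 & fp2) / len(fp1 | fp2)
```

Two versions of the same book with scattered OCR errors keep most small hashes in common, so their resemblance stays high relative to unrelated texts.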
international convention on information and communication technology, electronics and microelectronics | 2014
Ivan Budiselic; Goran Delac; Klemo Vladimir
The aim of this paper is to show that an accurate and efficient text classifier for relatively simple problem domains can be created in only a few hours of development time. The motivating example discussed in the paper is a recent HackerRank competition problem that tasked competitors with creating a classifier for questions from the popular question and answer platform StackExchange. The paper describes the key components of one solution to this problem and briefly overviews the naive Bayes classifier that is the basis of the solution. The discussion focuses on feature selection and example representation, which were the key challenges to be addressed during the development of this classifier. We also analyze the effect of the number of features on accuracy, on training and classification time, and on the size of the resulting classifier and the representation of the training examples, all of which were important characteristics for the competition. The described classifier achieved slightly over 89% accuracy on the hidden question set, while the winning submission achieved around 92%.
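A multinomial naive Bayes classifier of the kind the paper builds on fits in a few lines. This is a minimal from-scratch sketch with bag-of-words features and add-one (Laplace) smoothing; the toy questions and labels are hypothetical, not the competition data.

```python
from collections import Counter
from math import log

class NaiveBayes:
    """Multinomial naive Bayes over bag-of-words features with
    add-one (Laplace) smoothing."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        # Log class priors from label frequencies.
        self.prior = {c: log(labels.count(c) / len(labels)) for c in self.classes}
        # Per-class word counts.
        self.counts = {c: Counter() for c in self.classes}
        for doc, c in zip(docs, labels):
            self.counts[c].update(doc.lower().split())
        self.vocab = {w for cnt in self.counts.values() for w in cnt}
        return self

    def predict(self, doc):
        def score(c):
            total = sum(self.counts[c].values()) + len(self.vocab)
            return self.prior[c] + sum(
                log((self.counts[c][w] + 1) / total)
                for w in doc.lower().split())
        return max(self.classes, key=score)
```

Feature selection, the paper's key concern, would slot in as a filter on `self.vocab` before scoring.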
conference on computer as a tool | 2013
Klemo Vladimir; Zvonimir Pavlic; Sinisa Srbljic
In this paper, we present Erl-metafeed, a mashup engine for web feeds, and its application as a widget toolkit backend system for real-time remixing of web feeds. The emerging real-time web is based on web feeds: streams of real-time information usually implemented using the RSS/Atom protocols. Since real-time content is torrential, web users who follow a large number of web feeds are usually overwhelmed with content they are not able to handle in a reasonable time. The goal of Erl-metafeed is to provide a platform for scalable management of web feeds in near real-time. The prototype is implemented in the Erlang programming language because of its concurrency support and soft real-time execution. Erl-metafeed is programmable using a custom query language available via a web interface. Additionally, since a textual query language is not suitable for end-users, we designed and implemented a set of web widgets on top of the Geppeto programming environment. The Geppeto environment enables intuitive composition of widgets into powerful mashups, thus circumventing the need to learn the query language.
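The kinds of operations such a feed query language composes (merge, filter, truncate) can be sketched in a few lines. The operation names and the entry shape below are illustrative assumptions, not actual Erl-metafeed query syntax:

```python
from itertools import islice

def merge(*feeds):
    """Merge feed entries (dicts with 'title' and 'updated' keys),
    newest first."""
    return sorted((e for feed in feeds for e in feed),
                  key=lambda e: e["updated"], reverse=True)

def grep(feed, keyword):
    """Keep only entries whose title mentions the keyword."""
    return [e for e in feed if keyword.lower() in e["title"].lower()]

def take(feed, n):
    """Truncate the feed to its n most recent entries."""
    return list(islice(feed, n))
```

A widget pipeline then reads naturally as nested calls, e.g. `take(grep(merge(f1, f2), "erlang"), 10)`, which is the kind of composition the Geppeto widgets expose graphically.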
enterprise distributed object computing | 2007
Daniel Skrobo; Klemo Vladimir; Sinisa Srbljic
Collecting data on user activities is one of the fundamental middleware services in Web-enabled systems. The collected data is analyzed and used by various high-level services, such as user profiling, accounting, security auditing, and system health monitoring. In this paper, we present the architecture and a performance evaluation of usage tracking components for service-oriented middleware systems. The presented middleware components are designed as loosely coupled usage tracking services, which brings two important benefits. First, usage tracking services can be seamlessly integrated with various service-oriented systems without disturbing their operation. Second, since the services are loosely coupled, system users can dynamically deploy and manage multiple usage tracking configurations.