Marin Silic
University of Zagreb
Publications
Featured research published by Marin Silic.
IEEE Transactions on Services Computing | 2014
Marin Silic; Goran Delac; Ivo Krka; Sinisa Srbljic
Modern information systems on the Internet are often implemented as composite services built from multiple atomic services. These atomic services have publicly available interfaces, while their inner structure is unknown. The quality of a composite service depends both on the availability of each atomic service and on their appropriate orchestration. In this paper, we present LUCS, a formal model for predicting the availability of atomic web services that enhances the current state-of-the-art models used in service recommendation systems. LUCS estimates the service availability for an ongoing request by considering its similarity to prior requests along the following dimensions: the user's and service's geographic locations, the service load, and the service's computational requirements. To evaluate our model, we conducted experiments on services deployed in different regions of the Amazon cloud. For each service, we varied the geographic origin of its incoming requests as well as the request frequency. The evaluation results suggest that our model significantly improves availability prediction when all of the LUCS input parameters are available, reducing the prediction error by 71 percent compared to the current state of the art.
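The core idea of similarity-weighted prediction from prior requests can be illustrated with a minimal sketch. The field names, equal weights, and closeness formula below are illustrative assumptions for exposition, not the model defined in the paper:

```python
# Hypothetical sketch of context-similarity weighting in the spirit of LUCS.
# Field names and the similarity formula are assumptions, not the paper's model.
def predict_availability(history, query, weights=None):
    """Weighted mean of past availability observations, where each past
    request contributes according to how closely its context matches the
    ongoing request (user region, service region, load, compute demand)."""
    weights = weights or {"user_region": 1.0, "service_region": 1.0,
                          "load": 1.0, "compute": 1.0}

    def similarity(past):
        s = 0.0
        # categorical dimensions: exact match contributes the full weight
        s += weights["user_region"] * (past["user_region"] == query["user_region"])
        s += weights["service_region"] * (past["service_region"] == query["service_region"])
        # numeric dimensions: closeness mapped into (0, 1]
        s += weights["load"] / (1.0 + abs(past["load"] - query["load"]))
        s += weights["compute"] / (1.0 + abs(past["compute"] - query["compute"]))
        return s

    total = sum(similarity(p) for p in history)
    if total == 0:
        return None
    return sum(similarity(p) * p["available"] for p in history) / total

history = [
    {"user_region": "eu", "service_region": "us", "load": 10, "compute": 1, "available": 1.0},
    {"user_region": "us", "service_region": "us", "load": 80, "compute": 3, "available": 0.6},
]
query = {"user_region": "eu", "service_region": "us", "load": 12, "compute": 1}
p = predict_availability(history, query)
```

The prediction lands close to the availability of the contextually similar record, which is the qualitative behavior the abstract describes.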
Foundations of Software Engineering | 2013
Marin Silic; Goran Delac; Sinisa Srbljic
Contemporary web applications are often designed as composite services built by coordinating atomic services to provide the appropriate functionality. Although the functional properties of each atomic service assure the correct functionality of the entire application, nonfunctional properties such as availability, reliability, or security can significantly influence the user-perceived quality of the application. In this paper, we present CLUS, a model for reliability prediction of atomic web services that improves on the state-of-the-art approaches used in modern recommendation systems. CLUS predicts the reliability of an ongoing service invocation using data collected from previous invocations. We improve the accuracy of the current state-of-the-art prediction models by considering user-, service-, and environment-specific parameters of the invocation context. To address scalability issues related to computational performance, we aggregate the available previous invocation data using the K-means clustering algorithm. We evaluated our model by conducting experiments on services deployed in different regions of the Amazon cloud. The evaluation results suggest that our model improves both the performance and the accuracy of prediction compared to the current state-of-the-art models.
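The aggregation step can be sketched in a few lines: cluster past invocation records by their context features, keep only a mean reliability per cluster, and answer queries from the nearest cluster. The feature vectors, the choice of k, and the nearest-centroid lookup below are illustrative assumptions, not the paper's exact procedure:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Tiny K-means: returns final centers and the cluster index per point."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            assign[i] = min(range(k), key=lambda c: dist2(p, centers[c]))
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return centers, assign

# past invocations: (context feature vector, observed reliability)
records = [([0.10, 0.20], 0.99), ([0.12, 0.25], 0.97),
           ([0.90, 0.80], 0.60), ([0.85, 0.90], 0.55)]
features = [f for f, _ in records]
centers, assign = kmeans(features, k=2)

# aggregation: one mean reliability per cluster replaces the raw records
cluster_rel = {}
for c in range(len(centers)):
    rels = [r for i, (_, r) in enumerate(records) if assign[i] == c]
    cluster_rel[c] = sum(rels) / len(rels) if rels else None

def predict(context):
    """Predict reliability as the mean of the nearest cluster."""
    c = min(range(len(centers)), key=lambda c: dist2(context, centers[c]))
    return cluster_rel[c]
```

Prediction cost now scales with the number of clusters rather than the number of raw invocation records, which is the scalability gain the abstract points to.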
IEEE Transactions on Services Computing | 2015
Marin Silic; Goran Delac; Sinisa Srbljic
When constructing QoS-aware composite workflows based on service-oriented systems, it is necessary to assess the nonfunctional properties of potential service selection candidates. In this paper, we present CLUS, a model for reliability prediction of atomic web services that estimates the reliability of an ongoing service invocation based on data assembled from previous invocations. To improve the accuracy of the current state-of-the-art prediction models, we incorporate user-, service-, and environment-specific parameters of the invocation context. To reduce the scalability issues present in state-of-the-art approaches, we aggregate the past invocation data using the K-means clustering algorithm. To evaluate different quality aspects of our model, we conducted experiments on services deployed in different regions of the Amazon cloud. The evaluation results confirm that our model produces more scalable and accurate predictions compared to the current state-of-the-art approaches.
IEEE Transactions on Dependable and Secure Computing | 2015
Goran Delac; Marin Silic; Sinisa Srbljic
As SOA gains more traction through various implementations, building reliable service compositions remains one of the principal research concerns. Widely researched reliability assurance methods often rely on redundancy or complex optimization strategies, which can make them less applicable when designing service compositions on a larger scale. To address this issue, we propose a design-time reliability improvement method that enables selective service composition improvements by focusing on the most reliability-critical workflow components, called weak points. To detect the most significant weak points, we introduce a method based on a suite of recommendation algorithms that leverage a belief-network reliability model. The method is made scalable by using heuristic algorithms that achieve better computational performance at the cost of recommendation accuracy. Although the less accurate heuristic algorithms on average require more improvement steps, they can achieve better overall performance when the additional step-wise overhead of applying improvements is low. We confirm the soundness of the proposed solution by performing experiments on datasets of randomly generated service compositions.
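The weak-point idea can be illustrated on the simplest possible reliability model. The sketch below assumes a purely serial workflow whose reliability is the product of component reliabilities; the paper's belief-network model handles richer structure, so this is an analogy, not the paper's method:

```python
# Simplified weak-point ranking, assuming a serial workflow where composite
# reliability is the product of component reliabilities (not the paper's
# belief-network model).
from math import prod

def weak_point(reliabilities, boost=0.99):
    """Rank components by how much replacing each one with a more reliable
    variant (reliability `boost`) would raise the composite reliability;
    return the index of the top-ranked component, i.e. the weak point."""
    base = prod(reliabilities)
    gains = []
    for i, r in enumerate(reliabilities):
        improved = base / r * min(boost, 1.0)  # swap component i for the boosted one
        gains.append((improved - base, i))
    return max(gains)[1]

print(weak_point([0.99, 0.80, 0.95]))  # component 1 is the weak point
```

Improving the least reliable component yields the largest marginal gain here, which motivates spending improvement effort selectively rather than on every component.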
International Conference on Web Services | 2018
Adrian Satja Kurdija; Marin Silic; Goran Delac; Klemo Vladimir; Sinisa Srbljic
Modern service selection in a cloud has to consider multiple requests to various service classes by multiple users. Taking into account quality-of-service requirements such as response time, throughput, and reliability, as well as the processing capacities of the service instances, we devise an efficient algorithm for minimum-cost mapping of mutually independent requests to the corresponding service instances. The solution is based on a reduction to transportation problems, for which we compare the optimal solution and a suboptimal but faster one, investigating the tradeoff. In comparison to alternative service selection models, the evaluation results confirm the efficiency and scalability of the proposed approaches.
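The transportation-problem framing can be sketched with a simple suboptimal heuristic: treat service instances as suppliers with capacities, request classes as demands, and greedily fill the cheapest remaining cells. This greedy rule is an illustrative stand-in for the faster suboptimal solver the abstract compares against the optimal one; the cost matrix and quantities are made up:

```python
def greedy_transport(costs, supply, demand):
    """Suboptimal greedy solution to a transportation problem: repeatedly
    ship as much as possible along the cheapest remaining (instance,
    request-class) cell. Returns (allocation matrix, total cost)."""
    supply, demand = list(supply), list(demand)
    alloc = [[0] * len(demand) for _ in supply]
    # visit cells in order of increasing unit cost
    cells = sorted((c, i, j) for i, row in enumerate(costs)
                   for j, c in enumerate(row))
    total = 0
    for c, i, j in cells:
        q = min(supply[i], demand[j])   # ship as much as both sides allow
        if q:
            alloc[i][j] = q
            supply[i] -= q
            demand[j] -= q
            total += c * q
    return alloc, total

# two service instances (rows) serving two request classes (columns);
# costs[i][j] is the unit cost of serving class j from instance i
costs = [[4, 2], [3, 5]]
alloc, cost = greedy_transport(costs, supply=[10, 15], demand=[12, 8])
```

The greedy pass runs in near-sorting time, whereas an exact transportation solver guarantees minimum cost at higher computational expense; that is the tradeoff the abstract investigates.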
Knowledge-Based Systems | 2018
Adrian Satja Kurdija; Marin Silic; Klemo Vladimir; Goran Delac
Recommender systems based on collaborative filtering (CF) rely on datasets containing users' taste preferences for various items. The accuracy of various prediction approaches depends on the amount of similarity between users and items in a dataset. As a heuristic estimate of this data-quality aspect, which can serve as an indicator of prediction ability, we define the Global User Correlation Measure (GUCM) and the Global Item Correlation Measure (GICM) of a dataset containing known user–item ratings. The proposed measures range from 0 to 1 and describe the quality of the dataset with regard to user–user and item–item similarities: a higher measure indicates more similar pairs and better prediction ability. The experiments show a correlation between the proposed measures and the accuracy of standard prediction models. The measures can be used to quickly estimate whether a dataset is suitable for collaborative filtering and whether we can expect high prediction accuracy from user-based or item-based CF approaches.
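A GUCM-style measure can be sketched as an aggregate of pairwise user correlations over co-rated items. The specific formula below (mean absolute Pearson correlation over user pairs) is an assumption for illustration; the paper's exact definition may differ:

```python
# Hedged sketch of a GUCM-style dataset measure: mean absolute Pearson
# correlation over all user pairs, computed on their co-rated items.
# The exact formula in the paper may differ.
from itertools import combinations
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation of two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def global_user_correlation(ratings):
    """Aggregate user-user similarity of a ratings dataset into [0, 1]."""
    scores = []
    for u, v in combinations(ratings, 2):
        common = sorted(set(ratings[u]) & set(ratings[v]))
        if len(common) >= 2:  # need at least two co-rated items
            xs = [ratings[u][i] for i in common]
            ys = [ratings[v][i] for i in common]
            scores.append(abs(pearson(xs, ys)))
    return mean(scores) if scores else 0.0

ratings = {"u1": {"a": 5, "b": 3, "c": 4},
           "u2": {"a": 4, "b": 2, "c": 5},
           "u3": {"a": 1, "b": 5, "c": 2}}
g = global_user_correlation(ratings)
```

A value near 1 indicates many strongly correlated user pairs, which is what makes neighborhood-based CF predictions work; near 0 suggests the dataset is a poor fit for user-based CF.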
International Journal of Web and Grid Services | 2018
Adrian Satja Kurdija; Marin Silic; Sinisa Srbljic
We introduce a novel QoS prediction model as real-time support for the selection of atomic service candidates based on their QoS properties while constructing composite applications. The proposed approach satisfies the following requirements: (i) fast and accurate prediction of QoS values and (ii) adaptability with respect to environment changes. The model precomputes the similarities between users and services using approximate matrix multiplication to reduce the time complexity. When calculating a prediction for a user-service pair, the model considers similar users and services, and enhances the prediction accuracy by incorporating the number of observed records. Time complexity is further reduced by storing lists of similar users and services, which are updated in real time. The model adapts to the changing environment: newer records are given greater influence on the predictions. The experiments conducted on relevant service-oriented datasets show the advantages of the proposed model in accuracy and time performance.
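The adaptability requirement can be illustrated with a small sketch: predict a user-service QoS value from records of precomputed similar users, down-weighting older observations exponentially. The similarity lists, the decay rate, and the record layout are illustrative assumptions; the paper's approximate-matrix-multiplication precomputation is not shown:

```python
# Hedged sketch of recency-weighted neighbor prediction. `similar_users`
# stands in for the precomputed similarity lists the model maintains;
# the decay formula is an assumption, not the paper's.
from math import exp

def predict_qos(records, user, service, similar_users, now, decay=0.1):
    """Predict a QoS value for (user, service) as a similarity- and
    recency-weighted mean over similar users' observations of the service."""
    num = den = 0.0
    for neighbor, sim in similar_users.get(user, []):
        for (u, s, value, t) in records:
            if u == neighbor and s == service:
                w = sim * exp(-decay * (now - t))  # newer records weigh more
                num += w * value
                den += w
    return num / den if den else None

# (user, service, observed response time in ms, timestamp)
records = [("u2", "s1", 120.0, 1), ("u2", "s1", 200.0, 9),
           ("u3", "s1", 150.0, 5)]
similar_users = {"u1": [("u2", 0.9), ("u3", 0.4)]}
p = predict_qos(records, "u1", "s1", similar_users, now=10)
```

The most recent observation from the most similar neighbor dominates the prediction, so the estimate tracks a changing environment rather than a stale average.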
Automatika | 2018
Miroslav Popović; Klemo Vladimir; Marin Silic
Mutual exclusion mechanisms, such as semaphores and monitors, are fundamental tools that software engineers use to solve the race condition problem, implement barriers, and achieve other workflow patterns. Introductory teaching on how parallel and concurrent processes compete over shared resources takes the underlying working principles of the operating system and computer architecture as the starting point for learning mutual exclusion concepts. The conventional teaching method focuses on lectures and on solving the race condition problem with a counting semaphore in the C programming language. Before applying the conventional teaching method, we advocate introducing a social game scenario to teach the basic concepts of workers concurrently competing over a shared resource. We also introduce a simplified mutual exclusion assignment in which the implementation complexity is reduced by applying a specially designed graphical mechanism for mutual exclusion. Compared to the conventional method, the proposed experimental teaching method has a 15% higher success rate in solving the race condition problem in the C programming language. Despite the additional steps introduced to familiarize students with the concepts of mutual exclusion, the experimental method is slightly advantageous when median time-on-task results are compared.
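The race condition at the heart of the assignment is the classic lost-update on a shared counter. A minimal sketch in Python (the course itself uses C): two workers increment a shared counter, and a lock serializes the read-modify-write sequence so no increment is lost:

```python
import threading

# Two workers increment a shared counter. The increment is a
# read-modify-write sequence; without mutual exclusion the threads can
# interleave mid-update and lose increments, so a lock guards the
# critical section.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # critical section: one thread at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 with the lock; without it, often less
```

Removing the `with lock:` line reproduces the race the students are asked to diagnose: the final count becomes nondeterministic and typically falls short of 200000.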
International Convention on Information and Communication Technology, Electronics and Microelectronics | 2017
Andrea Drmic; Marin Silic; Goran Delac; Klemo Vladimir; Adrian Satja Kurdija
In this paper, we evaluate the robustness of perceptual image hashing algorithms. Image hashing algorithms are used for various purposes, such as image search and retrieval, finding similar images, and finding duplicates and near-duplicates in large collections of images. In our research, we examine image hashing algorithms for image identification on the Internet. Our goal is to evaluate the most popular perceptual image hashing algorithms with respect to their ability to track and identify images on the Internet and on popular social network sites. Our basic criterion for evaluating hashing algorithms is robustness. We consider a hashing algorithm robust if it can identify the original image after visible modifications have been applied, such as resizing, color and contrast changes, text insertion, or swirl. We also want a robust hashing algorithm to identify and track images once they are uploaded to popular social network sites such as Instagram, Facebook, or Google+. To evaluate the robustness of the perceptual hashing algorithms, we prepared an image database and performed various image modifications. To compare the robustness of the hashing algorithms, we computed Precision, Recall, and the F1 score for each competing algorithm. The obtained evaluation results strongly confirm that P-hash is the most robust perceptual hashing algorithm.
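The two ingredients of such an evaluation can be sketched compactly: a perceptual hash that survives mild modifications, and the F1 score used to compare algorithms. The sketch below uses a tiny average-hash (aHash) on 2x2 "images" given as pixel grids; the real evaluation operates on full images and the paper's algorithms (e.g. P-hash) differ in detail:

```python
# Minimal average-hash (aHash) sketch: threshold each pixel against the
# image mean to obtain a bit string. A uniform brightness shift moves the
# mean along with the pixels, so the bits (and the hash) are unchanged --
# the robustness property a cryptographic hash lacks.
def average_hash(img):
    flat = [p for row in img for p in row]
    avg = sum(flat) / len(flat)
    return tuple(int(p >= avg) for p in flat)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def f1_score(tp, fp, fn):
    """F1 score from true positives, false positives, false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

original   = [[10, 200], [220, 30]]
brightened = [[40, 230], [250, 60]]   # same structure, shifted intensities
different  = [[200, 10], [30, 220]]

assert hamming(average_hash(original), average_hash(brightened)) == 0
assert hamming(average_hash(original), average_hash(different)) > 0
```

Matching within a Hamming-distance threshold yields the true/false positive counts from which Precision, Recall, and F1 are computed per algorithm, as in the paper's comparison.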
SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities | 2015
Klemo Vladimir; Marin Silic; Nenad Romic; Goran Delac; Sinisa Srbljic
Due to the proliferation of digital publishing, e-book catalogs are abundant but noisy and unstructured. Tools for the digital librarian rely on ISBNs, metadata embedded in digital files (with no accepted standard), and cryptographic hash functions for the identification of coderivative or near-duplicate content. However, the unreliability of metadata and the sensitivity of hashing to even the smallest changes prevent efficient detection of coderivative or similar digital books. The focus of the study is books with many versions that differ in a certain amount of OCR errors and in a number of sentence-length variations. Identification of similar books is performed using small fingerprints that can be easily shared and compared. We created synthetic datasets to evaluate fingerprinting accuracy, reporting standard precision and recall measurements.
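The contrast with cryptographic hashing can be shown with a toy shingle-based fingerprint: the set of word k-grams degrades gracefully under a single OCR error instead of changing completely. The shingle size and Jaccard comparison below are illustrative assumptions, not the paper's fingerprinting scheme:

```python
# Sketch of a small shingle-based fingerprint: the set of word k-grams
# tolerates local OCR errors, whereas a cryptographic hash changes
# completely on a single-character edit. Shingle size and the Jaccard
# comparison are illustrative choices.
def fingerprint(text, k=3):
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a, b):
    """Jaccard similarity between two fingerprints, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

v1 = "it was the best of times it was the worst of times"
v2 = "it was the best of tirnes it was the worst of times"  # one OCR error
s = similarity(fingerprint(v1), fingerprint(v2))
```

Despite the OCR error, the two versions still share half their shingles, so a similarity threshold can flag them as coderivative; an MD5-style digest of the two texts would simply differ.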