
Publication


Featured research published by Matjaz B. Juric.


Sensors | 2015

Inertial Sensor-Based Gait Recognition: A Review

Sebastijan Sprager; Matjaz B. Juric

With the recent development of microelectromechanical systems (MEMS), inertial sensors have become widely used in wearable gait analysis research due to several factors, such as being easy to use and low-cost. Since each individual has a unique way of walking, inertial sensors can be applied to the problem of gait recognition, where the assessed gait is interpreted as a biometric trait. Inertial sensor-based gait recognition thus has great potential to play an important role in many security-related applications. Because inertial sensors are included in smart devices that are nowadays present at every step, inertial sensor-based gait recognition has become a very attractive and emerging field of research that has recently provided many interesting discoveries. This paper provides a thorough and systematic review of the current state of the art in this field. The review reveals that the latest advanced inertial sensor-based gait recognition approaches can sufficiently recognise users from inertial data obtained during gait by a single commercially available smart device under controlled circumstances, including fixed placement and small variations in gait. Furthermore, these approaches have also shown considerable breakthroughs in realistic use under uncontrolled circumstances, demonstrating great potential for their further development and wide applicability.


Future Generation Computer Systems | 2013

Towards a unified taxonomy and architecture of cloud frameworks

Robert Dukaric; Matjaz B. Juric

Infrastructure as a Service (IaaS) is one of the most important layers of Cloud Computing. However, there is an evident deficiency of mechanisms for the analysis, comparison and evaluation of IaaS cloud implementations, since no unified taxonomy or reference architecture is available. In this article, we propose a unified taxonomy and an IaaS architectural framework. The taxonomy is structured around seven layers: core service layer, support layer, value-added services, control layer, management layer, security layer and resource abstraction. We survey various IaaS systems and map them onto our taxonomy to evaluate the classification. We then introduce an IaaS architectural framework that relies on the unified taxonomy. We provide a detailed description of each layer and define the dependencies between layers and components. Finally, we evaluate the proposed IaaS architectural framework on several real-world projects, while performing a comprehensive analysis of the most important commercial and open-source IaaS products. The evaluation results show a notable distinction in feature support and capabilities between commercial and open-source IaaS platforms, a significant deficiency of important architectural components in terms of fulfilling the true promise of infrastructure clouds, and the real-world usability of the proposed taxonomy and architectural framework.


Journal of Visual Languages and Computing | 2012

Modeling functional requirements for configurable content- and context-aware dynamic service selection in business process models

Ales Frece; Matjaz B. Juric

In this article, we propose a meta-model for the formal specification of functional requirements for configurable content- and context-aware dynamic service selection in business process models, with the objective of enabling greater flexibility of the modeled processes. Dynamic service selection can cope with the highly dynamic business environments that today's business processes must handle. Modeling functional requirements for dynamic service selection in business process models is not well covered in the literature. Some partial solutions exist, but none of them allows modeling a complete set of functional requirements for the selection similar to the one addressed in this article. Our meta-model enables formal specification of selection-relevant data extracted from the service request message, custom configuration data (e.g., thresholds), process and task definition/instance metadata, and service selection rules. The meta-model is configurable and content- and context-aware. Processes leveraging our meta-model can adapt to changing requirements without redesign of the process flow. The proposed meta-model allows users to further configure the models at run time (e.g., raising a threshold). Modeling can be divided into roles with different required competences. We implement our meta-model in BPMN 2.0 (Business Process Model and Notation) through specific extensions to the BPMN semantic and diagram elements. By measuring the complexity of real-world sample process models, we show that with our solution modelers can efficiently model business processes that need to address frequently changing demands. Compared to available alternatives, models using our solution have on average ~13% fewer activities, ~16% fewer control-flow elements and ~22% fewer control paths. By reading ~10% smaller models (by volume), model readers get more flexible process models that capture all functional requirements for the dynamic selection.
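The run-time configurability described above can be illustrated with a short sketch (the rule, endpoint names and threshold below are hypothetical, not the paper's BPMN 2.0 extension): selection rules read the request content, the context, and configuration values that can be changed while the process is running.

```python
def select_service(candidates, message, context, config):
    """Return the endpoint of the first candidate whose selection rule
    accepts the request; fall back to a configured default endpoint."""
    for rule, endpoint in candidates:
        if rule(message, context, config):
            return endpoint
    return config["default_endpoint"]

# Hypothetical rule: route large orders to a premium service,
# with the threshold adjustable at run time.
candidates = [
    (lambda msg, ctx, cfg: msg["amount"] >= cfg["threshold"], "premium-svc"),
]
config = {"threshold": 1000, "default_endpoint": "standard-svc"}

print(select_service(candidates, {"amount": 1500}, {}, config))  # premium-svc
config["threshold"] = 2000  # reconfigured without redesigning the process flow
print(select_service(candidates, {"amount": 1500}, {}, config))  # standard-svc
```

Raising the threshold changes routing without touching the process model, which is the flexibility the meta-model aims for.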


IEEE Transactions on Services Computing | 2014

Towards Complex Event Aware Services as Part of SOA

Martin Potocnik; Matjaz B. Juric

Complex Event Processing (CEP) has so far been implemented in a technology- and vendor-specific manner. Introducing CEP concepts to the Service Oriented Architecture (SOA) provides an opportunity to enhance the capabilities of SOA. We define a model that supports CEP usage in SOA where the actual pattern recognition can be done by any external CEP Engine. We define a new service type: a Complex Event Aware (CEA) service that automatically reacts to complex events specified in its interface. The proposed model includes a CEP Manager that provides centralized management of complex events and, through its pluggable adapters, communicates with CEA Services and CEP Engines. It includes a CEP Registry and a CEP Repository enabling versioning and reuse of complex event types, and a CEP Dispatcher providing a publish/subscribe communication framework. We design a generic XML schema for abstract complex event type definition and propose new extensions to the Service Component Architecture (SCA) and Web Services Description Language (WSDL) specifications, which enable definitions of complex event types and complex event sinks in the CEA Service interface. As a proof of concept, we develop a prototype implementation for the largest national telecommunication provider and, in a real-world scenario, show the advantages of the proposed model.
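As a rough illustration of the publish/subscribe dispatching the model centralizes, here is a toy dispatcher that raises a derived complex event when a simple sequence pattern occurs (the event names and the hard-coded pattern are illustrative; a real CEP Engine compiles pattern definitions rather than hard-coding them):

```python
from collections import defaultdict

class CepDispatcher:
    """Tiny publish/subscribe bus with one hard-coded sequence pattern."""
    def __init__(self):
        self.subs = defaultdict(list)  # event type -> list of handlers
        self.seen = []                 # history of published event types

    def subscribe(self, etype, handler):
        self.subs[etype].append(handler)

    def publish(self, etype, payload=None):
        self.seen.append(etype)
        for handler in self.subs[etype]:
            handler(payload)
        # Derived complex event: three consecutive failed logins.
        if etype == "login_failed" and self.seen[-3:] == ["login_failed"] * 3:
            for handler in self.subs["intrusion_suspected"]:
                handler(payload)

bus = CepDispatcher()
bus.subscribe("intrusion_suspected", lambda p: print("alert:", p))
for _ in range(3):
    bus.publish("login_failed", {"user": "bob"})
```

A CEA service would play the role of the `intrusion_suspected` subscriber, declaring that complex event in its interface.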


Journal of Network and Computer Applications | 2015

AME-WPC: Advanced model for efficient workload prediction in the cloud

Katja Cetinski; Matjaz B. Juric

Workload estimation and prediction has become a very relevant research area in the field of cloud computing. The reason lies in its many benefits, which include QoS (Quality of Service) satisfaction, automatic resource scaling, and job/task scheduling. It is very difficult to accurately predict the workload of cloud applications when it varies drastically. To address this issue, existing solutions use either statistical methods, which effectively detect repeating patterns but provide poor accuracy for long-term predictions, or learning methods, which develop a complex prediction model but are mostly unable to detect unusual patterns. Some solutions use a combination of both methods. However, none of them address the issue of gathering system-specific information in order to improve prediction accuracy. We propose an Advanced Model for Efficient Workload Prediction in the Cloud (AME-WPC), which combines statistical and learning methods, improves the accuracy of workload prediction for cloud computing applications and can be dynamically adapted to a particular system. The learning methods use an extended training dataset, which we define through an analysis of the system factors that have a strong influence on the application workload. We address the workload prediction problem with classification as well as regression and test our solution with the machine-learning method Random Forest on both the basic and the extended training data. To evaluate our proposed model, we compare empirical tests with the machine-learning method kNN (k-Nearest Neighbors). Experimental results demonstrate that combining statistical and learning methods makes sense and can significantly improve the prediction accuracy of workload over time.
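The idea of enriching the training set with system-specific factors can be sketched with a toy nearest-neighbour predictor (stdlib only; the paper's experiments use Random Forest and kNN on real workload traces, and the feature names below are made up):

```python
import math

def knn_predict(train, query, k=3):
    """Predict workload as the mean target of the k nearest training points."""
    nearest = sorted(train, key=lambda xy: math.dist(xy[0], query))
    return sum(y for _, y in nearest[:k]) / k

# Hypothetical history: (hour_of_day, active_users) -> requests/s.
# The extra 'active_users' feature stands in for the system-specific
# factors that the extended training dataset adds.
history = [((h, u), 50 + 10 * u + 5 * math.sin(h))
           for h in range(24) for u in (1, 2, 3)]

basic = [((x[0],), y) for x, y in history]  # hour only
extended = history                          # hour + active users

print(knn_predict(extended, (12, 2)))
```

With the extended feature vector, queries at the same hour but different load levels get distinct predictions, which the basic hour-only features cannot separate.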


IEEE Transactions on Information Forensics and Security | 2015

An Efficient HOS-Based Gait Authentication of Accelerometer Data

Sebastijan Sprager; Matjaz B. Juric

We propose a novel efficient and reliable gait authentication approach. It is based on the analysis of accelerometer signals using higher order statistics. Gait patterns are obtained by transforming acceleration data into a feature space represented with higher order cumulants. The proposed approach is able to operate on multichannel and multisensor data by combining feature-level and sensor-level fusion. Evaluation of the proposed approach was performed using the largest currently available data set, OU-ISIR, containing inertial data of 744 subjects. Authentication was performed by cross-comparison of gallery and probe gait patterns transformed into the feature space. In addition, the proposed approach was evaluated using a data set collected by McGill University, containing long-sequence acceleration signals of 20 subjects acquired by smartphone during casual walking. The results show an average equal error rate of 6% to 12%, depending on the selected experimental parameters and setup. Compared with the latest state of the art, the evaluated performance reveals the proposed approach to be one of the most efficient and reliable accelerometer-based gait authentication approaches currently available.
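A minimal sketch of the feature-extraction step, assuming a single acceleration channel (the paper fuses multiple channels and sensors, and its cumulant features are richer than this): higher-order cumulants expressed through central moments, compared with a plain Euclidean distance.

```python
def cumulants(signal):
    """Second-, third- and fourth-order cumulants of a 1-D signal,
    computed from central moments: c2 = m2, c3 = m3, c4 = m4 - 3*m2**2."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    m = {k: sum(c ** k for c in centered) / n for k in (2, 3, 4)}
    return (m[2], m[3], m[4] - 3 * m[2] ** 2)

def feature_distance(a, b):
    """Euclidean distance between two cumulant feature vectors,
    as used for gallery/probe cross-comparison."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# A symmetric signal has a vanishing third-order cumulant.
print(cumulants([1, 2, 3, 4, 5]))
```

Authentication then amounts to thresholding `feature_distance` between a stored gallery pattern and a fresh probe pattern.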


Information & Software Technology | 2013

Context aware exception handling in business process execution language

Jurij Laznik; Matjaz B. Juric

Context: Fault handling represents a very important aspect of business process functioning. However, fault handling has thus far been solved statically, requiring fault handlers and handling logic to be defined at design time, which requires a great deal of effort, is error-prone, and is relatively difficult to maintain and extend. It is sometimes even impossible to define all fault handlers at design time. Objective: To address this issue, we describe a novel context-aware architecture for fault handling in executable business processes, which enables dynamic fault handling during business process execution. Method: We analyzed the disadvantages of existing fault handling in WS-BPEL. We designed an artifact that complements statically defined fault handling so that faults can be handled dynamically at business process run time. We evaluated the artifact by analyzing system performance and by comparison against a set of well-known workflow exception handling patterns. Results: The designed artifact comprises an Observer component, an Exception Handler Bus, an Exception Knowledge Base and a Solution Repository. A system performance analysis shows significantly decreased repair time with the use of context-aware activities. We show that the designed artifact extends the range of supported workflow exception handling patterns. Conclusion: The artifact presented in this research considerably improves on static fault handling, as it enables dynamic resolution of semantically similar faults with continuous enhancement of fault handling at run time. It also results in broader support of workflow exception handling patterns.
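The dynamic part can be sketched as a rule store to which fault-handling rules are added at run time, in the Event-Condition-Action style (class, fault and action names below are illustrative, not the paper's WS-BPEL artifact):

```python
class ExceptionKnowledgeBase:
    """Runtime-extensible mapping from fault conditions to repair actions."""
    def __init__(self):
        self.rules = []  # list of (condition, action) pairs

    def register(self, condition, action):
        """Add a new handling rule; may happen while processes are running."""
        self.rules.append((condition, action))

    def resolve(self, fault):
        """Return the action of the first rule whose condition matches."""
        for condition, action in self.rules:
            if condition(fault):
                return action(fault)
        raise LookupError(f"no handler for {fault!r}")

ekb = ExceptionKnowledgeBase()
ekb.register(lambda f: "timeout" in f, lambda f: "retry")
ekb.register(lambda f: "auth" in f, lambda f: "refresh-credentials")

print(ekb.resolve("service timeout"))  # retry
```

Because rules are matched by condition rather than by an exhaustive design-time list, a newly registered rule immediately covers faults that had no handler before.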


Sensors | 2016

A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method

Jure Tuta; Matjaz B. Juric

This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments: some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, and thus maintenance-free, and is based on Wi-Fi only. We employ two well-known propagation models, the free-space path loss and ITU models, which we extend with additional parameters for better propagation simulation. Our self-calibration procedure uses one propagation model to infer the parameters of the space and the other to simulate signal propagation, without requiring any additional hardware besides Wi-Fi access points, which makes it suitable for real-world usage. Our method is also one of the few model-based, Wi-Fi-only, self-adaptive approaches that do not require the mobile terminal to be in access-point mode. The only inputs the method requires are the Wi-Fi access point positions and the positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with a measured mean error of 2–3 m and 3–4 m, respectively, which is similar to existing methods. The evaluation has shown that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method, which relies on simple hardware and software requirements.
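The basic free-space path-loss model the method builds on can be written down directly (this sketch omits the authors' additional parameters and the ITU model entirely; the frequency and transmit power values are illustrative):

```python
import math

def fspl_db(distance_m, freq_hz=2.4e9):
    """Free-space path loss in dB:
    20*log10(d) + 20*log10(f) + 20*log10(4*pi/c) with d in m, f in Hz."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def distance_from_rssi(rssi_dbm, tx_power_dbm=20.0, freq_hz=2.4e9):
    """Invert the model: estimate distance from a received signal strength."""
    loss = tx_power_dbm - rssi_dbm
    return 10 ** ((loss - 20 * math.log10(freq_hz) + 147.55) / 20)

# Round trip: 5 m away from a 20 dBm access point at 2.4 GHz.
rssi = 20.0 - fspl_db(5.0)
print(round(distance_from_rssi(rssi), 3))  # 5.0
```

Distances estimated this way from several access points can then be combined (e.g., by trilateration) into a position; the paper's contribution lies in calibrating such models automatically from the environment.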


Computer Languages, Systems & Structures | 2012

Data-bound variables for WS-BPEL executable processes

Marcel Krizevnik; Matjaz B. Juric

Standard BPEL (Business Process Execution Language) variables, if used to store data from a data store, cannot be automatically synchronized with the data source when other applications change the data during BPEL process execution, which is a common occurrence, particularly for long-running BPEL processes. BPEL also does not provide a mechanism for active monitoring of data changes that would support automated detection and handling of such changes. This paper proposes a new type of BPEL variable, called data-bound variables. Data-bound variables are automatically synchronized with the data source and thus eliminate the need to implement data synchronization manually. To provide support for data-bound variables, we propose specific extensions to BPEL and the use of appropriate Data Access Services (DAS) that act as data providers. We introduce new BPEL activities to load, create and delete remote data. We also introduce observed properties, observed property groups and a variable handler. Using this mechanism, the BPEL process is able to automatically adapt to changes to data, made inside or outside the process scope, by following the Event, Condition, Action (ECA) paradigm. As a proof of concept, we have developed a prototype implementation of our proposed BPEL extensions and tested it by implementing three pilot projects. We have confirmed that our proposed solution decreases BPEL process size and complexity, increases readability and reduces the semantic gap between the BPMN process model and BPEL.


Electronic Commerce Research | 2015

TACO: a novel method for trust rating subjectivity elimination based on Trust Attitudes COmparison

Eva Zupancic; Matjaz B. Juric

Trust ratings shared by users in electronic commerce environments are subjective as trust evaluation depends on evaluators’ personal disposition to trust. As such, aggregation of shared trust ratings to compute a user’s reputation may be questionable without proper consideration of rating subjectivity. Although the problem of subjectivity in trust opinions has already been recognized, it has not been adequately resolved so far. In this paper, we address the problem of proper trust rating analysis and aggregation, which includes elimination of subjectivity. We propose a novel method based on Trust Attitudes COmparison (TACO method), which derives adjusted reputations compliant with the behavioral patterns of the evaluators and eliminates the subjectivity from the trust ratings. With the TACO method, all participants have comparable opportunities to choose trustworthy transaction partners, regardless of their trust dispositions. The TACO method finds the users with similar trust attitudes, taking advantage of nonparametric statistical methods. After that, it computes the personalized reputation scores of other users with the aggregation of trust values shared by users with similar trust attitudes. The method derives the characteristics of participants’ trust dispositions implicitly from their past ratings and does not request them to disclose any part of their trust evaluation process, such as motivating criteria for trust assessments, underlying beliefs, or criteria preferences. We have evaluated the performance of our method with extensive simulations with varying numbers of users, different numbers of available trust ratings, and with different distributions of users’ personalities. The results showed significant improvements using our TACO method with an average improvement of 50.0% over the Abdul-Rahman and 72.9% over the Hasan method.
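The core idea, finding raters with similar trust attitudes via a nonparametric statistic and aggregating only their ratings, can be sketched as follows (Spearman rank correlation stands in as the nonparametric measure here; the TACO method's specific statistics and thresholds are not reproduced, and all user data is made up):

```python
def ranks(values):
    """0-based ranks of values (assumes no ties, for simplicity)."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(a, b):
    """Spearman rank correlation of two equal-length rating vectors."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

def personalized_reputation(me, others, target, threshold=0.5):
    """Average the target's ratings from users whose rating behaviour
    correlates with mine above the threshold."""
    similar = [u for u in others
               if spearman(me["history"], u["history"]) >= threshold]
    scores = [u["ratings"][target] for u in similar if target in u["ratings"]]
    return sum(scores) / len(scores) if scores else None

# Hypothetical raters who rated the same four past partners.
me      = {"history": [5, 3, 4, 2], "ratings": {}}
lenient = {"history": [4, 3, 5, 2], "ratings": {"shop-x": 4}}
strict  = {"history": [2, 5, 1, 4], "ratings": {"shop-x": 1}}

print(personalized_reputation(me, [lenient, strict], "shop-x"))  # 4.0
```

Because ranks discard the absolute level of the ratings, a lenient and a strict rater with the same ordering of partners count as having the same trust attitude, which is what makes the comparison disposition-free.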

Collaboration


Top co-authors in Matjaz B. Juric's collaboration network.

Top Co-Authors

Eva Zupancic, University of Ljubljana
Gregor Srdic, University of Ljubljana
Roman Trobec, University of Ljubljana