Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Dragi Kimovski is active.

Publications


Featured research published by Dragi Kimovski.


Nature Communications | 2014

ERG induces taxane resistance in castration-resistant prostate cancer

Giuseppe Galletti; Alexandre Matov; Himisha Beltran; Jacqueline Fontugne; Juan Miguel Mosquera; Cynthia Cheung; Theresa Y. MacDonald; Matthew Sung; Sandra A. O’Toole; James G. Kench; Sung Suk Chae; Dragi Kimovski; Scott T. Tagawa; David M. Nanus; Mark A. Rubin; Lisa G. Horvath; Paraskevi Giannakakou; David S. Rickman

Taxanes are the only chemotherapies used to treat patients with metastatic castration-resistant prostate cancer (CRPC). Despite the initial efficacy of taxanes in treating CRPC, all patients ultimately fail due to the development of drug resistance. In this study, we show that ERG overexpression in in vitro and in vivo models of CRPC is associated with decreased sensitivity to taxanes. ERG affects several parameters of microtubule dynamics and inhibits effective drug-target engagement of docetaxel or cabazitaxel with tubulin. Finally, analysis of a cohort of 34 men with metastatic CRPC treated with docetaxel chemotherapy reveals that ERG-overexpressing prostate cancers have twice the chance of docetaxel resistance than ERG-negative cancers. Our data suggest that ERG plays a role beyond regulating gene expression and functions outside the nucleus to cooperate with tubulin towards taxane insensitivity. Determining ERG rearrangement status may aid in patient selection for docetaxel or cabazitaxel therapy and/or influence co-targeting approaches.


Concurrency and Computation: Practice and Experience | 2015

Leveraging cooperation for parallel multi-objective feature selection in high-dimensional EEG data

Dragi Kimovski; Julio Ortega; Andrés Ortiz; Raul Baños

Bioinformatics applications frequently involve high-dimensional model building or classification problems that require reducing dimensionality to improve learning accuracy while irrelevant inputs are removed. Thus, feature selection has become an important issue in these applications. Moreover, several approaches that treat supervised and unsupervised feature selection as a multi-objective optimization problem have recently been proposed to cope with issues in the performance evaluation of classifiers and models. As parallel processing constitutes an important tool for reaching efficient approaches that make it possible to tackle complex problems within reasonable computing times, in this paper, alternatives for the cooperation of subpopulations in multi-objective evolutionary algorithms are identified and classified, and several procedures are implemented and evaluated on synthetic and Brain-Computer Interface datasets. The results show different improvements in solution quality and speedup, depending on the cooperation alternative and the dataset. We show alternatives that provide even superlinear speedups with only small reductions in solution quality, as well as another cooperation alternative that improves the quality of the solutions with speedups similar to, or only slightly higher than, the speedup obtained by parallel fitness evaluation in a master-worker implementation (the reference alternative, which behaves as the corresponding sequential multi-objective approach).
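The cooperation scheme described in this abstract can be illustrated with a minimal island-style sketch. Everything below is illustrative, not the authors' implementation: the toy bi-objective (number of selected features, a made-up error proxy), the mutation rate, and the migration policy are all assumptions. Two subpopulations evolve feature masks independently and periodically exchange a good individual.

```python
import random

random.seed(0)
N_FEATURES = 20

def objectives(mask):
    # Toy bi-objective: (number of selected features, proxy classification error).
    n_sel = sum(mask)
    # Hypothetical error proxy: pretend only the first 5 features are informative.
    informative = sum(mask[:5])
    error = 1.0 - informative / 5.0
    return (n_sel, error)

def dominates(a, b):
    # Pareto dominance for minimisation: no worse everywhere, better somewhere.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evolve(pop, generations=30):
    # Very small steady-state loop: mutate a random parent and replace
    # any member the child dominates.
    for _ in range(generations):
        child = [bit ^ (random.random() < 0.1) for bit in random.choice(pop)]
        for i, ind in enumerate(pop):
            if dominates(objectives(child), objectives(ind)):
                pop[i] = child
                break
    return pop

# Two cooperating subpopulations with periodic migration of one individual.
islands = [[[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(10)]
           for _ in range(2)]
for epoch in range(5):
    islands = [evolve(pop) for pop in islands]
    # Migration step: a good individual from island 0 moves to island 1.
    best0 = min(islands[0], key=objectives)
    islands[1][0] = list(best0)
```

The migration frequency and direction are the tunable "cooperation alternatives" the paper compares; this sketch only shows the mechanics of exchanging individuals between subpopulations.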


International Conference on Cluster Computing | 2014

Feature selection in high-dimensional EEG data by parallel multi-objective optimization

Dragi Kimovski; Julio Ortega; Andrés Ortiz; Raul Baños

Feature selection is required in many applications that involve high-dimensional model building or classification problems, and many bioinformatics applications are of this type. Recently, approaches that treat supervised and unsupervised feature selection as a multi-objective optimization problem have been proposed. As the performance of unsupervised classification is evaluated through the quality of the groups or clusters obtained in the data set to be classified, it is difficult to define a single objective function that drives the selection of the features. Thus, several evaluation measures, and hence a multi-objective clustering characterization, can provide a suitable set of features for unsupervised classification. In this paper, we consider a parallel implementation of multi-objective feature selection that makes it possible to tackle complex classification problems, such as those with many features to select from, and specifically high-dimensional data sets with many more features than data items. We propose master-worker implementations of two different parallel evolutionary models: the parallel computation of the cost functions for the individuals in the population, and the parallel execution of evolutionary multi-objective procedures on subpopulations. The experiments, carried out on different benchmarks, including some related to feature selection in the classification of EEG (electroencephalogram) signals for BCI (Brain-Computer Interface) applications, show the benefits of parallel processing not only for decreasing the running time, but also for improving the solution quality.
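The first parallel model mentioned above, parallel computation of the cost functions, follows a master-worker pattern: the master farms out one fitness evaluation per individual and collects the objective vectors. A minimal sketch, with an entirely illustrative fitness function (the paper's cost functions operate on EEG data, not on this toy): for portability the sketch uses threads, whereas a real implementation would typically use processes or MPI workers.

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(mask):
    # Stand-in cost functions for one individual (a feature mask):
    # objective 1: number of selected features; objective 2: a toy error term.
    n_sel = sum(mask)
    error = 1.0 / (1 + n_sel)   # purely illustrative, not a real classifier error
    return (n_sel, error)

def evaluate_population(population, workers=4):
    # Master-worker step: each individual's evaluation is an independent task;
    # map() preserves the population order when gathering results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))

population = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 1]]
scores = evaluate_population(population)
# scores == [(3, 0.25), (1, 0.5), (4, 0.2)]
```

Because each evaluation is independent, this model scales with the population size, which is why the abstract uses it as the baseline against which subpopulation-based parallelism is compared.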


Parallel, Distributed and Network-Based Processing | 2017

Use Cases towards a Decentralized Repository for Transparent and Efficient Virtual Machine Operations

Radu Prodan; Thomas Fahringer; Dragi Kimovski; Gabor Kecskemeti; Attila Csaba Marosi; Vlado Stankovski; Jonathan Becedas; Jose Julio Ramos; Craig Sheridan; Darren Whigham; Carlos Rodrigo Rubia Marcos

Virtualization is a key enabling technology in Cloud computing that allows users to run multiple virtual machines (VMs) with their own application environment on top of physical hardware. It permits scaling up and down of applications by elastic on-demand provisioning of VMs in response to their variable load to achieve increased utilization efficiency at a lower operational cost, while guaranteeing the desired level of Quality of Service (QoS) to the end-users. Typically, VMs are created using provider-specific templates that are stored in proprietary repositories, leading to provider lock-in and hampering portability or simultaneous usage of multiple federated Clouds. In this context, optimization at the level of the virtual machine image is needed both by the applications and by the underlying Cloud providers for improved resource usage, operational costs, elasticity, storage use, and other desired QoS-related features. To overcome those issues, the ENTICE project researches and creates a novel VM repository and operational environment for federated Cloud infrastructures. There exists a large variety of industrial applications that can strongly benefit from the ENTICE environment. In this paper we present a selection of complementary use cases that drive the definition of the essential requirements for the ENTICE environment and, more importantly, validate the introduced innovations.


Semantics, Knowledge and Grid | 2016

Using Constraint-Based Reasoning for Multi-objective Optimisation of the ENTICE Environment

Sandi Gec; Dragi Kimovski; Radu Prodan; Vlado Stankovski

ENTICE is a set of innovative software services currently being developed to facilitate efficient operation of distributed Virtual Machine and container image (VMI/CI) repositories. Its operation necessitates various decision making, for which a solver for Multi-Objective Optimisation (MOO) problems is used. However, the solver is a bottleneck due to its computational complexity. In order to reduce the search space for the solver, we have developed an ontology and a corresponding Knowledge Base (KB) that underpin the operation of the ENTICE environment. The Knowledge Base is built on the Jena Fuseki technology. To address the problem of computational complexity, constraint-based queries and different reasoning mechanisms are applied. The Knowledge Base services are then integrated with other ENTICE services, including the MOO solver. It is shown that this approach significantly reduces the computational complexity of the MOO, shortens the optimisation time, and makes it possible to use the MOO for both strategic (decisions that can be made up to one day in advance) and dynamic (decisions requiring a response within one minute) decision making.


Concurrency and Computation: Practice and Experience | 2018

Distributed Environment for Efficient Virtual Machine Image Management in Federated Cloud Architectures

Dragi Kimovski; Attila Csaba Marosi; Sandi Gec; Nishant Saurabh; Attila Kertesz; Gabor Kecskemeti; Vlado Stankovski; Radu Prodan

The use of virtual machines (VMs) in Cloud computing provides various benefits in the overall software engineering lifecycle. These include efficient elasticity mechanisms resulting in higher resource utilization and lower operational costs. The VMs as software artifacts are created using provider-specific templates, called virtual machine images (VMI), and are stored in proprietary or public repositories for further use. However, some technology-specific choices can limit the interoperability among various Cloud providers and bundle the VMIs with nonessential or redundant software packages, leading to increased storage size, prolonged VMI delivery, slow VMI instantiation, and ultimately vendor lock-in. To address these challenges, we present a set of novel functionalities and design approaches for efficient operation of distributed VMI repositories, specifically tailored for enabling (1) simplified creation of lightweight and size-optimized VMIs tuned for specific application requirements; (2) multi-objective VMI repository optimization; and (3) an efficient reasoning mechanism to help optimize complex VMI operations. The evaluation results confirm that the presented approaches can enable VMI size reduction by up to 55%, while trimming the image creation time by 66%. Furthermore, the repository optimization algorithms can reduce the VMI delivery time by up to 51% and cut down the storage expenses by 3%. Moreover, by implementing replication strategies, the optimization algorithms can increase the system reliability by 74%.


International Work-Conference on Artificial and Natural Neural Networks | 2017

A Parallel Island Approach to Multiobjective Feature Selection for Brain-Computer Interfaces

Julio Ortega; Dragi Kimovski; John Q. Gan; Andrés Ortiz; Miguel Damas

This paper shows that parallel processing is useful for feature selection in brain-computer interfacing (BCI) tasks. The classification problems arising in such applications usually involve a relatively small number of high-dimensional patterns, and, as curse-of-dimensionality issues have to be taken into account, feature selection is an important requirement for building suitable classifiers. As the number of features defining the search space is high, distributing the search space among different processors can help find better solutions while requiring a similar or even smaller amount of execution time than sequential counterpart procedures. We have implemented a parallel evolutionary multiobjective optimization procedure for feature selection based on the island model, in which the individuals are distributed among different subpopulations that evolve independently and interchange individuals after a given number of generations. The experimental results show improvements in both computing time and the quality of EEG classification with features extracted by multiresolution analysis (MRA), an approach widely used in the BCI field with useful properties for both temporal and spectral signal analysis.
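At the core of any multiobjective feature selection procedure like the one above is ranking candidate feature subsets by Pareto dominance and keeping the non-dominated front. A minimal sketch, using made-up (n_features, classification_error) pairs rather than the paper's EEG results:

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (both objectives are minimised here).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep only points no other point dominates.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Illustrative (number_of_features, classification_error) pairs.
candidates = [(3, 0.20), (5, 0.10), (4, 0.25), (7, 0.10), (3, 0.30)]
front = pareto_front(candidates)
# front == [(3, 0.20), (5, 0.10)]: (4, 0.25) and (3, 0.30) are dominated by
# (3, 0.20), and (7, 0.10) is dominated by (5, 0.10).
```

In the island model, each subpopulation maintains its own approximation of this front, and migration lets good trade-off solutions spread between islands.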


Concurrency and Computation: Practice and Experience | 2017

Semantic approach for multi‐objective optimisation of the ENTICE distributed Virtual Machine and container images repository

Sandi Gec; Dragi Kimovski; Uroš Paščinski; Radu Prodan; Vlado Stankovski

New software engineering technologies facilitate development of applications from reusable software components, such as Virtual Machine and container images (VMI/CIs). Key requirements for the storage of VMI/CIs in public or private repositories are their fast delivery and cloud deployment times. ENTICE is a federated storage facility for VMI/CIs that provides optimisation mechanisms through the use of fragmentation and replication of images and a Pareto Multi-Objective Optimisation (MO) solver. The operation of the MO solver is, however, time-consuming due to the size and complexity of the metadata, specifying various non-functional requirements for the management of VMI/CIs, such as geolocation, operational cost, and delivery time. In this work, we address this problem with a new semantic approach, which uses an ontology of the federated ENTICE repository, a knowledge base, and a constraint-based reasoning mechanism. Open Source technologies such as Protégé, Jena Fuseki, and Pellet were used to develop the solution. Two specific use cases, (1) offline repository optimisation and (2) online redistribution of VMI/CIs, are presented in detail. In both use cases, data from the knowledge base are provided to the MO solver. It is shown that Pellet-based reasoning can be used to reduce the input metadata size used in the optimisation process by taking into consideration the geographic location of the VMI/CIs and the provenance of the VMI fragments. This process leads to a reduction of the input metadata size for the MO solver by up to 60% and a reduction of the total optimisation time of the MO solver by up to 68%, while fully preserving the quality of the solution, which is significant.
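The core idea, pruning the solver's input with constraints before optimisation, can be sketched without any semantic-web machinery. The records, field names, and constraint values below are hypothetical stand-ins for the KB's VMI/CI metadata, not ENTICE's actual schema:

```python
# Hypothetical VMI metadata records; field names are illustrative only.
metadata = [
    {"image": "vmi-a", "geo": "EU",   "cost": 0.8, "delivery_s": 12},
    {"image": "vmi-b", "geo": "US",   "cost": 0.5, "delivery_s": 30},
    {"image": "vmi-c", "geo": "EU",   "cost": 1.1, "delivery_s": 9},
    {"image": "vmi-d", "geo": "ASIA", "cost": 0.4, "delivery_s": 40},
]

def prune(records, geo, max_delivery_s):
    # Constraint-based pre-filter standing in for the KB reasoning step:
    # only records satisfying the constraints reach the MO solver.
    return [r for r in records
            if r["geo"] == geo and r["delivery_s"] <= max_delivery_s]

solver_input = prune(metadata, geo="EU", max_delivery_s=20)
reduction = 1 - len(solver_input) / len(metadata)   # 0.5, i.e. 50% smaller input
```

In ENTICE the filtering is done by reasoning over an ontology rather than a hard-coded predicate, but the payoff is the same: a smaller input set shortens the MO solver's run without discarding feasible solutions.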


International Journal of Parallel, Emergent and Distributed Systems | 2018

A new model for cloud elastic services efficiency

Sasko Ristov; Roland Mathá; Dragi Kimovski; Radu Prodan; Marjan Gusev

The speedup measures the improvement in performance when the computational resources are scaled. The efficiency, on the other hand, is the ratio between the achieved speedup and the number of scaled computational resources (processors). Both parameters, defined according to Amdahl's Law, provide very important information about the performance of a computer system with scaled resources compared with a computer system with a single processor. However, as the load of cloud elastic services is variable, apart from the scaled resources, it is vital to analyse the load in order to determine which system is more effective and efficient. Unfortunately, neither the speedup nor the efficiency is sufficient for proper modeling of cloud elastic services, as both assume that the system's resources are scaled while the load is constant. In this paper, we extend the scaling of resources and define two additional scaled systems by (i) scaling the load and (ii) scaling both the load and the resources. We introduce a model to determine the efficiency of each scaled system, which can be used to compare the efficiencies of all scaled systems, regardless of whether they are scaled in terms of load or resources. We have evaluated the model using Windows Azure, and the experimental results confirm the theoretical analysis. Although one can argue that web services are scalable and comply with Gustafson's Law only, we provide a taxonomy that classifies scaled systems (scaled resources R, scaled load L, and the combination RL) based on their compliance with both Amdahl's and Gustafson's laws. Our model extends the current definition of efficiency according to Amdahl's Law, which assumes scaling of the resources but not of the load.
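The two classical baselines the paper extends can be stated directly. For a parallelisable fraction p and n processors, Amdahl's Law gives the speedup when resources are scaled and the load is fixed, while Gustafson's Law gives the scaled speedup when the load grows with the resources. A small sketch of both, with illustrative values:

```python
def amdahl_speedup(p, n):
    # Amdahl's Law: resources scaled to n processors, load fixed;
    # p is the parallelisable fraction of the work.
    return 1.0 / ((1.0 - p) + p / n)

def efficiency(p, n):
    # Efficiency = achieved speedup / number of scaled processors.
    return amdahl_speedup(p, n) / n

def gustafson_speedup(p, n):
    # Gustafson's Law: the load is scaled together with the resources.
    return n - (1.0 - p) * (n - 1)

p, n = 0.9, 10
s = amdahl_speedup(p, n)      # 1 / (0.1 + 0.09) ≈ 5.26
e = efficiency(p, n)          # ≈ 0.53, i.e. half the scaled capacity is wasted
g = gustafson_speedup(p, n)   # 10 - 0.1 * 9 = 9.1
```

The gap between s and g for the same p and n is exactly why the paper argues that efficiency must be modelled per scaled system (resources R, load L, or both RL) rather than under Amdahl's fixed-load assumption alone.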


IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing | 2017

A Two-Stage Multi-Objective Optimization of Erasure Coding in Overlay Networks

Nishant Saurabh; Dragi Kimovski; Francesco Gaetano; Radu Prodan

In recent years, overlay networks have emerged as a crucial platform for the deployment of various distributed applications. Many of these applications rely on data redundancy techniques, such as erasure coding, to achieve higher fault tolerance. However, erasure coding applied in large-scale overlay networks entails various overheads in terms of storage, latency, and data rebuilding costs. These overheads are largely attributed to the selected erasure coding scheme and the encoded chunk placement in the overlay network. This paper explores a multi-objective optimization approach for identifying appropriate erasure coding schemes and encoded chunk placements in overlay networks. The uniqueness of our approach lies in the consideration of multiple erasure coding objectives, such as encoding rate and redundancy factor, together with overlay network performance characteristics like storage consumption, latency, and system reliability. Our approach enables a variety of trade-off solutions with respect to these objectives to be identified in the form of a Pareto front. To solve this problem, we propose a novel two-stage multi-objective evolutionary algorithm, where the first stage determines the optimal set of encoding schemes, while the second stage optimizes the placement of the corresponding encoded data chunks in overlay networks of varying sizes. We study the performance of our method by generating and analyzing the Pareto-optimal sets of trade-off solutions. Experimental results demonstrate that the Pareto-optimal set produced by our multi-objective approach includes and even dominates the chunk placements delivered by a related state-of-the-art weighted-sum method.
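The first-stage objectives mentioned above, encoding rate and redundancy factor, follow directly from the (k, m) parameters of an erasure code: k data chunks plus m parity chunks. A minimal sketch of the per-scheme metrics the optimizer trades off (the candidate schemes are illustrative, not the paper's experimental set):

```python
def scheme_metrics(k, m):
    # For a (k, m) erasure code: k data chunks, m parity chunks.
    return {
        "encoding_rate": k / (k + m),       # useful fraction of stored data
        "redundancy_factor": (k + m) / k,   # storage blow-up versus raw data
        "fault_tolerance": m,               # chunk losses the code survives
    }

# Illustrative candidate schemes for the first optimization stage.
candidates = [(4, 2), (6, 3), (10, 4)]
table = {km: scheme_metrics(*km) for km in candidates}
# e.g. a (10, 4) code stores 1.4x the raw data and tolerates 4 lost chunks.
```

Higher m buys fault tolerance at the price of a larger redundancy factor, which is precisely the storage/reliability trade-off the Pareto front in the paper captures; the second stage then decides where those k + m chunks are placed in the overlay.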

Collaboration


Dive into Dragi Kimovski's collaborations.

Top Co-Authors

Radu Prodan
University of Innsbruck

Sandi Gec
University of Ljubljana

Gabor Kecskemeti
Liverpool John Moores University