Publication


Featured research published by Kornilios Kourtis.


Nucleic Acids Research | 2009

DIANA-microT web server: elucidating microRNA functions through target prediction

Manolis Maragkakis; Martin Reczko; Victor A. Simossis; Panagiotis Alexiou; Giorgos L. Papadopoulos; Theodore Dalamagas; Giorgos Giannopoulos; Georgios I. Goumas; Evangelos Koukis; Kornilios Kourtis; Thanasis Vergoulis; Nectarios Koziris; Timos K. Sellis; Panayotis Tsanakas; Artemis G. Hatzigeorgiou

Computational microRNA (miRNA) target prediction is one of the key means for deciphering the role of miRNAs in development and disease. Here, we present the DIANA-microT web server as the user interface to the DIANA-microT 3.0 miRNA target prediction algorithm. The web server provides extensive information for predicted miRNA:target gene interactions through a user-friendly interface with extensive connectivity to online biological resources. Target gene and miRNA functions may be elucidated through automated bibliographic searches, and functional information is accessible through Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways. The web server offers links to nomenclature, sequence, and protein databases, and users can search for targeted genes using different nomenclatures or functional features, such as a gene's possible involvement in biological pathways. The target prediction algorithm supports parameters calculated individually for each miRNA:target gene interaction and provides a signal-to-noise ratio and a precision score that help evaluate the significance of the predicted results. Using a set of miRNA targets recently identified through the pSILAC method, the performance of several computational target prediction programs was assessed. In this assessment, DIANA-microT 3.0 achieved the highest ratio of correctly predicted targets over all predicted targets, at 66%. The DIANA-microT web server is freely available at www.microrna.gr/microT.


BMC Bioinformatics | 2009

Accurate microRNA target prediction correlates with protein repression levels

Manolis Maragkakis; Panagiotis Alexiou; Giorgos L. Papadopoulos; Martin Reczko; Theodore Dalamagas; Giorgos Giannopoulos; George I. Goumas; Evangelos Koukis; Kornilios Kourtis; Victor A. Simossis; Praveen Sethupathy; Thanasis Vergoulis; Nectarios Koziris; Timos K. Sellis; Panayotis Tsanakas; Artemis G. Hatzigeorgiou

Background: MicroRNAs are small endogenously expressed non-coding RNA molecules that regulate target gene expression through translation repression or messenger RNA degradation. MicroRNA regulation is performed through pairing of the microRNA to sites in the messenger RNA of protein coding genes. Since experimental identification of miRNA target genes poses difficulties, computational microRNA target prediction is one of the key means in deciphering the role of microRNAs in development and disease.

Results: DIANA-microT 3.0 is an algorithm for microRNA target prediction which is based on several parameters calculated individually for each microRNA and combines conserved and non-conserved microRNA recognition elements into a final prediction score, which correlates with protein production fold change. Specifically, for each predicted interaction the program reports a signal-to-noise ratio and a precision score which can be used as an indication of the false positive rate of the prediction.

Conclusion: Recently, several computational target prediction programs were benchmarked based on a set of microRNA target genes identified by the pSILAC method. In this assessment DIANA-microT 3.0 was found to achieve the highest precision among the most widely used microRNA target prediction programs, reaching approximately 66%. The DIANA-microT 3.0 prediction results are available online in a user-friendly web server at http://www.microrna.gr/microT.


The Journal of Supercomputing | 2009

Performance evaluation of the sparse matrix-vector multiplication on modern architectures

Georgios I. Goumas; Kornilios Kourtis; Nikos Anastopoulos; Vasileios Karakasis; Nectarios Koziris

In this paper, we revisit the performance issues of the widely used sparse matrix-vector multiplication (SpMxV) kernel on modern microarchitectures. Previous scientific work reports a number of different factors that may significantly reduce performance. However, the interaction of these factors with the underlying architectural characteristics is not clearly understood, a fact that may lead to misguided, and thus unsuccessful attempts for optimization. In order to gain an insight into the details of SpMxV performance, we conduct a suite of experiments on a rich set of matrices for three different commodity hardware platforms. In addition, we investigate the parallel version of the kernel and report on the corresponding performance results and their relation to each architecture’s specific multithreaded configuration. Based on our experiments, we extract useful conclusions that can serve as guidelines for the optimization process of both single and multithreaded versions of the kernel.
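As a concrete illustration of the kernel studied above, the following sketch multiplies a sparse matrix stored in the common Compressed Sparse Row (CSR) format by a dense vector. It is a minimal reference version (function and variable names are ours), not the authors' optimized code.

```python
# Sparse matrix-vector multiplication (y = A*x) over the CSR format:
# per-nonzero values, their column indices, and per-row start offsets.

def csr_spmv(row_ptr, col_idx, values, x):
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        # Nonzeros of row i occupy positions row_ptr[i] .. row_ptr[i+1]-1.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# A = [[4, 0, 1],
#      [0, 2, 0],
#      [3, 0, 5]]
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
values = [4.0, 1.0, 2.0, 3.0, 5.0]
print(csr_spmv(row_ptr, col_idx, values, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```

Note the indirect access `x[col_idx[k]]`: it is this streaming, irregular access pattern that makes SpMxV memory-bound on the platforms studied in the paper.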


Computing Frontiers | 2008

Optimizing sparse matrix-vector multiplication using index and value compression

Kornilios Kourtis; Georgios I. Goumas; Nectarios Koziris

Previous research work has identified memory bandwidth as the main bottleneck of the ubiquitous Sparse Matrix-Vector Multiplication kernel. To attack this problem, we aim at reducing the overall data volume of the algorithm. Typical sparse matrix representation schemes store only the non-zero elements of the matrix and employ additional indexing information to properly iterate over these elements. In this paper we propose two distinct compression methods targeting index and numerical values, respectively. We perform a set of experiments on a large real-world matrix set and demonstrate that the index compression method can be applied successfully to a wide range of matrices. Moreover, the value compression method is able to achieve impressive speedups in a more limited yet important class of sparse matrices that contain a small number of distinct values.
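The index-compression idea described above can be illustrated with delta encoding of column indices: within a row, consecutive indices tend to be close together, so storing differences instead of absolute values shrinks the index data. The sketch below is our own simplified illustration with hypothetical helper names, not the paper's implementation.

```python
# Delta-encode CSR column indices per row: each stored value is the
# distance from the previous column index in the same row (the first
# entry of a row is its distance from column 0). Small deltas can then
# be packed into fewer bytes than full-width absolute indices.

def delta_encode(col_idx, row_ptr):
    deltas = []
    for i in range(len(row_ptr) - 1):
        prev = 0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            deltas.append(col_idx[k] - prev)
            prev = col_idx[k]
    return deltas

def delta_decode(deltas, row_ptr):
    col_idx = []
    pos = 0
    for i in range(len(row_ptr) - 1):
        prev = 0
        for _ in range(row_ptr[i + 1] - row_ptr[i]):
            prev += deltas[pos]
            col_idx.append(prev)
            pos += 1
    return col_idx

row_ptr = [0, 3, 5]
col_idx = [10, 11, 12, 4, 100]
enc = delta_encode(col_idx, row_ptr)
print(enc)  # [10, 1, 1, 4, 96]
assert delta_decode(enc, row_ptr) == col_idx
```

In an actual kernel the decoding would happen on the fly during the multiplication, trading a little extra arithmetic for reduced memory traffic.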


Parallel, Distributed and Network-Based Processing | 2008

Understanding the Performance of Sparse Matrix-Vector Multiplication

Georgios I. Goumas; Kornilios Kourtis; Nikos Anastopoulos; Vasileios Karakasis; Nectarios Koziris

In this paper we revisit the performance issues of the widely used sparse matrix-vector multiplication (SpMxV) kernel on modern microarchitectures. Previous scientific work reports a number of different factors that may significantly reduce performance. However, the interaction of these factors with the underlying architectural characteristics is not clearly understood, a fact that may lead to misguided and thus unsuccessful attempts for optimization. In order to gain insight into the details of SpMxV performance, we conduct a suite of experiments on a rich set of matrices for three different commodity hardware platforms. Based on our experiments, we extract useful conclusions that can serve as guidelines for the subsequent optimization process of the kernel.


ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming | 2011

CSX: an extended compression format for SpMV on shared memory systems

Kornilios Kourtis; Vasileios Karakasis; Georgios I. Goumas; Nectarios Koziris

The Sparse Matrix-Vector multiplication (SpMV) kernel scales poorly on shared memory systems with multiple processing units due to the streaming nature of its data access pattern. Previous research has demonstrated that an effective strategy to improve the kernel's performance is to drastically reduce the data volume involved in the computations. Since the storage formats for sparse matrices include metadata describing the structure of non-zero elements within the matrix, we propose a generalized approach to compress metadata by exploiting substructures within the matrix. We call the proposed storage format Compressed Sparse eXtended (CSX). In our implementation we employ runtime code generation to construct specialized SpMV routines for each matrix. Experimental evaluation on two shared memory systems for 15 sparse matrices demonstrates significant performance gains as the number of participating cores increases. Regarding the cost of CSX construction, we propose several strategies which trade performance for preprocessing cost, making CSX applicable both to online and offline preprocessing.
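The substructure exploitation behind CSX can be illustrated on its simplest case: horizontal runs of nonzeros at consecutive columns, which can be stored as (start, length) units instead of one column index per element. The sketch below is a simplified illustration with hypothetical names, not the CSX implementation, which detects many more substructure types and generates specialized code per matrix.

```python
# Group the sorted column indices of one matrix row into horizontal
# runs: maximal stretches of consecutive columns. Each run is encoded
# as (start_column, length), replacing `length` indices with two values.

def find_horizontal_runs(cols):
    runs = []
    start = prev = cols[0]
    for c in cols[1:]:
        if c == prev + 1:
            prev = c  # run continues
        else:
            runs.append((start, prev - start + 1))
            start = prev = c  # new run begins
    runs.append((start, prev - start + 1))
    return runs

# Columns 2,3,4 form one run of length 3; 9 is a singleton; 15,16 a pair.
print(find_horizontal_runs([2, 3, 4, 9, 15, 16]))  # [(2, 3), (9, 1), (15, 2)]
```

Matrices dominated by such dense substructures need far less index metadata, which is exactly the data-volume reduction the abstract describes.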


International Conference on Parallel Processing | 2008

Improving the Performance of Multithreaded Sparse Matrix-Vector Multiplication Using Index and Value Compression

Kornilios Kourtis; Georgios I. Goumas; Nectarios Koziris

The sparse matrix-vector multiplication kernel exhibits limited potential for taking advantage of modern shared memory architectures due to its large memory bandwidth requirements. To decrease memory contention and improve the performance of the kernel we propose two compression schemes. The first, called CSR-DU, targets the reduction of the matrix structural data by applying coarse grain delta encoding for the column indices. The second scheme, called CSR-VI, targets the reduction of the numerical values using indirect indexing and can only be applied to matrices which contain a small number of unique values. Evaluation of both methods on a rich matrix set showed that they can significantly improve the performance of the multithreaded version of the kernel and achieve good scalability for large matrices.
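The indirect indexing behind CSR-VI can be sketched as follows: the distinct numerical values are stored once in a table, and each nonzero element keeps only a small index into that table. This is a simplified illustration of the idea with hypothetical names, not the paper's code.

```python
# Build a value table plus per-element indices for a CSR values array.
# Profitable only when the number of distinct values is small, so the
# indices fit in a narrow integer type (e.g. one byte instead of an
# eight-byte double per nonzero).

def value_index(values):
    table = sorted(set(values))            # each distinct value stored once
    lookup = {v: i for i, v in enumerate(table)}
    idx = [lookup[v] for v in values]      # small index per nonzero
    return table, idx

values = [1.0, 5.0, 1.0, 1.0, 5.0, 2.0]
table, idx = value_index(values)
print(table)  # [1.0, 2.0, 5.0]
print(idx)    # [0, 2, 0, 0, 2, 1]
# The original values are recovered with one extra indirection:
assert [table[i] for i in idx] == values
```

During the multiplication the kernel reads `table[idx[k]]` instead of `values[k]`, cutting the memory traffic for the numerical data at the cost of one indirection.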


The Journal of Supercomputing | 2008

Exploring the performance limits of simultaneous multithreading for memory intensive applications

Evangelia Athanasaki; Nikos Anastopoulos; Kornilios Kourtis; Nectarios Koziris

Simultaneous multithreading (SMT) has been proposed to improve system throughput by overlapping instructions from multiple threads on a single wide-issue processor. Recent studies have demonstrated that diversity of simultaneously executed applications can bring significant performance gains due to SMT. However, the speedup of a single application that is parallelized into multiple threads is often sensitive to its inherent instruction level parallelism (ILP), as well as the efficiency of synchronization and communication mechanisms between its separate, but possibly dependent threads. Moreover, as these separate threads tend to put pressure on the same architectural resources, no significant speedup can be observed. In this paper, we evaluate and contrast thread-level parallelism (TLP) and speculative precomputation (SPR) techniques for a series of memory intensive codes executed on a specific SMT processor implementation. We explore the performance limits by evaluating the tradeoffs between ILP and TLP for various kinds of instruction streams. By obtaining knowledge on how such streams interact when executed simultaneously on the processor, and quantifying their presence within each application's threads, we try to interpret the observed performance for each application when parallelized according to the aforementioned techniques. In order to amplify this evaluation process, we also present results gathered from the performance monitoring hardware of the processor.


ACM Transactions on Architecture and Code Optimization | 2010

Exploiting compression opportunities to improve SpMxV performance on shared memory systems

Kornilios Kourtis; Georgios I. Goumas; Nectarios Koziris

The Sparse Matrix-Vector Multiplication (SpMxV) kernel exhibits poor scaling on shared memory systems, due to the streaming nature of its data access pattern. To decrease memory contention and improve kernel performance, we propose two compression schemes: CSR-DU, which targets the reduction of the matrix structural data by applying coarse-grained delta encoding, and CSR-VI, which targets the reduction of the values using indirect indexing, applicable to matrices with a small number of unique values. Thorough experimental evaluation of the proposed methods and their combination, on two modern shared memory systems, demonstrated that they can significantly improve multithreaded SpMxV performance over standard and state-of-the-art approaches.


International Conference on Parallel Architectures and Compilation Techniques | 2014

LCA: a memory link and cache-aware co-scheduling approach for CMPs

Alexandros-Herodotos Haritatos; Georgios I. Goumas; Nikos Anastopoulos; Konstantinos Nikas; Kornilios Kourtis; Nectarios Koziris

This paper presents LCA, a memory Link and Cache-Aware co-scheduling approach for CMPs. It is based on a novel application classification scheme that monitors resource utilization across the entire memory hierarchy from main memory down to CPU cores. This enables us to predict application interference accurately and support a co-scheduling algorithm that outperforms state-of-the-art scheduling policies both in terms of throughput and fairness. As LCA depends on information collected at runtime by existing monitoring mechanisms of modern processors, it can be easily incorporated in real-life co-scheduling scenarios with various application features and platform configurations.

Collaboration

Top co-authors of Kornilios Kourtis:

- Nectarios Koziris (National Technical University of Athens)
- Georgios I. Goumas (National Technical University of Athens)
- Nikos Anastopoulos (National Technical University of Athens)
- Evangelia Athanasaki (National Technical University of Athens)
- Vasileios Karakasis (National and Kapodistrian University of Athens)
- Evangelos Koukis (National and Kapodistrian University of Athens)
- Giorgos L. Papadopoulos (National Technical University of Athens)
- Martin Reczko (National Technical University of Athens)
- Panayotis Tsanakas (National Technical University of Athens)