Panagiotis Antonellis
University of Patras
Publications
Featured research published by Panagiotis Antonellis.
International Journal of Software Engineering & Applications | 2010
Yiannis Kanellopoulos; Panagiotis Antonellis; Dimitris Antoniou; Christos Makris; Evangelos Theodoridis; Christos Tjortjis; Nikos Tsirakis
This work proposes a methodology for evaluating the source code quality and static behaviour of a software system, based on the ISO/IEC-9126 standard. It uses elements automatically derived from source code, enhanced with expert knowledge in the form of quality characteristic rankings, allowing software engineers to assign weights to source code attributes. It is flexible in terms of the set of metrics and source code attributes employed, and even in terms of the ISO/IEC-9126 characteristics to be assessed. We applied the methodology in two case studies, involving five open-source systems and one proprietary system. Results demonstrated that the methodology can capture software quality trends and express expert perceptions concerning system quality in a quantitative and systematic manner.
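A minimal sketch of the weighting idea described above, assuming metric values have already been normalized and the weights are expert-assigned; the metric names and numbers are illustrative, not taken from the paper:

```python
# Illustrative sketch (not the paper's implementation): aggregating normalized
# source code metrics into one ISO/IEC-9126 characteristic score using
# expert-assigned weights. Metric names and weights below are assumptions.

def quality_score(metrics, weights):
    """Weighted average of normalized metric values for one characteristic."""
    total_weight = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total_weight

# Hypothetical metrics normalized to [0, 1] (1 = best) and expert weights
# for a 'maintainability'-style characteristic.
metrics = {"cyclomatic_complexity": 0.7, "comment_density": 0.4, "coupling": 0.6}
weights = {"cyclomatic_complexity": 3, "comment_density": 1, "coupling": 2}

print(f"maintainability ~ {quality_score(metrics, weights):.2f}")
```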
acm symposium on applied computing | 2008
Panagiotis Antonellis; Christos Makris; Nikos Tsirakis
In this paper we propose a unified clustering algorithm for both homogeneous and heterogeneous XML documents. Depending on the type of the XML documents, the proposed algorithm modifies its distance metric in order to properly adapt to the special structural characteristics of homogeneous and heterogeneous XML documents. We compare the quality of the formed clusters with those of one of the latest XML clustering algorithms and show that our algorithm outperforms it in the case of both homogeneous and heterogeneous XML documents.
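A minimal sketch of the adaptive-distance idea described above: one clustering routine that swaps its distance function depending on whether the XML collection is homogeneous or heterogeneous. Both distance functions below are illustrative placeholders, not the paper's actual metrics.

```python
# Documents are represented here as sets of root-to-node paths; the paper's
# structural representation may differ.

def structural_distance(paths_a, paths_b):
    """Coarse metric for heterogeneous collections: Jaccard dissimilarity."""
    if not paths_a and not paths_b:
        return 0.0
    return 1.0 - len(paths_a & paths_b) / len(paths_a | paths_b)

def fine_grained_distance(paths_a, paths_b):
    """Placeholder for a finer metric suited to structurally similar documents."""
    shared = len(paths_a & paths_b)
    return 1.0 - shared / max(len(paths_a), len(paths_b))

def xml_distance(paths_a, paths_b, homogeneous):
    """Pick the metric according to the type of the collection."""
    metric = fine_grained_distance if homogeneous else structural_distance
    return metric(paths_a, paths_b)

d1 = {"/a", "/a/b", "/a/c"}
d2 = {"/a", "/a/b", "/a/d"}
print(xml_distance(d1, d2, homogeneous=True), xml_distance(d1, d2, homogeneous=False))
```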
international conference on tools with artificial intelligence | 2007
Yiannis Kanellopoulos; Panagiotis Antonellis; Christos Tjortjis; Christos Makris
Clustering is particularly useful in problems where there is little prior information about the data under analysis. This is usually the case when attempting to evaluate a software system's maintainability, as many dimensions must be taken into account in order to reach a conclusion. On the other hand, partitional clustering algorithms suffer from sensitivity to noise and to the initial partitioning. In this paper we propose a novel partitional clustering algorithm, k-Attractors. It employs maximal frequent itemset discovery and partitioning in order to define the number of desired clusters and the initial cluster attractors. It then utilizes a similarity measure adapted to the way the initial attractors are determined. We apply the k-Attractors algorithm to two custom industrial systems and compare it with WEKA's implementation of k-Means. We present preliminary results showing that our approach is better in terms of clustering accuracy and speed.
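A rough sketch of the initialization idea as read from the abstract (not the authors' code): maximal frequent itemsets found in the discretized data define both k and the initial "attractors", and points are then assigned to the most similar attractor.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """Naive enumeration of frequent itemsets (fine for a small example)."""
    counts = Counter()
    for t in transactions:
        for r in range(1, len(t) + 1):
            for combo in combinations(sorted(t), r):
                counts[combo] += 1
    return {s for s, c in counts.items() if c >= min_support}

def maximal(itemsets):
    """Keep only itemsets not contained in a larger frequent itemset."""
    return [s for s in itemsets
            if not any(set(s) < set(o) for o in itemsets)]

def assign(point, attractors):
    """Toy similarity: overlap between the point's items and the attractor."""
    return max(range(len(attractors)),
               key=lambda i: len(set(point) & set(attractors[i])))

transactions = [{"a", "b"}, {"a", "b", "c"}, {"d", "e"}, {"d", "e"}, {"a", "b"}]
attractors = maximal(frequent_itemsets(transactions, min_support=2))
print("k =", len(attractors), "attractors =", attractors)
print("cluster of {'a','c'}:", assign({"a", "c"}, attractors))
```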
Information Processing Letters | 2009
Panagiotis Antonellis; Christos Makris; Nikos Tsirakis
Clustering is a classic problem in the machine learning and pattern recognition area; however, a few complications arise when we try to transfer proposed solutions to the data stream model. New algorithms have recently been proposed for the basic clustering problem on massive data sets that produce an approximate solution while using memory efficiently, memory being the most critical resource for streaming computation. In this paper, based on these solutions, we present a new model for clustering clickstream data that applies three different phases in the data processing and is validated through a set of experiments.
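A very loose sketch of a multi-phase streaming pipeline in the spirit described above. The abstract does not spell out the three phases, so the split below (buffer, summarize, cluster offline) is purely an assumption for illustration; the point is that memory stays bounded by the window size.

```python
from collections import deque

class ClickstreamSketch:
    """Bounded-memory, multi-phase sketch: buffer -> summarize -> cluster offline."""

    def __init__(self, window_size=1000):
        self.window = deque(maxlen=window_size)  # phase 1: bounded raw buffer
        self.summaries = []                      # phase 2 output: compact summaries

    def observe(self, click_value):
        """Phase 1: buffer numeric clickstream features in a bounded window."""
        self.window.append(click_value)
        if len(self.window) == self.window.maxlen:
            self._summarize()

    def _summarize(self):
        """Phase 2: compress the full window into one summary (here, its mean)."""
        self.summaries.append(sum(self.window) / len(self.window))
        self.window.clear()

    def summaries_for_offline_clustering(self):
        """Phase 3 would run a conventional clustering algorithm over these
        compact summaries instead of over the raw stream."""
        return list(self.summaries)

sketch = ClickstreamSketch(window_size=4)
for value in [0.1, 0.2, 0.15, 0.3, 5.0, 5.2, 4.9, 5.1]:
    sketch.observe(value)
print(sketch.summaries_for_offline_clustering())   # -> [0.1875, 5.05]
```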
International Journal of Web Engineering and Technology | 2008
Panagiotis Antonellis; Christos Makris
Information-filtering systems constitute a critical component of modern information-seeking applications. As the number of users grows and the amount of information available becomes ever larger, it is imperative to employ scalable and efficient representation and filtering techniques. Typically, the use of eXtensible Markup Language (XML) representation entails profile representation with the XPath query language and the employment of efficient heuristic techniques for constraining the complexity of the filtering mechanism. In this paper, we propose an efficient technique for matching user profiles that is based on holistic twig-matching algorithms and is more effective, in terms of time and space complexity, than previous techniques. The proposed algorithm is able to handle ordered matching of user profiles, while its main advantage is a representation based on Prüfer sequences that permits the effective investigation of node relationships. Experimental results showed that the proposed algorithm outperforms previous XML filtering algorithms in both space and time.
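A toy illustration of turning an XML tree into a Prüfer-style label sequence, loosely in the spirit of the representation mentioned above; the paper's exact encoding may differ. Nodes are numbered in post-order and each non-root node contributes its parent's label to the sequence.

```python
import xml.etree.ElementTree as ET

def prufer_label_sequence(xml_text):
    root = ET.fromstring(xml_text)
    parent = {}
    post_order = []

    def visit(node):
        for child in node:
            parent[child] = node
            visit(child)
        post_order.append(node)

    visit(root)
    # For each node except the root (taken in post-order, i.e. leaves first),
    # record the tag of its parent: a simple Prüfer-like sequence of labels.
    return [parent[n].tag for n in post_order if n is not root]

print(prufer_label_sequence("<a><b><d/></b><c/></a>"))
# -> ['b', 'a', 'a']
```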
database and expert systems applications | 2009
Panagiotis Antonellis; Christos Makris; Nikos Tsirakis
Peer-to-Peer (P2P) data integration combines the P2P infrastructure with traditional schema-based data integration techniques. Some of the primary problems in this research area concern the techniques used for querying, indexing and distributing documents among peers in a network, especially when the documents are in XML format. To address this problem, we describe an XML P2P system that efficiently distributes a set of clustered XML documents in a P2P network in order to speed up user queries. The novelty of the proposed system lies in the efficient distribution of the XML documents and the construction of an appropriate virtual index on top of the network peers.
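A hedged sketch of the routing idea: clustered documents are placed on peers, and a lightweight "virtual index" maps a cluster's signature (here simply its set of element tags) to the peers hosting it, so a query is forwarded only to relevant peers. The data structures and signature choice are illustrative assumptions, not the system's actual design.

```python
from collections import defaultdict

virtual_index = defaultdict(set)   # tag in a cluster signature -> peers holding it

def publish_cluster(peer_id, cluster_tags):
    """Register a peer's cluster under every tag in the cluster's signature."""
    for tag in cluster_tags:
        virtual_index[tag].add(peer_id)

def route_query(query_tags):
    """Return peers whose clusters contain every tag referenced by the query."""
    candidates = None
    for tag in query_tags:
        peers = virtual_index.get(tag, set())
        candidates = peers if candidates is None else candidates & peers
    return candidates or set()

publish_cluster("peer-1", {"book", "author", "title"})
publish_cluster("peer-2", {"article", "author"})
print(route_query({"author", "title"}))   # -> {'peer-1'}
```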
conference on software maintenance and reengineering | 2009
Panagiotis Antonellis; Dimitris Antoniou; Yiannis Kanellopoulos; Christos Makris; Christos Tjortjis; Vangelis Theodoridis; Nikos Tsirakis
The aim of the Code4Thought project was to deliver a tool-supported methodology that would facilitate the evaluation of a software product's quality according to the ISO/IEC-9126 software engineering quality standard. It was a joint collaboration between Dynacomp S.A. and the Laboratory for Graphics, Multimedia and GIS of the Department of Computer Engineering and Informatics of the University of Patras. The Code4Thought project focused its research on extending the ISO/IEC-9126 standard by employing additional metrics and developing new methods that help system evaluators define their own set of evaluation attributes. It also aimed to develop innovative, platform-independent methods for extracting elements and metrics from source code, and to design and implement new data mining algorithms tailored to the analysis of software engineering data.
database and expert systems applications | 2008
Panagiotis Antonellis; Christos Makris
Information filtering systems constitute a critical component of modern information-seeking applications. As the number of users grows and the information available becomes ever larger, it is crucial to employ scalable and efficient representation and filtering techniques. In this paper we propose an innovative XML filtering system that utilizes clustering of user profiles in order to reduce the filtering space and achieve sub-linear filtering time. The proposed system employs a unique sequence representation for user profiles and XML documents based on the depth-first traversal of the XML tree, together with an appropriate distance metric, in order to compare and cluster the user profiles and filter the incoming XML documents. Experimental results show that the proposed system outperforms previous approaches to XML filtering and achieves sub-linear filtering time.
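A minimal sketch of the sequence representation described above: a document (or profile twig) is flattened into its depth-first tag sequence, and a standard edit distance compares two sequences. The actual system's metric and clustering step are more involved; this only illustrates the representational idea.

```python
import xml.etree.ElementTree as ET

def dfs_sequence(xml_text):
    """Flatten an XML tree into its depth-first sequence of element tags."""
    root = ET.fromstring(xml_text)
    seq = []
    def visit(node):
        seq.append(node.tag)
        for child in node:
            visit(child)
    visit(root)
    return seq

def edit_distance(a, b):
    """Classic Levenshtein distance between two tag sequences (one-row DP)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[-1]

doc = dfs_sequence("<library><book><title/></book></library>")
profile = dfs_sequence("<library><book><author/></book></library>")
print(doc, profile, edit_distance(doc, profile))   # distance 1
```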
Journal of Computational Physics | 2017
Yannis Kallinderis; Eleni M. Lymperopoulou; Panagiotis Antonellis
Adaptive grid refinement/coarsening is an important method for achieving increased accuracy of flow simulations with reduced computing resources. Further, flow visualization of complex 3-D fields is a major task of both computational fluid dynamics (CFD) and experimental data analysis. A primary issue in adaptive simulations and flow visualization is the reliable detection of the local regions containing features of interest. A relatively wide spectrum of detection functions (sensors) is employed for representative flow cases, which include boundary layers, vortices, jets, wakes, shock waves, contact discontinuities, and expansions. The focus is on relatively simple sensors based on local flow field variation using 3-D general hybrid grids consisting of multiple types of elements. A quantitative approach for sensor evaluation and comparison is proposed and applied, accomplished via the employment of analytic flow fields. Automation and effectiveness of an adaptive grid or flow visualization process require the reliable determination of an appropriate threshold for the sensor. Statistical evaluation of the sensors' distributions yields a proposed empirical formula for the threshold. The qualified sensors, along with the automatic threshold determination, are tested on more complex flow cases exhibiting multiple flow features.
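A hedged sketch of the detection-sensor idea: a simple local-variation sensor (gradient magnitude of a field) is computed per cell and a statistical threshold flags "feature" cells. The mean-plus-k-sigma rule below is a generic placeholder, not the empirical formula derived in the paper.

```python
import numpy as np

def gradient_sensor(field, spacing=1.0):
    """Local-variation sensor: magnitude of the field's gradient."""
    gx, gy = np.gradient(field, spacing)
    return np.hypot(gx, gy)

def flag_features(sensor, k=2.0):
    """Flag cells whose sensor value exceeds mean + k * standard deviation."""
    threshold = sensor.mean() + k * sensor.std()
    return sensor > threshold

# Analytic test field with a sharp jump, loosely mimicking a shock/contact.
x = np.linspace(-1.0, 1.0, 200)
field = np.tanh(50.0 * x)[None, :] * np.ones((50, 1))
mask = flag_features(gradient_sensor(field, spacing=x[1] - x[0]))
print("flagged cells:", int(mask.sum()))
```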
Applied Artificial Intelligence | 2011
Yiannis Kanellopoulos; Panagiotis Antonellis; Christos Tjortjis; Christos Makris; Nikos Tsirakis
Clustering is a data analysis technique, particularly useful when there are many dimensions and little prior information about the data. Partitional clustering algorithms are efficient but suffer from sensitivity to the initial partition and to noise. We propose here k-attractors, a partitional clustering algorithm tailored to numeric data analysis. As a preprocessing (initialization) step, it uses maximal frequent item-set discovery and partitioning to define the number of clusters k and the initial cluster “attractors.” During its main phase the algorithm uses a distance measure, which is adapted with high precision to the way initial attractors are determined. We applied k-attractors as well as the k-means, EM, and FarthestFirst clustering algorithms to several datasets and compared the results. The comparison favored k-attractors in terms of convergence speed and cluster formation quality in most cases, as it outperforms these three algorithms except in cases of datasets with very small cardinality containing only a few frequent item sets. On the downside, its initialization phase adds an overhead that can be deemed acceptable only when it contributes significantly to the algorithm's accuracy.