Christos Dimou
Aristotle University of Thessaloniki
Publications
Featured research published by Christos Dimou.
Expert Systems With Applications | 2008
Alexandros Batzios; Christos Dimou; Andreas L. Symeonidis; Pericles A. Mitkas
Web crawling has become an important aspect of web search, as the WWW keeps growing and search engines strive to index the most important and up-to-date content. Many experimental approaches exist, but few actually try to model the current behavior of search engines, which is to crawl and refresh the sites they deem important much more frequently than others. BioCrawler mirrors this behavior on the semantic web by applying the learning strategies adopted in previous work on ecosystem simulation, called BioTope. BioCrawler employs the principles of BioTope's intelligent agents on the semantic web: it learns which sites are rich in semantic content and which sites link to them, and adjusts its crawling habits accordingly. In the end, it learns to behave much like state-of-the-art search engine crawlers do. However, BioCrawler reaches that behavior solely by exploiting on-page factors, rather than off-page factors such as the currently used link popularity.
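The abstract leaves BioCrawler's update rules unspecified, but the core idea, rescheduling visits according to an on-page semantic-richness score, can be sketched as follows. The scoring weights, the Site record and the interval formula below are illustrative assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Site:
    url: str
    richness: float = 0.0     # learned semantic-content score in [0, 1]
    interval_h: float = 24.0  # hours until the next visit

def semantic_richness(triples_found: int, links_to_rich_sites: int) -> float:
    """Toy on-page score: reward embedded RDF triples and links to
    sites already known to be semantically rich."""
    return min(1.0, 0.01 * triples_found + 0.05 * links_to_rich_sites)

def update_schedule(site: Site, triples: int, rich_links: int,
                    alpha: float = 0.3) -> None:
    """Blend the newest observation into the site's score, then shorten
    the revisit interval for rich sites and lengthen it for poor ones."""
    observed = semantic_richness(triples, rich_links)
    site.richness = (1 - alpha) * site.richness + alpha * observed
    site.interval_h = 1.0 + 24.0 * (1.0 - site.richness)  # 1 h .. 25 h

site = Site("http://example.org")
update_schedule(site, triples=120, rich_links=4)
print(f"{site.url}: richness={site.richness:.2f}, revisit in {site.interval_h:.1f} h")
```

The exponential blend keeps the score stable against one-off fluctuations while still letting persistently rich sites earn frequent revisits, which is the behavior the abstract attributes to state-of-the-art crawlers.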
Expert Systems With Applications | 2009
Christos Dimou; Andreas L. Symeonidis; Pericles A. Mitkas
Driven by the urgent need to thoroughly identify and accentuate the merits of agent technology, we present in this paper MEANDER, an integrated framework for evaluating the performance of agent-based systems. The proposed framework is based on the Agent Performance Evaluation (APE) methodology, which provides guidelines and representation tools for performance metrics, measurement collection and aggregation of measurements. MEANDER comprises a series of integrated software components that implement and automate various parts of the methodology and assist evaluators in their tasks. The main objective of MEANDER is to integrate performance evaluation processes into the entire development lifecycle, while clearly separating any evaluation-specific code from the application code at hand. In this paper, we describe in detail the architecture and functionality of the MEANDER components and test the framework's applicability to an existing multi-agent system.
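The abstract does not expose MEANDER's API, but its central design choice, keeping evaluation-specific code out of the application code, can be illustrated with a minimal sketch. The `Evaluator` class, the `metric` decorator and the metric name below are hypothetical stand-ins, not the framework's actual interface.

```python
import time
from collections import defaultdict
from functools import wraps

class Evaluator:
    """Collects measurements outside the application logic, in the spirit
    of MEANDER's separation of evaluation code from application code."""
    def __init__(self):
        self.samples = defaultdict(list)

    def metric(self, name):
        def decorate(fn):
            @wraps(fn)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                result = fn(*args, **kwargs)
                self.samples[name].append(time.perf_counter() - start)
                return result
            return wrapper
        return decorate

    def aggregate(self):
        """Fold raw samples into per-metric averages."""
        return {m: sum(v) / len(v) for m, v in self.samples.items() if v}

evaluator = Evaluator()

@evaluator.metric("negotiate_latency")
def negotiate(offer):          # stand-in for an agent behaviour
    return offer * 0.95

for price in (100, 120, 90):
    negotiate(price)
print(evaluator.aggregate())
```

The application function stays untouched apart from the decorator line, so evaluation instrumentation can be added or removed without editing the behaviour itself.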
International Conference on Integration of Knowledge Intensive Multi-Agent Systems | 2007
Christos Dimou; Andreas L. Symeonidis; Pericles A. Mitkas
As agent technology (AT) becomes a well-established engineering field of computing, the need for generalized, standardized methodologies for agent evaluation grows imperative. Despite the plethora of available development tools and theories that researchers in agent computing have access to, there is a remarkable lack of general metrics, tools, benchmarks and experimental methods for the formal validation and comparison of existing or newly developed systems. It is argued that AT has reached a certain degree of maturity, and that it is therefore feasible to move from ad-hoc, domain-specific evaluation methods to standardized, repeatable and easily verifiable procedures. In this paper, we outline a first attempt towards a generic evaluation methodology for MAS performance. Instead of following the research path towards defining more powerful mathematical description tools for determining intelligence and performance metrics, we adopt an engineering point of view on the problem, deploying a methodology that is both implementation- and domain-independent. The proposed methodology consists of a concise set of steps, novel theoretical representation tools and appropriate software tools that assist evaluators in selecting appropriate metrics and undertaking measurement and aggregation techniques for the system at hand.
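As a rough illustration of the first step, metrics selection, consider filtering a metric catalogue by the observables a system actually exposes. The catalogue contents and the `select_metrics` helper are invented for illustration; the paper's actual selection criteria are richer than a simple availability check.

```python
# Hypothetical metric catalogue: each metric declares the observables it needs.
CATALOGUE = {
    "response_time":   {"needs": {"timestamps"},              "unit": "s"},
    "message_volume":  {"needs": {"message_log"},             "unit": "msgs"},
    "task_success":    {"needs": {"task_outcomes"},           "unit": "%"},
    "plan_optimality": {"needs": {"task_outcomes", "oracle"}, "unit": "%"},
}

def select_metrics(observables: set[str]) -> list[str]:
    """Keep only the metrics whose required observables the system
    under evaluation actually exposes."""
    return [m for m, spec in CATALOGUE.items() if spec["needs"] <= observables]

# A MAS that logs messages and task outcomes but offers no optimality oracle:
print(select_metrics({"timestamps", "message_log", "task_outcomes"}))
# -> ['response_time', 'message_volume', 'task_success']
```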
Autonomous and Intelligent Systems | 2007
Christos Dimou; Andreas L. Symeonidis; Pericles A. Mitkas
As modern applications tend to stretch between large, ever-growing datasets and an increasing demand for meaningful content at the user end, more elaborate and sophisticated knowledge extraction technologies are needed. Towards this direction, the inherently contradicting technologies of deductive software agents and inductive data mining have been integrated in order to address knowledge-intensive problems. However, no generalized evaluation methodology exists for assessing the efficiency of such applications. On the one hand, existing data mining evaluation methods focus only on algorithmic precision, ignoring overall system performance issues. On the other hand, existing systems evaluation techniques are insufficient, as the emergent intelligent behavior of agents introduces unpredictable performance factors. In this paper, we present a generalized methodology for the performance evaluation of intelligent agents that employ knowledge models produced through data mining. The proposed methodology consists of concise steps for selecting appropriate metrics, defining measurement methodologies and aggregating the measured performance indicators into thorough system characterizations. The paper concludes with a demonstration of the proposed methodology on a real-world application in the Supply Chain Management domain.
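A minimal sketch of the aggregation idea, assuming (hypothetically) that a data-mining quality score and a runtime measurement are folded into a single weighted characterization; the weights and the latency budget are arbitrary placeholders, not values from the paper.

```python
def characterize(precision: float, avg_latency_s: float,
                 w_precision: float = 0.7, w_latency: float = 0.3,
                 latency_budget_s: float = 2.0) -> float:
    """Fold algorithmic precision (the data-mining view) and a runtime
    score (the systems view) into one characterization in [0, 1]."""
    latency_score = max(0.0, 1.0 - avg_latency_s / latency_budget_s)
    return w_precision * precision + w_latency * latency_score

# e.g. a supply-chain agent whose demand classifier is 88% precise
# and whose ordering decisions take 0.4 s on average:
print(f"overall: {characterize(0.88, 0.4):.2f}")
```

The point of such a composite score is exactly the gap the abstract names: neither classifier precision alone nor system latency alone characterizes an agent that uses a mined knowledge model.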
Web Intelligence | 2006
Christos Dimou; Alexandros Batzios; Andreas L. Symeonidis; Pericles A. Mitkas
Although search engines traditionally use spiders for traversing and indexing the Web, there has not yet been any methodological attempt to model, deploy and test learning spiders. The flourishing of the semantic Web provides machine-understandable information that may improve the accuracy of search engines. In this paper, we introduce BioSpider, an agent-based simulation framework for developing and testing autonomous, intelligent, semantically focused Web spiders. BioSpider assumes a direct analogy of the problem at hand with a multi-variate ecosystem, where each member is self-maintaining. The population of the ecosystem comprises cooperative spiders incorporating communication, mobility and learning skills, striving to improve efficiency. Genetic algorithms and classifier rules have been employed for spider adaptation and learning. A set of experiments has been performed to qualitatively test the efficacy and applicability of the proposed approach.
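The abstract names genetic algorithms without giving the operators used, so the following is only a generic GA generation under the stated analogy: a spider's genome is a vector of crawl-preference weights, and its fitness is its measured semantic yield. All names and parameters are assumptions.

```python
import random

def evolve(population, semantic_yield, mu=0.1):
    """One generation of a plain GA: rank by measured semantic yield,
    keep the top half as parents, refill with one-point crossover
    plus Gaussian mutation."""
    scored = sorted(population, key=semantic_yield, reverse=True)
    parents = scored[: len(scored) // 2]
    children = []
    while len(parents) + len(children) < len(population):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))              # one-point crossover
        children.append([w + random.gauss(0, mu)       # Gaussian mutation
                         for w in a[:cut] + b[cut:]])
    return parents + children

# Genomes are crawl-preference weight vectors; the yield function is a stub.
population = [[random.random() for _ in range(4)] for _ in range(8)]
population = evolve(population, semantic_yield=sum)
```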
Web Intelligence | 2008
Christos Dimou; Manolis Falelakis; Andreas L. Symeonidis; Anastasios Delopoulos; Pericles A. Mitkas
The field of multi-agent systems has reached a significant degree of maturity with respect to frameworks, standards and infrastructures. Focus has now shifted to the performance evaluation of real-world applications, in order to quantify the practical benefits and drawbacks of agent systems. Our approach extends current work on generic evaluation methodologies for agents by employing fuzzy weighted trees for organizing evaluation-specific concepts/metrics, and linguistic terms to intuitively represent and aggregate measurement information. Furthermore, we introduce meta-metrics that measure the validity and complexity of each metric's contribution to the overall performance evaluation. These are all incorporated for selecting optimal subsets of metrics and designing the evaluation process in compliance with the demands/restrictions of various evaluation setups, thus minimizing intervention by domain experts. The applicability of the proposed methodology is demonstrated through the evaluation of a real-world test case.
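A compact sketch of the fuzzy-weighted-tree roll-up described above, assuming a hypothetical mapping from linguistic terms to numeric scores; the term scores, the weights and the nested-list tree encoding are illustrative, not taken from the paper.

```python
# Hypothetical mapping of linguistic measurements to numeric scores.
TERMS = {"poor": 0.1, "fair": 0.35, "good": 0.6,
         "very good": 0.8, "excellent": 0.95}

def aggregate(node):
    """Fuzzy-weighted-tree style roll-up: leaves hold linguistic terms,
    inner nodes hold (weight, child) pairs; weights need not sum to 1."""
    if isinstance(node, str):
        return TERMS[node]
    total_w = sum(w for w, _ in node)
    return sum(w * aggregate(child) for w, child in node) / total_w

performance = [
    (0.6, [(0.7, "good"), (0.3, "excellent")]),  # effectiveness subtree
    (0.4, "fair"),                               # efficiency leaf
]
print(f"overall: {aggregate(performance):.2f}")
```

Letting evaluators record "good" or "fair" rather than raw numbers is what makes the scheme intuitive; the weights then decide how much each branch contributes to the root characterization.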
Agents and Data Mining Interaction | 2014
Christos Dimou; Fani A. Tzima; Andreas L. Symeonidis; Pericles A. Mitkas
Despite the plethora of frameworks and tools for developing agent systems, there is a remarkable lack of generalized methodologies for assessing their performance. Existing methods in the field of software engineering do not adequately address the unpredictable and complex nature of intelligent agents. We introduce a generic methodology for evaluating agent performance: the Agent Performance Evaluation (APE) methodology consists of representation tools, guidelines and techniques for organizing, categorizing and using metrics, measurements and aggregated characterizations of agent performance. The core of APE is the Metrics Representation Tree, a generic structure that enables efficient manipulation of evaluation-specific information. This paper provides a formal specification of the proposed methodology in Z notation and demonstrates how to apply it to an existing multi-agent system.
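The paper specifies APE formally in Z notation, which the abstract does not reproduce; the dataclass below is only an informal analogue of a metrics-representation tree, with invented field names and an assumed weighted-mean roll-up.

```python
from dataclasses import dataclass, field

@dataclass
class MetricNode:
    """One node of a metrics-tree-like structure: leaves carry
    measurements, inner nodes aggregate their weighted children."""
    name: str
    weight: float = 1.0
    value: float | None = None          # set on leaves by measurement
    children: list["MetricNode"] = field(default_factory=list)

    def score(self) -> float:
        if not self.children:
            return self.value if self.value is not None else 0.0
        total = sum(c.weight for c in self.children)
        return sum(c.weight * c.score() for c in self.children) / total

root = MetricNode("performance", children=[
    MetricNode("latency",  weight=2, value=0.7),
    MetricNode("accuracy", weight=3, value=0.9),
])
print(f"{root.name}: {root.score():.2f}")   # performance: 0.82
```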
Performance Metrics for Intelligent Systems | 2007
Christos Dimou; Andreas L. Symeonidis; Pericles A. Mitkas
Although the need for well-established engineering approaches to Intelligent Systems (IS) performance evaluation is pressing, no widely accepted methodology currently exists, mainly due to a lack of consensus on relevant definitions and the scope of applicability, multi-disciplinary issues and the immaturity of the field of IS. Even existing, well-tested evaluation methodologies applied in other domains, such as (traditional) software engineering, prove inadequate to address the unpredictable emergent factors in the behavior of intelligent components. In this paper, we present a generic methodology and associated tools for evaluating the performance of IS, exploiting the software agent paradigm as a representative modeling concept for intelligent systems. Based on the assessment of the observable behavior of agents or multi-agent systems, the proposed methodology provides a concise set of guidelines and representation tools for evaluators to use. The methodology comprises three main tasks, namely metrics selection, monitoring agent activities for appropriate measurements, and aggregation of the conducted measurements. Coupled with this methodology is the Evaluator Agent Framework, which aims at automating most of the methodology's steps by providing graphical user interfaces for metrics organization and results presentation, as well as a code-generating module that produces a skeleton of a monitoring agent. Once this agent is completed with domain-specific code, it is attached to the runtime of a multi-agent system and collects information from observable events and messages. Both the evaluation methodology and the automation framework are tested and demonstrated in Symbiosis, a MAS simulation environment for competing groups of autonomous entities.
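A skeleton in the spirit of the generated monitoring agent described above: it subscribes to observable events and messages from a running MAS and tallies them for later aggregation. The event names and handler signature are illustrative assumptions, not the Evaluator Agent Framework's actual interface.

```python
from collections import defaultdict

class MonitoringAgent:
    """Records observable events/messages; the domain-specific parts the
    framework leaves to the evaluator would go into on_event()."""
    def __init__(self):
        self.log = defaultdict(list)

    def on_event(self, event_type: str, payload: dict) -> None:
        # Domain-specific filtering and measurement extraction go here.
        self.log[event_type].append(payload)

    def report(self) -> dict:
        """Counts per event type, ready for the aggregation step."""
        return {evt: len(items) for evt, items in self.log.items()}

agent = MonitoringAgent()
agent.on_event("message_sent",   {"from": "buyer", "to": "seller"})
agent.on_event("task_completed", {"agent": "buyer", "duration_s": 1.2})
print(agent.report())   # {'message_sent': 1, 'task_completed': 1}
```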
Knowledge Discovery and Data Mining | 2008
Christos Dimou; Andreas L. Symeonidis; Pericles A. Mitkas
International Conference on Intelligent Systems | 2009
Christos Dimou; Andreas L. Symeonidis; Fani A. Tzima; Pericles A. Mitkas