Publication


Featured research published by Dimitris Margaritis.


International Conference on Data Mining | 2005

Speculative Markov blanket discovery for optimal feature selection

Sandeep Yaramakala; Dimitris Margaritis

In this paper we address the problem of efficiently learning the Markov blanket of a quantity from data. Markov blanket discovery can be used in the feature selection problem to find an optimal set of features for classification tasks, and is a frequently used preprocessing step in data mining, especially in high-dimensional domains. Our contribution is a novel algorithm for the induction of Markov blankets from data, called Fast-IAMB, that employs a heuristic to quickly recover the Markov blanket. Empirical results show that Fast-IAMB in many cases performs faster and more reliably than existing algorithms, without adversely affecting the accuracy of the recovered Markov blankets.
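
To make the grow-shrink pattern behind IAMB-family algorithms concrete, here is a minimal Python sketch; it is not the authors' Fast-IAMB implementation, and the `ci_test` and `assoc` helpers (a conditional-independence test and a conditional association score such as conditional mutual information) are assumed to be supplied by the caller.

```python
# Minimal sketch of IAMB-style Markov blanket discovery (not the authors'
# Fast-IAMB). Assumes `ci_test(x, y, z, data)` returns True when x is
# conditionally independent of y given the set z, and `assoc(x, y, z, data)`
# scores their conditional association; both are hypothetical helpers.

def iamb_markov_blanket(target, variables, data, ci_test, assoc):
    mb = set()
    # Growing phase: greedily add the variable most associated with the
    # target given the current blanket, until no dependent variable remains.
    changed = True
    while changed:
        changed = False
        candidates = [v for v in variables
                      if v != target and v not in mb
                      and not ci_test(target, v, mb, data)]
        if candidates:
            best = max(candidates, key=lambda v: assoc(target, v, mb, data))
            mb.add(best)
            changed = True
    # Shrinking phase: remove false positives that became independent of the
    # target once the rest of the blanket was included.
    for v in list(mb):
        if ci_test(target, v, mb - {v}, data):
            mb.remove(v)
    return mb
```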


Journal of Artificial Intelligence Research | 2009

Efficient Markov network structure discovery using independence tests

Facundo Bromberg; Dimitris Margaritis; Vasant G. Honavar

We present two algorithms for learning the structure of a Markov network from data: GSMN* and GSIMN. Both algorithms use statistical independence tests to infer the structure by successively constraining the set of structures consistent with the results of these tests. Until very recently, algorithms for structure learning were based on maximum likelihood estimation, which has been proved to be NP-hard for Markov networks due to the difficulty of estimating the parameters of the network, needed for the computation of the data likelihood. The independence-based approach does not require the computation of the likelihood, and thus both GSMN* and GSIMN can compute the structure efficiently (as shown in our experiments). GSMN* is an adaptation of the Grow-Shrink algorithm of Margaritis and Thrun for learning the structure of Bayesian networks. GSIMN extends GSMN* by additionally exploiting Pearl's well-known properties of the conditional independence relation to infer novel independences from known ones, thus avoiding the execution of statistical tests to estimate them. To accomplish this efficiently, GSIMN uses the Triangle theorem, also introduced in this work, which is a simplified version of the set of Markov axioms. Experimental comparisons on artificial and real-world data sets show that GSIMN can yield significant savings with respect to GSMN*, while generating a Markov network of comparable or, in some cases, improved quality. We also compare GSIMN to a forward-chaining implementation, called GSIMN-FCH, that produces all possible conditional independences resulting from repeatedly applying Pearl's theorems to the known conditional independence tests. The results of this comparison show that GSIMN, by the sole use of the Triangle theorem, is nearly optimal in terms of the set of independence tests that it infers.
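
As an illustration of how inferring independences can avoid running statistical tests, here is a minimal sketch of a caching tester; it applies only the symmetry property of conditional independence, not GSIMN's Triangle theorem, and the `raw_test` callable is an assumed stand-in for a real statistical test on data.

```python
# Minimal sketch of the caching idea behind independence-based structure
# learners such as GSIMN: before paying for a statistical test, check whether
# the answer already follows from recorded results via a property of the
# conditional independence relation. Only the symmetry axiom is shown here;
# GSIMN's Triangle theorem adds further inference rules.

class CachedCITester:
    def __init__(self, raw_test):
        self.raw_test = raw_test          # e.g., a chi-square test on data
        self.cache = {}                   # (x, y, frozenset(z)) -> bool

    def independent(self, x, y, z):
        key = (x, y, frozenset(z))
        sym = (y, x, frozenset(z))        # symmetry: I(X,Y|Z) <=> I(Y,X|Z)
        if key in self.cache:
            return self.cache[key]
        if sym in self.cache:
            return self.cache[sym]
        result = self.raw_test(x, y, z)   # fall back to an actual test
        self.cache[key] = result
        return result
```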


Foundations of Computer Science | 1995

Reconstructing strings from substrings in rounds

Dimitris Margaritis; Steven Skiena

We establish a variety of combinatorial bounds on the tradeoffs inherent in reconstructing strings using few rounds of a given number of substring queries per round. These results lead us to propose a new approach to sequencing by hybridization (SBH), which uses interaction to dramatically reduce the number of oligonucleotides used for de novo sequencing of large DNA fragments, while preserving the parallelism which is the primary advantage of SBH.
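
For intuition about the round-based query model, here is a toy Python sketch of interactive reconstruction by batched substring queries; it is not the paper's algorithm, and `is_substring` is an assumed oracle standing in for one hybridization query.

```python
# Toy version of round-based string reconstruction via substring queries
# (not the authors' algorithm). Each round batches every query whose answer
# the next round depends on, illustrating the rounds-versus-queries tradeoff.

def reconstruct(alphabet, length, is_substring):
    # Round 1: find all characters that occur in the hidden string at all.
    present = [c for c in alphabet if is_substring(c)]
    candidates = set(present)
    # Each later round extends every surviving candidate by one character,
    # so the number of rounds equals the target length, while each round
    # issues at most |candidates| * |present| parallel queries.
    for _ in range(length - 1):
        batch = {cand + c for cand in candidates for c in present}
        candidates = {s for s in batch if is_substring(s)}
    # For a hidden string of exactly `length` characters the sole surviving
    # candidate is the string itself.
    return candidates
```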


Computational Intelligence | 2009

Efficient Markov Network Discovery Using Particle Filters

Dimitris Margaritis; Facundo Bromberg

In this paper, we introduce an efficient independence-based algorithm for the induction of the Markov network (MN) structure of a domain from the outcomes of independence tests conducted on data. Our algorithm utilizes a particle filter (sequential Monte Carlo) method to maintain a population of MN structures that represent the posterior probability distribution over structures, given the outcomes of the tests performed. This enables us to select, at each step, the maximally informative test to conduct next from a pool of candidates according to information gain, which minimizes the cost of the statistical tests conducted on data. This makes our approach useful in domains where independence tests are expensive, such as very large data sets and/or distributed data. In addition, our method maintains multiple candidate structures weighted by posterior probability, which allows flexibility in the presence of potential errors in the test outcomes.
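
The test-selection idea can be sketched as follows, under stated assumptions: `predicts_independent` is a hypothetical helper that reads an independence statement off a candidate structure, the particle weights are normalized, and the resampling and proposal steps of a full particle filter are omitted.

```python
# Minimal sketch of information-gain test selection over a weighted
# population of candidate structures (not the paper's full particle filter).

import math

def entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def choose_next_test(particles, weights, candidate_tests, predicts_independent):
    def info_gain(test):
        # Posterior probability that the test comes out "independent",
        # approximated by the weighted particles.
        p_indep = sum(w for s, w in zip(particles, weights)
                      if predicts_independent(s, test))
        return entropy(p_indep)   # largest where the particles disagree most
    return max(candidate_tests, key=info_gain)

def update_weights(particles, weights, test, outcome,
                   predicts_independent, eps=0.05):
    # Reweight each structure by how well it predicted the observed outcome;
    # `eps` models test errors so no particle's weight collapses to zero.
    new = [w * ((1 - eps) if predicts_independent(s, test) == outcome else eps)
           for s, w in zip(particles, weights)]
    total = sum(new)
    return [w / total for w in new]
```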


Proceedings of the International Conference on Information and Communications Technologies in Tourism | 1994

MaTourA: multi-agent tourist advisor

Constantin Halatsis; Panagiotis Stamatopoulos; Isambo Karali; Constantin Mourlas; Dimitris Gouscos; Dimitris Margaritis; Constantin Fouskakis; Angelos Kolokouris; Panagiotis Xinos; Mike Reeve; André Véron; Kees Schuerman; Liang-Liang Li

MaTourA is a tourist advisory system about Greece that is being implemented in the parallel constraint logic programming language ElipSys. The purpose of MaTourA is to facilitate the work carried out in travel agencies by providing an interactive way to construct personalized tours, select predefined package tours, and manage the underlying tourism information. The system has been designed as a set of high-level interacting agents. To this end, the ElipSys language was extended with the appropriate features to support the development of multi-agent systems.


ACM Symposium on Applied Computing | 1994

Extending a parallel CLP language to support the development of multi-agent systems

Panagiotis Stamatopoulos; Dimitris Margaritis; Constantin Halatsis

An extension of the parallel constraint logic programming language ElipSys is presented. This extension is directed towards the development of multi-agent systems which have to deal with large combinatorial problems that are distributed in nature. Problems of this kind, after being decomposed into subproblems, may be tackled efficiently by individual agents using ElipSys' powerful mechanisms, such as parallelism and constraint satisfaction techniques. The proposed extension supports the communication requirements of the agents, in order to have them cooperate and solve the original combinatorially intensive problem. The communication scheme among the agents is viewed as a three-layered model. The first layer is socket-oriented, the second realizes a blackboard architecture, and the third supports virtual point-to-point interaction among the agents.
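
As a rough illustration of the middle layer of that model, here is a minimal in-process blackboard sketch in Python; the paper's actual implementation is an ElipSys extension layered over sockets, so everything below is an assumed simplification.

```python
# Minimal in-process blackboard sketch (an assumed simplification; the
# paper's blackboard layer sits on top of a socket layer and underneath
# point-to-point agent messaging).

import threading

class Blackboard:
    """Shared store that agents use to post and consume partial results."""

    def __init__(self):
        self._entries = []
        self._cond = threading.Condition()

    def post(self, entry):
        with self._cond:
            self._entries.append(entry)
            self._cond.notify_all()       # wake agents waiting for new data

    def take(self, match):
        # Block until an entry satisfying `match` appears, then remove it.
        with self._cond:
            while True:
                for e in self._entries:
                    if match(e):
                        self._entries.remove(e)
                        return e
                self._cond.wait()
```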


Neural Information Processing Systems | 1999

Bayesian Network Induction via Local Neighborhoods

Dimitris Margaritis; Sebastian Thrun


Archive | 2003

Learning Bayesian Network Model Structure from Data

Dimitris Margaritis


National Conference on Artificial Intelligence | 2005

Distribution-free learning of Bayesian network structure in continuous domains

Dimitris Margaritis


Proceedings IEEE Workshop on Content-Based Access of Image and Video Libraries (CBAIVL'99) | 1999

Defining image content with multiple regions-of-interest

Baback Moghaddam; Henning Biermann; Dimitris Margaritis

Collaboration


Dive into Dimitris Margaritis's collaborations.

Top Co-Authors

Facundo Bromberg
National Scientific and Technical Research Council

Vasant G. Honavar
Pennsylvania State University

Constantin Halatsis
National and Kapodistrian University of Athens

Panagiotis Stamatopoulos
National and Kapodistrian University of Athens