Publications


Featured research published by Juan L. Mateo.


Expert Systems With Applications | 2009

Finding out general tendencies in speckle noise reduction in ultrasound images

Juan L. Mateo; Antonio Fernández-Caballero

This article investigates and compiles some of the techniques most commonly used for smoothing or suppressing speckle noise in ultrasound images. With this information, all of the methods studied are compared in an experiment, using quality metrics to test their performance and show the benefits each one can contribute. To test the methods, a synthetic, noise-free image of a kidney is created, and simulations using the Field II program are then performed to corrupt it. In this way, the smoothing techniques can be compared using numeric metrics, taking the noise-free image as a reference. Since real ultrasound images are already corrupted by noise and truly noise-free images do not exist, conventional metrics cannot be used to indicate the quality obtained with filtering on real images. Nevertheless, we propose applying the tendencies observed in our study to real images.
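
The evaluation protocol described above can be sketched in a few lines. This is a minimal illustration, not the paper's Field II pipeline: the phantom, the simple multiplicative speckle model, the two filters, and the noise level are all assumptions made here for demonstration.

```python
# Sketch of the evaluation idea: corrupt a synthetic noise-free image
# with a simple multiplicative speckle model, then rank despeckling
# filters by PSNR against the clean reference. Illustrative only; the
# paper simulates ultrasound speckle with the Field II program instead.
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def psnr(reference, image, peak=1.0):
    """Peak signal-to-noise ratio in dB against a noise-free reference."""
    mse = np.mean((reference - image) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)

# Piecewise-constant "phantom": a bright disc on a dark background.
y, x = np.mgrid[0:128, 0:128]
clean = np.where((x - 64) ** 2 + (y - 64) ** 2 < 30 ** 2, 0.8, 0.2)

# Multiplicative speckle: I_noisy = I * (1 + sigma * n), n ~ N(0, 1).
noisy = np.clip(clean * (1 + 0.4 * rng.standard_normal(clean.shape)), 0, 1)

filters = {
    "median 5x5": median_filter(noisy, size=5),
    "gaussian s=1.5": gaussian_filter(noisy, sigma=1.5),
}
print(f"noisy: {psnr(clean, noisy):.2f} dB")
for name, out in filters.items():
    print(f"{name}: {psnr(clean, out):.2f} dB")
```

On real clinical images there is no `clean` reference, which is exactly why the paper falls back on the tendencies observed on synthetic data.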


Data Mining and Knowledge Discovery | 2011

Learning Bayesian networks by hill climbing: efficient methods based on progressive restriction of the neighborhood

José A. Gámez; Juan L. Mateo; José Miguel Puerta

Learning Bayesian networks is known to be an NP-hard problem, which is why heuristic search has proven advantageous in many domains. This learning approach is computationally efficient and, even though it does not guarantee an optimal result, many previous studies have shown that it obtains very good solutions. Hill climbing algorithms are particularly popular because of their good trade-off between computational demands and the quality of the models learned. In spite of this efficiency, when it comes to dealing with high-dimensional datasets, these algorithms can be improved upon, and this is the goal of this paper. Thus, we present an approach to improving hill climbing algorithms based on dynamically restricting the candidate solutions to be evaluated during the search process. This proposal, dynamic restriction, is new because other studies on restricted search in the literature are based on two stages rather than the single stage presented here. In addition to the aforementioned advantages of hill climbing algorithms, we show that under certain conditions the model they return is a minimal I-map of the joint probability distribution underlying the training data, which is a nice theoretical property with practical implications. We also provide theoretical results guaranteeing that, under these same conditions, the proposed algorithms output a minimal I-map. Furthermore, we experimentally test the proposed algorithms over a set of different domains, some of them quite large (up to 800 variables), in order to study their behavior in practice.
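
The core idea of progressively restricting the neighborhood can be sketched as follows. This is a deliberately minimal toy, not the paper's algorithm: only the add-arc operator is implemented (the real search also uses deletion and reversal), the score is a basic BIC for binary variables, and the three-variable dataset is invented for illustration.

```python
# Minimal sketch of hill climbing over Bayesian network structures with
# progressive restriction: when adding the arc Y -> X does not improve
# the BIC score, Y is permanently forbidden as a parent of X and never
# re-evaluated, shrinking the neighborhood as the search advances.
import numpy as np
from itertools import product

def bic_local(data, x, parents):
    """BIC contribution of binary variable x given its parent set."""
    n = data.shape[0]
    cols = data[:, sorted(parents)]
    groups = {}
    for i in range(n):
        groups.setdefault(tuple(cols[i]), []).append(data[i, x])
    loglik = 0.0
    for vals in groups.values():
        counts = np.bincount(vals, minlength=2)
        for c in counts:
            if c > 0:
                loglik += c * np.log(c / counts.sum())
    return loglik - 0.5 * np.log(n) * 2 ** len(parents)  # dimension penalty

def is_ancestor(parents, a, b):
    """True if a directed path a -> ... -> b already exists."""
    stack, seen = [b], set()
    while stack:
        v = stack.pop()
        if v == a:
            return True
        if v not in seen:
            seen.add(v)
            stack.extend(parents[v])
    return False

def hill_climb(data):
    d = data.shape[1]
    parents = {i: set() for i in range(d)}
    forbidden = {i: set() for i in range(d)}    # progressive restriction
    local = [bic_local(data, i, parents[i]) for i in range(d)]
    while True:
        best_delta, best_arc = 0.0, None
        for x, y in product(range(d), repeat=2):
            if x == y or y in parents[x] or y in forbidden[x]:
                continue
            if is_ancestor(parents, x, y):      # y -> x would close a cycle
                continue
            delta = bic_local(data, x, parents[x] | {y}) - local[x]
            if delta <= 0:
                forbidden[x].add(y)             # never reconsider this arc
            elif delta > best_delta:
                best_delta, best_arc = delta, (y, x)
        if best_arc is None:
            return parents
        y, x = best_arc
        parents[x].add(y)
        local[x] = bic_local(data, x, parents[x])

# Toy data: x0 and x2 are independent causes of x1, so the marginally
# independent pair (x0, x2) gets forbidden in the very first pass.
rng = np.random.default_rng(1)
n = 2000
a, c = rng.integers(0, 2, n), rng.integers(0, 2, n)
b = np.where(rng.random(n) < 0.95, a | c, 1 - (a | c))
learned = hill_climb(np.column_stack([a, b, c]))
print(learned)
```

The permanent forbidden sets are what make restriction risky in general; the paper's contribution is precisely the conditions under which such a restricted search still returns a minimal I-map.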


International Work-Conference on the Interplay Between Natural and Artificial Computation | 2007

EDNA: Estimation of Dependency Networks Algorithm

José A. Gámez; Juan L. Mateo; José Miguel Puerta

In this work we present a new proposal for modeling the probability distribution in estimation of distribution algorithms. This approach is based on using dependency networks [1] instead of Bayesian networks or simpler models with limited structure. Dependency networks are probabilistic graphical models similar to Bayesian networks, but with a significant difference: they allow directed cycles in the graph. This difference can be an important advantage for two main reasons. First, in some real problems cyclic relationships appear between variables, and this fact cannot be represented in a Bayesian network. Secondly, dependency networks can be built easily because there is no need to check for cycles as in a Bayesian network. In this paper we propose using a general (multivariate) model in order to deal with a richer representation; however, in this initial approach to the problem we also propose constraining the construction phase so that only bivariate statistics are used. The algorithm is compared with classical approaches of the same complexity order, i.e. bivariate models such as chains and trees.
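
A bivariate dependency network of the kind described can be sketched briefly. This is a simplified illustration under assumptions made here (each variable keeps a single strongest peer by mutual information, sampling is a short ordered Gibbs sweep); it is not the EDNA algorithm itself, and the EDA loop around it is omitted.

```python
# Simplified sketch of a bivariate dependency network fitted to an EDA
# population: each variable keeps its most informative peer (directed
# cycles are allowed, unlike in a Bayesian network), and new individuals
# are drawn with a short Gibbs-style sweep.
import numpy as np

def mutual_information(a, b):
    """Empirical mutual information (nats) between two binary arrays."""
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def fit_dependency_network(pop):
    """For each variable, pick its most informative peer and estimate
    P(X_i = 1 | X_parent) with Laplace smoothing. Cycles are allowed."""
    _, d = pop.shape
    model = {}
    for i in range(d):
        mis = [(mutual_information(pop[:, i], pop[:, j]), j)
               for j in range(d) if j != i]
        parent = max(mis)[1]
        cpt = np.empty(2)
        for v in (0, 1):
            mask = pop[:, parent] == v
            cpt[v] = (pop[mask, i].sum() + 1) / (mask.sum() + 2)
        model[i] = (parent, cpt)
    return model

def sample(model, rng, sweeps=5):
    """Ordered Gibbs sampling: start uniform, repeatedly resample each
    variable conditioned on its parent's current value."""
    d = len(model)
    x = rng.integers(0, 2, d)
    for _ in range(sweeps):
        for i in range(d):
            parent, cpt = model[i]
            x[i] = rng.random() < cpt[x[parent]]
    return x

# Toy population in which x0 and x1 are strongly coupled; the fitted
# model contains the directed cycle x0 <-> x1.
rng = np.random.default_rng(2)
x0 = rng.integers(0, 2, 500)
pop = np.column_stack([x0, x0 ^ (rng.random(500) < 0.05),
                       rng.integers(0, 2, 500)])
model = fit_dependency_network(pop)
child = sample(model, rng)
```

Note that no acyclicity check is needed anywhere in `fit_dependency_network`, which is the construction-time advantage the abstract points to.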


European Conference on Symbolic and Quantitative Approaches to Reasoning and Uncertainty | 2007

A Fast Hill-Climbing Algorithm for Bayesian Networks Structure Learning

José A. Gámez; Juan L. Mateo; José Miguel Puerta

In the score-plus-search approach to Bayesian network structure learning, the most widely used method is hill climbing (HC), because it offers a good trade-off between CPU requirements, accuracy of the obtained model, and ease of implementation. Because of these features, and the fact that HC with the classical operators guarantees a minimal I-map, this approach is well suited to high-dimensional domains. In this paper we revisit a previously developed HC algorithm (termed constrained HC, or CHC for short) that takes advantage of some scoring metric properties in order to restrict the parent set of each node during the search. The main drawback of CHC is that there is no guarantee of obtaining a minimal I-map, and so the algorithm includes a second stage in which an unconstrained HC is launched, taking the solution returned by the constrained search stage as its starting point. In this paper we modify CHC in order to guarantee that its output is a minimal I-map, so that the second stage is not needed. In this way we save a considerable amount of CPU time, making the algorithm better suited for high-dimensional datasets. A proof is provided of the minimal I-map condition of the returned network, and computational experiments are reported to show the gain in CPU requirements.


Congress on Evolutionary Computation | 2009

Avoiding premature convergence in estimation of distribution algorithms

Luis delaOssa; José A. Gámez; Juan L. Mateo; José Miguel Puerta

This work studies the problem of premature convergence due to lack of diversity in estimation of distribution algorithms. This problem is quite important for this kind of algorithm since, even when using very complex probabilistic models, they cannot solve certain optimization problems, such as some deceptive, hierarchical or multimodal ones.
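
The diversity-loss problem and one common countermeasure can be sketched with the simplest EDA, UMDA. The clamp bound, the OneMax objective, and all parameter values below are illustrative assumptions, not the mechanism proposed in the paper.

```python
# Sketch of diversity preservation in a univariate EDA (UMDA): clamping
# the estimated marginals away from 0 and 1 prevents any allele from
# being lost permanently, one simple way to delay premature convergence.
import numpy as np

def umda(fitness, d, pop_size=100, generations=60, clamp=None, seed=3):
    """Univariate marginal distribution algorithm on binary strings.
    If clamp is given, marginals are kept within [clamp, 1 - clamp]."""
    rng = np.random.default_rng(seed)
    p = np.full(d, 0.5)                        # independent Bernoulli model
    best = None
    for _ in range(generations):
        pop = (rng.random((pop_size, d)) < p).astype(int)
        scores = np.apply_along_axis(fitness, 1, pop)
        elite = pop[np.argsort(scores)[-pop_size // 2:]]   # truncation selection
        p = elite.mean(axis=0)                 # re-estimate marginals
        if clamp is not None:
            p = np.clip(p, clamp, 1 - clamp)   # preserve diversity
        top = pop[np.argmax(scores)]
        if best is None or fitness(top) > fitness(best):
            best = top
    return best

onemax = lambda x: int(x.sum())
best = umda(onemax, d=40, clamp=1 / 40)
print(onemax(best))
```

Without the clamp, a marginal that reaches 0 or 1 can never recover, which is exactly the loss of diversity the abstract refers to; on deceptive or multimodal problems this fixation happens before the optimum is found.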


Genetic and Evolutionary Computation Conference | 2008

Improved EDNA (estimation of dependency networks algorithm) using combining function with bivariate probability distributions

José A. Gámez; Juan L. Mateo; José Miguel Puerta

One of the key points in estimation of distribution algorithms (EDAs) is the learning of the probabilistic graphical model used to guide the search: the richer the model, the more complex the learning task. Dependency network-based EDAs have been introduced recently. Unlike Bayesian networks, dependency networks allow the presence of directed cycles in their structure. In a previous work the authors proposed EDNA, an EDA in which a multivariate dependency network is used but its structure learning is approximated by considering only bivariate statistics. EDNA was compared with other models from the literature with the same computational complexity (e.g., univariate and bivariate models). In this work we propose a modified version of EDNA in which not only the structure learning phase but also the simulation and parameter learning tasks are limited to bivariate statistics. We also extend the comparison to multivariate models based on Bayesian networks (EBNA and hBOA). Our experiments show that the modified EDNA is more accurate than the original one, with accuracy comparable to EBNA and hBOA but with the advantage of being faster, especially in the more complex cases.


Biomedical Engineering and Informatics | 2008

Methodological Approach to Reducing Speckle Noise in Ultrasound Images

Antonio Fernández-Caballero; Juan L. Mateo

This article investigates and compiles some of the techniques most commonly used for smoothing or suppressing speckle noise in ultrasound images. With this information, all of the methods studied are compared in an experiment, using quality metrics to test their performance and show the benefits each one can contribute. To test the methods, a synthetic, noise-free image is created, and well-known noise models are later applied to corrupt it. In this way, the smoothing techniques can be compared using numeric metrics, taking the noise-free image as a reference. Since real ultrasound images are already corrupted by noise and truly noise-free images do not exist, conventional metrics cannot be used to indicate the quality obtained with filtering on real images. Nevertheless, we propose applying the tendencies observed in our study to real images.


Progress in Artificial Intelligence | 2012

One iteration CHC algorithm for learning Bayesian networks: an effective and efficient algorithm for high dimensional problems

José A. Gámez; Juan L. Mateo; José Miguel Puerta

It is well known that learning Bayesian networks from data is an NP-hard problem. For this reason, metaheuristics or approximate algorithms have usually been used to provide a good solution. In particular, the family of hill climbing algorithms has a key role in this scenario because of its good trade-off between computational demand and the quality of the learned models. In addition, these algorithms have several good theoretical properties. In spite of these characteristics of quality and efficiency, when it comes to dealing with high-dimensional datasets, they can be improved upon, and this is the goal of this paper. Recent papers have tackled this problem, usually by dividing the learning task into two or more iterations or phases. The first phase aims to constrain the search space; once the space is pruned, the second consists of a (local) search in this constrained space. Normally, the first iteration is the one with the highest computational complexity. One such algorithm is constrained hill climbing (CHC), which in its initial iteration not only progressively constrains the search space but also learns good quality Bayesian networks. A second iteration, or even more, is then used to improve these networks and also to ensure the good theoretical properties exhibited by the classical hill climbing algorithm. In CHC the first iteration is extremely fast compared to similar algorithms, but the CPU time saved decays over the remaining iterations. In this paper, we present an improvement on the CHC algorithm in which, to put it briefly, we avoid the last iteration while still obtaining the same theoretical properties. Furthermore, we experimentally test the proposed algorithms over a set of different domains, some of them quite large (more than 1,000 variables), in order to study their behavior in practice.


Web Information Systems Engineering | 2007

Improving revisitation browsers capability by using a dynamic bookmarks personal toolbar

José A. Gámez; Juan L. Mateo; José Miguel Puerta

In this paper we present a new approach to adding intelligence to the Internet browser's user interface. Our contribution is based on improving browsers' revisitation capabilities by learning a model from the user's navigation behaviour, which is later used to predict a set of bookmarks likely to be used next. This set of bookmarks must be a list of moderate size (≥ 10) because our goal is to show it in the browser's personal bookmarks toolbar. We think that dealing with this part of the user interface is beneficial for revisitation because it is always visible and, unlike the history or bookmarks list (tree), the user can access the desired web page with a single mouse click. In this work we focus on comparing several (computationally) simple classifiers in order to identify a good candidate to be used as the user navigation model. From the experiments carried out, we find that a combination of Naive Bayes with OneR could be a good choice.
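
The prediction task can be sketched with a categorical Naive Bayes alone. The class structure, feature slots (current page, hour bucket), and the toy navigation data below are invented for illustration; the paper's actual candidate combines Naive Bayes with OneR and uses the browser's real navigation log.

```python
# Toy sketch of next-bookmark prediction: a categorical Naive Bayes with
# Laplace smoothing ranks bookmarks by how likely they are to be visited
# next, given simple context features. Only the NB part is sketched.
from collections import Counter, defaultdict
import math

class NextBookmarkNB:
    def __init__(self):
        self.class_counts = Counter()                  # bookmark -> visits
        self.feature_counts = defaultdict(Counter)     # (bookmark, slot) -> value counts
        self.values = defaultdict(set)                 # slot -> values seen

    def observe(self, context, bookmark):
        """Record that `bookmark` was visited in feature context `context`."""
        self.class_counts[bookmark] += 1
        for slot, value in enumerate(context):
            self.feature_counts[bookmark, slot][value] += 1
            self.values[slot].add(value)

    def rank(self, context, k=3):
        """Top-k bookmarks by smoothed log-posterior given the context."""
        total = sum(self.class_counts.values())
        scores = {}
        for bm, c in self.class_counts.items():
            s = math.log((c + 1) / (total + len(self.class_counts)))
            for slot, value in enumerate(context):
                counts = self.feature_counts[bm, slot]
                s += math.log((counts[value] + 1) / (c + len(self.values[slot])))
            scores[bm] = s
        return sorted(scores, key=scores.get, reverse=True)[:k]

nb = NextBookmarkNB()
# (current page, hour bucket) -> bookmark visited next; invented data.
for _ in range(20):
    nb.observe(("news.example", "morning"), "mail.example")
for _ in range(5):
    nb.observe(("news.example", "evening"), "forum.example")
print(nb.rank(("news.example", "morning")))
```

A ranked list like this is what would populate the dynamic personal toolbar: the top few predictions stay visible and reachable with a single click.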


International Conference on Adaptive and Natural Computing Algorithms | 2007

Learning Bayesian Classifiers from Dependency Network Classifiers

José A. Gámez; Juan L. Mateo; José Miguel Puerta

In this paper we propose a new method for learning Bayesian network classifiers indirectly instead of directly from data. This new model is a classifier based on dependency networks [1], probabilistic graphical models similar to Bayesian networks but in which directed cycles are allowed. The benefit of this approach is that the learning process for dependency networks can be easier and simpler than for Bayesian networks, with the direct consequence that the learning algorithms can have good scalability properties. We show that it is possible to take advantage of this to obtain Bayesian network classifiers without losing classification quality.
