
Publications


Featured research published by Javier G. Marín-Blázquez.


IEEE Transactions on Fuzzy Systems | 2002

From approximative to descriptive fuzzy classifiers

Javier G. Marín-Blázquez; Qiang Shen

This paper presents an effective and efficient approach for translating fuzzy classification rules that use approximative sets to rules that use descriptive sets and linguistic hedges of predefined meaning. It works by first generating rules that use approximative sets from training data, and then translating the resulting approximative rules into descriptive ones. Hedges that are useful for supporting such translations are provided. The translated rules are functionally equivalent to the original approximative ones, or a close equivalent given search time restrictions, while reflecting their underlying preconceived meaning. Thus, fuzzy, descriptive classifiers can be obtained by taking advantage of any existing approach to approximative modeling, which is generally efficient and accurate, while employing rules that are comprehensible to human users. Experimental results are provided and comparisons to alternative approaches given.
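The core of the translation step can be illustrated with a toy sketch (hypothetical names and a deliberately tiny vocabulary, not the authors' code): given a fuzzy set learned approximatively from data, search the predefined descriptive labels and hedges for the combination most similar to it.

```python
# Toy sketch of approximative-to-descriptive translation by similarity search.
# The vocabulary, hedges, and function names here are illustrative assumptions.
import numpy as np

UNIVERSE = np.linspace(0.0, 1.0, 101)

def triangle(a, b, c):
    """Triangular membership function over UNIVERSE (handles flat shoulders)."""
    left = (UNIVERSE - a) / (b - a) if b > a else np.ones_like(UNIVERSE)
    right = (c - UNIVERSE) / (c - b) if c > b else np.ones_like(UNIVERSE)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

# Predefined descriptive vocabulary (assumed, for illustration).
DESCRIPTIVE = {"low": triangle(0.0, 0.0, 0.5),
               "medium": triangle(0.0, 0.5, 1.0),
               "high": triangle(0.5, 1.0, 1.0)}

# Hedges with a fixed, predefined meaning (classic Zadeh-style operators).
HEDGES = {"": lambda m: m,
          "very": lambda m: m ** 2,
          "somewhat": lambda m: m ** 0.5}

def similarity(m1, m2):
    """Jaccard-style similarity of two discretised fuzzy sets."""
    return np.minimum(m1, m2).sum() / (np.maximum(m1, m2).sum() + 1e-12)

def translate(approximative):
    """Return the (hedge, label, score) triple whose set is most similar."""
    return max(((h, lbl, similarity(f(m), approximative))
                for h, f in HEDGES.items()
                for lbl, m in DESCRIPTIVE.items()),
               key=lambda t: t[2])

# An approximative set learned from data, centred near 0.8:
hedge, label, score = translate(triangle(0.6, 0.8, 1.0))
```

A real translator would also search combinations of hedges and fall back to the closest equivalent under a search-time budget, as the abstract describes.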


Information Sciences | 2001

Approximative fuzzy rules approaches for classification with hybrid-GA techniques

Antonio Fernández Gómez-Skarmeta; Mercedes Valdés; Fernando Jiménez; Javier G. Marín-Blázquez

In this paper, the use of different methods from the fuzzy modeling field for classification tasks is evaluated, and the potential of their integration for producing better classification results is investigated. The methods considered, approximative in nature, combine an initial rule-generation step with a subsequent rule-tuning step using different evolutionary algorithms. We analyse the adaptation of existing fuzzy modeling techniques to the classification problem, and the integration of these techniques to improve the classifiers' performance. Finally, a genetic algorithm (GA) for translating approximative rules into similar descriptive ones, while trying to preserve the accuracy of the approximative classifier, is presented. The classical Iris and Cancer data sets are used throughout the evaluation process to form a common ground for comparison and performance analysis.


IWLCS'03-05: Proceedings of the 2003-2005 International Conference on Learning Classifier Systems | 2007

A hyper-heuristic framework with XCS: learning to create novel problem-solving algorithms constructed from simpler algorithmic ingredients

Javier G. Marín-Blázquez; Sonia Schulenburg

Evolutionary Algorithms (EAs) have been applied successfully by academics in a wide variety of commercial areas. From a commercial point of view, however, the story appears somewhat different: the number of success stories does not appear to be as significant as those reported by academics. For instance, Heuristic Algorithms (HAs) are still very widely used to tackle practical problems in operations research, where many of these problems are NP-hard and exhaustive search is often computationally intractable. There are a number of sound reasons why practitioners do not embark so easily on the development and use of EAs. This work is concerned with a new line of research that brings these two approaches together in a harmonious way. The idea is that, instead of using an EA to learn the solution to a specific problem, the EA is used to find an algorithm, i.e. a solution process that can solve a large family of problems well by making use of familiar heuristics. The work of the authors is novel in two ways: within the current body of Learning Classifier Systems (LCS) research, it represents the first attempt to tackle the Bin Packing problem (BP), a different kind of problem from those already studied by the LCS community; and within the Hyper-Heuristics (HH) framework, it represents the first use of LCS as the learning paradigm. Several reward schemes based on single- or multiple-step environments are studied in this paper, tested on a very large set of BP problems and a small set of widely used HAs. Results of the approach are encouraging, outperforming all HAs used individually as well as previously reported work by the authors, including non-LCS approaches (a GA-based approach applied to the same BP problem set) and LCS with single-step environments. Several findings and future lines of work are also outlined.
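The hyper-heuristic idea of switching among familiar heuristics depending on the problem state can be sketched as follows (a minimal toy, not the authors' XCS system; the fixed policy below stands in for what the classifier system would learn):

```python
# Toy hyper-heuristic for Bin Packing: a rule maps a coarse problem state
# to one of several simple packing heuristics. In the paper this policy is
# learnt by XCS; here it is hand-fixed purely for illustration.

def first_fit(item, bins, cap):
    """Place the item in the first bin with room, else open a new bin."""
    for b in bins:
        if sum(b) + item <= cap:
            b.append(item)
            return
    bins.append([item])

def best_fit(item, bins, cap):
    """Place the item in the bin that leaves the least slack."""
    fits = [b for b in bins if sum(b) + item <= cap]
    if fits:
        min(fits, key=lambda b: cap - sum(b) - item).append(item)
    else:
        bins.append([item])

def state(item, cap):
    """Coarse state description: is the next item small, medium or large?"""
    r = item / cap
    return "small" if r < 0.3 else ("medium" if r < 0.6 else "large")

# A learnt policy would be evolved; this one is fixed for illustration.
POLICY = {"small": best_fit, "medium": best_fit, "large": first_fit}

def pack(items, cap):
    bins = []
    for it in items:
        POLICY[state(it, cap)](it, bins, cap)
    return bins

bins = pack([7, 5, 4, 3, 2, 2, 1], cap=10)  # packs 24 units into 3 bins
```

The learning task is then to evolve a POLICY-like rule set whose switching behaviour generalises across a whole family of BP instances, rather than solving one instance directly.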


Soft Computing | 2008

Intrusion detection using a linguistic hedged fuzzy-XCS classifier system

Javier G. Marín-Blázquez; Gregorio Martínez Pérez

Intrusion detection systems (IDS) are a fundamental defence component in the architecture of current telecommunication systems. Misuse detection is one of several approaches to building an IDS. It is based on the automatic generation of detection rules from labelled examples, where each example is either an attack or a normal situation. From this perspective, the problem can be viewed as one of supervised classification. In this sense, this paper proposes the use of XCS as a classification technique to aid in the task of misuse detection in IDS. The final proposed XCS variant uses hedged linguistic fuzzy classifiers to allow for interpretability. This linguistic fuzzy approach provides both the possibility of testing human-designed detectors and a posteriori human fine-tuning of the models obtained. To evaluate performance, not only several classic classification problems, such as the Wine and Breast Cancer datasets, are considered, but also a problem based on real data, KDD-99, a classic in the intrusion detection literature. The evaluation shows that, with simple configurations, the proposed variant obtains competitive results compared with other techniques in the recent literature. It also generates human-interpretable knowledge, something much appreciated by security experts. In fact, this effort is integrated into a global detection architecture, where the security administrator guides part of the intrusion detection (and prevention) process.


IEEE International Conference on Fuzzy Systems | 2000

From approximative to descriptive models

Javier G. Marín-Blázquez; Qiang Shen; A. F. Gómez-Skarmeta

This paper presents a technique for translating rules that use approximative sets into rules that use descriptive sets and linguistic hedges of predefined meaning. The translated descriptive rules are functionally equivalent to the original approximative ones, or the closest possible equivalent, while reflecting their underlying semantics. Thus, descriptive models can take advantage of any existing approach to approximative modelling, which is generally efficient and accurate, whilst employing rules that are comprehensible to human users.


Computer and Information Technology | 2010

Linguistic Fuzzy Logic Enhancement of a Trust Mechanism for Distributed Networks

Félix Gómez Mármol; Javier G. Marín-Blázquez; Gregorio Martínez Pérez

Trust is, in some cases, considered a requirement in highly distributed communication scenarios. Before accessing a particular service, a trust model is used in these scenarios to determine whether the service provider can be trusted, usually on behalf of the final user or service customer and with little intervention from him or her. This is done mainly to automate the process, but also because trust models normally rely on reasoning mechanisms and models that are difficult for humans to understand. In this paper we propose the adaptation of a bio-inspired trust model to deal with linguistic fuzzy labels, which are closer to the human way of thinking. This Linguistic Fuzzy Trust Model also uses fuzzy reasoning. Results show that the new model keeps the accuracy of the underlying bio-inspired trust model and the level of client satisfaction, while enhancing the interpretability of the model.
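The basic interpretability step, mapping a crisp score from the underlying trust model onto linguistic fuzzy labels, can be sketched like this (label names and membership shapes are illustrative assumptions, not the LFTM implementation):

```python
# Toy sketch: fuzzifying a numeric trust score into linguistic labels.
# Labels and triangular shapes below are assumptions for illustration.

def tri(x, a, b, c):
    """Triangular membership of x in (a, b, c), handling flat shoulders."""
    left = (x - a) / (b - a) if b > a else 1.0
    right = (c - x) / (c - b) if c > b else 1.0
    return max(0.0, min(left, right, 1.0))

LABELS = {"not trustworthy": (0.0, 0.0, 0.5),
          "neutral": (0.0, 0.5, 1.0),
          "trustworthy": (0.5, 1.0, 1.0)}

def fuzzify(score):
    """Degree of membership of a crisp trust score in each label."""
    return {lbl: tri(score, *abc) for lbl, abc in LABELS.items()}

def linguistic(score):
    """Label shown to the user: the one the score belongs to most."""
    memberships = fuzzify(score)
    return max(memberships, key=memberships.get)

label = linguistic(0.82)  # a score produced by the underlying trust model
```

The full model would additionally apply fuzzy reasoning over such labels before presenting a recommendation; the point of the sketch is only that the user sees "trustworthy" rather than an opaque 0.82.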


Concurrency and Computation: Practice and Experience | 2012

LFTM, linguistic fuzzy trust mechanism for distributed networks

Félix Gómez Mármol; Javier G. Marín-Blázquez; Gregorio Martínez Pérez

Trust is, in some cases, considered a requirement in highly distributed communication scenarios. Before accessing a particular service, a trust model is used in these scenarios to determine whether the service provider can be trusted, usually on behalf of the final user or service customer and with little intervention from him or her. This is done mainly to automate the process, but also because trust models normally rely on reasoning mechanisms and models that are difficult for humans to understand. In this paper, we propose the adaptation of a bio-inspired trust model to deal with linguistic fuzzy labels, which are closer to the human way of thinking. This Linguistic Fuzzy Trust Model also uses fuzzy reasoning. Results show that the new model keeps the accuracy of the underlying bio-inspired trust model and the level of client satisfaction, while enhancing the interpretability of the model and thus bringing it closer to the final user.


IEEE International Conference on Fuzzy Systems | 2001

Linguistic hedges on trapezoidal fuzzy sets: a revisit

Javier G. Marín-Blázquez; Qiang Shen

Trapezoidal fuzzy sets (including triangular ones) are the most commonly used in fuzzy modelling, for computational simplicity. Applying conventional linguistic hedges to such sets, however, often fails to produce significant changes in the sets' definitions. This paper presents an improved version of more effective hedges specifically devised for trapezoidal fuzzy sets, applied to dilate or concentrate a given set by expanding or shrinking its constituent parts. It also introduces three new hedges, named UPPER, LOWER and MID. Simulation results demonstrate the advantages of utilising the revised and newly introduced hedges for assisting fuzzy modelling, in comparison with conventional ones.
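The motivation is easy to see: a power hedge such as "very" (squaring the membership) leaves a trapezoid's core and zero regions unchanged, so most of the set is unaffected. Hedges that act on the set's constituent parts instead can be sketched as follows (the operators below illustrate the idea of shrinking or expanding the sloping parts; they are not the paper's exact definitions):

```python
# Toy support-modifying hedges on a trapezoid (a, b, c, d) with core [b, c].
# These formulas are illustrative assumptions, not the paper's operators.

def concentrate(a, b, c, d, k=0.5):
    """Shrink the sloping parts toward the core [b, c] by a factor k."""
    return (a + k * (b - a), b, c, d - k * (d - c))

def dilate(a, b, c, d, k=0.5):
    """Expand the sloping parts away from the core [b, c] by a factor k."""
    return (a - k * (b - a), b, c, d + k * (d - c))

# "warm" as the trapezoid (10, 15, 20, 25) over a temperature axis:
very_warm = concentrate(10, 15, 20, 25)      # (12.5, 15, 20, 22.5)
somewhat_warm = dilate(10, 15, 20, 25)       # (7.5, 15, 20, 27.5)
```

Unlike membership-power hedges, these visibly narrow or widen the set's support, which is the kind of significant change the abstract argues conventional hedges fail to deliver on trapezoids.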


Genetic and Evolutionary Computation Conference | 2006

Multi-step environment learning classifier systems applied to hyper-heuristics

Javier G. Marín-Blázquez; Sonia Schulenburg

Heuristic Algorithms (HAs) are very widely used to tackle practical problems in operations research. They are simple, easy to understand, and inspire confidence. Many of these HAs are good for some problem instances but very poor for others. While meta-heuristics try to find the best heuristic and/or parameters to apply to a given problem instance, Hyper-Heuristics (HH) try to combine several heuristics within the same solution-search process, switching among them whenever the circumstances vary. Moreover, instead of solving a single problem instance, HH try to find a general algorithm to apply to whole families of problems. HH use evolutionary methods to search for such a problem-solving algorithm and, once it is produced, apply it to any new problem instance desired. Learning Classifier Systems (LCS), and in particular XCS, represent an elegant and simple way to construct such a composite algorithm. This represents a different kind of problem from those already studied by the LCS community. Previous work using single-step environments has already shown the usefulness of the approach. This paper goes further and studies the novel use of multi-step environments for HH, and an alternative way of considering states, to see whether chains of actions can be learnt. A non-trivial, NP-hard family of problems, Bin Packing (BP), is used as the benchmark for the procedure. Results of the approach are very encouraging, outperforming all HAs used individually as well as previously reported work by the authors, including non-LCS approaches (a GA-based approach applied to the same BP problem set) and LCS with single-step environments.


IEEE International Conference on Fuzzy Systems | 2002

Microtuning of membership functions: accuracy vs. interpretability

Qiang Shen; Javier G. Marín-Blázquez

A major disadvantage of existing methods for tuning descriptive fuzzy models is that the usual constraints on changes to the fuzzy membership functions do not guarantee that radical changes in the definitions, and hence unacceptable disruptions to the interpretability of the original model, will not take place. This paper proposes a new tuning method, called microtuning, which avoids drastic changes by ensuring that any loss of interpretability is kept minimal. This is achieved by requiring the modified sets to have at least a given degree of similarity with their originals. The paper focuses on how accuracy increases as the similarity constraint is relaxed, revealing the trade-off between losing interpretability and gaining precision when tuning a descriptive model. Simulation results show that most of the improvement in model accuracy can be obtained without major changes to the original set definitions; microtuning may be all that is required.
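The similarity constraint at the heart of microtuning can be sketched with a discretised similarity measure (an illustrative version with assumed parameter names, not the paper's implementation):

```python
# Toy sketch of microtuning's similarity constraint: a tuned membership
# function is accepted only if it stays similar enough to the original.
# The threshold name s_min and the measure below are assumptions.
import numpy as np

X = np.linspace(0.0, 1.0, 201)

def trapezoid(a, b, c, d):
    """Trapezoidal membership function discretised over X."""
    up = np.clip((X - a) / max(b - a, 1e-12), 0, 1)
    down = np.clip((d - X) / max(d - c, 1e-12), 0, 1)
    return np.minimum(up, down)

def similarity(m1, m2):
    """|A intersect B| / |A union B| over the discretised universe."""
    return np.minimum(m1, m2).sum() / np.maximum(m1, m2).sum()

def microtune(original, candidate, s_min=0.9):
    """Accept the tuned set only if it keeps s_min similarity, else revert."""
    return candidate if similarity(original, candidate) >= s_min else original

orig = trapezoid(0.2, 0.4, 0.6, 0.8)
small_shift = trapezoid(0.21, 0.41, 0.61, 0.81)   # tiny tweak: accepted
big_shift = trapezoid(0.5, 0.7, 0.9, 1.0)         # drastic change: rejected
```

Relaxing s_min toward 0 recovers unconstrained tuning; raising it toward 1 freezes the model, which is exactly the interpretability-versus-accuracy trade-off the paper studies.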

Collaboration


Dive into Javier G. Marín-Blázquez's collaborations.

Top Co-Authors

Qiang Shen (Aberystwyth University)

Sonia Schulenburg (Royal Society of Edinburgh)

Emma Hart (Edinburgh Napier University)

Peter Ross (Edinburgh Napier University)