
Publication


Featured research published by Pier Luca Lanzi.


Genetic and Evolutionary Computation Conference | 2006

Prediction update algorithms for XCSF: RLS, Kalman filter, and gain adaptation

Pier Luca Lanzi; Daniele Loiacono; Stewart W. Wilson; David E. Goldberg

We study how different prediction update algorithms influence the performance of XCSF. We consider three classical parameter estimation algorithms (NLMS, RLS, and Kalman filter) and four gain adaptation algorithms (K1, K2, IDBD, and IDD). The latter have been shown to perform comparably to the best algorithms (RLS and Kalman filter) while having lower complexity. We apply these algorithms to update classifier prediction in XCSF and compare the performance of the seven versions of XCSF on a set of real functions. Our results show that the best-known algorithms still perform best: XCSF with RLS and XCSF with Kalman filter perform significantly better than the others. In contrast, when added to XCSF, gain adaptation algorithms perform comparably to NLMS, the simplest estimation algorithm and the one used in the original XCSF. Nevertheless, algorithms that perform similarly generalize differently: XCSF with Kalman filter evolves more compact solutions than XCSF with RLS, and gain adaptation algorithms allow better generalization than NLMS.
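The recursive least squares update studied here can be illustrated with a generic sketch. The function below is a textbook RLS step applied to a toy linear target; `rls_update`, the forgetting factor `lam`, and the example data are illustrative, not the authors' implementation:

```python
import numpy as np

def rls_update(w, P, x, target, lam=1.0):
    """One recursive-least-squares step on a classifier's weight vector w.

    P is the inverse correlation matrix, x the bias-augmented input,
    target the payoff to track; lam is the forgetting factor."""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    err = target - w @ x             # prediction error
    w = w + k * err                  # weight update
    P = (P - np.outer(k, Px)) / lam  # inverse-correlation update
    return w, P

# Track the noiseless linear target y = 2*x + 1 online.
rng = np.random.default_rng(0)
w = np.zeros(2)
P = np.eye(2) * 1000.0               # large initial "covariance"
for _ in range(200):
    x_val = rng.uniform(-1.0, 1.0)
    x = np.array([x_val, 1.0])       # input augmented with a bias term
    w, P = rls_update(w, P, x, 2.0 * x_val + 1.0)
print(np.round(w, 3))                # ≈ [2. 1.]
```

On noiseless linear data the estimate converges to the true weights after a handful of samples; the large initial `P` keeps the prior bias negligible.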


Genetic and Evolutionary Computation Conference | 2005

Extending XCSF beyond linear approximation

Pier Luca Lanzi; Daniele Loiacono; Stewart W. Wilson; David E. Goldberg

XCSF is the extension of XCS in which classifier prediction is computed as a linear combination of the classifier inputs and a weight vector associated with each classifier. XCSF can exploit the classifiers' computed prediction to evolve accurate piecewise-linear approximations of functions. In this paper, we take XCSF one step further and show how it can be easily extended to allow polynomial approximations. We test the extended version of XCSF on various approximation problems and show that quadratic/cubic approximations can be used to significantly improve XCSF's generalization capabilities.
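Polynomial computed prediction amounts to linear estimation over expanded inputs. A minimal sketch, not taken from the paper: expand a scalar input to the features (1, x, x²) and fit the weights by least squares in place of a per-classifier online update:

```python
import numpy as np

def poly_features(x, degree=2):
    """Expand inputs x to columns 1, x, x^2, ..., x^degree."""
    return np.vander(x, degree + 1, increasing=True)

x = np.linspace(-1.0, 1.0, 50)
y = 3 * x**2 - x + 0.5                        # target function
w, *_ = np.linalg.lstsq(poly_features(x), y, rcond=None)
print(np.round(w, 3))                         # ≈ [0.5, -1., 3.]
```

With quadratic features a quadratic target is recovered exactly; the same expansion trick lifts any linear estimator to polynomial approximation.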


Genetic and Evolutionary Computation Conference | 2004

Bounding Learning Time in XCS

Martin V. Butz; David E. Goldberg; Pier Luca Lanzi

It has been shown empirically that the XCS classifier system solves typical classification problems in a way that is competitive with other machine learning methods. However, until now, no learning time estimate has been derived analytically for the system. This paper introduces a time estimate that bounds the learning time of XCS until maximally accurate classifiers are found. We assume a domino convergence model in which each attribute is successively specialized to the correct value. It is shown that learning time in XCS scales polynomially in problem length and problem complexity, and thus remains competitive with other machine learning methods.


Genetic and Evolutionary Computation Conference | 2006

Classifier prediction based on tile coding

Pier Luca Lanzi; Daniele Loiacono; Stewart W. Wilson; David E. Goldberg

This paper introduces XCSF extended with tile coding prediction: each classifier implements a tile coding approximator; the genetic algorithm is used to adapt both classifier conditions (i.e., to partition the problem) and the parameters of each approximator; thus XCSF evolves an ensemble of tile coding approximators instead of the typical monolithic approximator used in reinforcement learning. The paper reports a comparison between (i) XCSF with tile coding prediction and (ii) plain tile coding. The results show that XCSF with tile coding always reaches optimal performance, usually learns as fast as the best-parameterized tile coding, and can be faster than the typical tile coding setting. In addition, the analysis of the evolved tile coding ensembles shows that XCSF actually adapts local approximators following what is currently considered the best strategy for adapting the tile coding parameters in a given problem.
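A tile coding approximator of the kind each classifier would implement can be sketched generically; the class below is a standard 1-D tile coder with offset tilings trained by gradient descent on a toy sine target, with illustrative names and parameters rather than the paper's setup:

```python
import numpy as np

class TileCoding:
    """Minimal 1-D tile coding approximator: several offset tilings over
    [lo, hi]; the prediction is the sum of one weight per tiling."""

    def __init__(self, n_tilings=8, n_tiles=10, lo=0.0, hi=1.0, alpha=0.1):
        self.n_tilings, self.n_tiles = n_tilings, n_tiles
        self.lo, self.width = lo, (hi - lo) / n_tiles
        self.w = np.zeros((n_tilings, n_tiles + 1))  # +1 for offset overflow
        self.alpha = alpha / n_tilings               # step size per tiling

    def _tiles(self, x):
        """Index of the active tile in each tiling (each tiling is offset)."""
        offsets = np.arange(self.n_tilings) * self.width / self.n_tilings
        return np.clip(((x - self.lo + offsets) / self.width).astype(int),
                       0, self.n_tiles)

    def predict(self, x):
        return self.w[np.arange(self.n_tilings), self._tiles(x)].sum()

    def update(self, x, target):
        idx = self._tiles(x)
        self.w[np.arange(self.n_tilings), idx] += \
            self.alpha * (target - self.predict(x))

rng = np.random.default_rng(1)
tc = TileCoding()
for _ in range(5000):
    x = rng.uniform(0.0, 1.0)
    tc.update(x, np.sin(2 * np.pi * x))
print(round(tc.predict(0.25), 2))   # close to sin(pi/2) = 1
```

In the paper's setting, XCSF would evolve an ensemble of such approximators, with the GA also adapting the tiling parameters per classifier.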


Genetic and Evolutionary Computation Conference | 2005

XCS with computed prediction in multistep environments

Pier Luca Lanzi; Daniele Loiacono; Stewart W. Wilson; David E. Goldberg

XCSF extends the typical concept of learning classifier systems through the introduction of computed classifier prediction. Initial results show that XCSF's computed prediction can be used to evolve accurate piecewise-linear approximations of simple functions. In this paper, we take XCSF one step further and apply it to typical reinforcement learning problems involving delayed rewards. In essence, we use XCSF as a method of generalized (linear) reinforcement learning to evolve piecewise-linear approximations of the payoff surfaces of typical multistep problems. Our results show that XCSF can easily evolve optimal and near-optimal solutions for problems introduced in the literature to test linear reinforcement learning methods.


Genetic and Evolutionary Computation Conference | 2006

Using convex hulls to represent classifier conditions

Pier Luca Lanzi; Stewart W. Wilson

This paper presents a novel representation of classifier conditions based on convex hulls. A classifier condition is represented by a set of points in the problem space. These points identify a convex hull that delineates a convex region in the problem space; the condition matches all the problem instances inside this region. XCSF with convex conditions is applied to function approximation problems and its performance is compared to that of XCSF with interval conditions. The comparison shows that XCSF with convex hulls converges faster than XCSF with interval conditions. However, convex conditions usually do not produce more compact solutions.
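The matching test for such a condition reduces to a point-in-convex-polygon check. A minimal 2-D sketch (the function name and the triangle example are illustrative, not from the paper): a point matches if it lies on the left of every edge of a counter-clockwise hull:

```python
def matches(hull, point):
    """Does a classifier whose condition is the convex region bounded by
    `hull` (2-D vertices in counter-clockwise order) match `point`?"""
    px, py = point
    n = len(hull)
    for i in range(n):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % n]
        # cross product of the edge vector and the vector to the point;
        # negative means the point is to the right of this edge -> outside
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True

triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]   # CCW vertices
print(matches(triangle, (0.5, 0.3)))    # True: inside the hull
print(matches(triangle, (1.0, 1.0)))    # False: outside the hull
```

This check is linear in the number of hull points, which is why the number of points per condition matters for matching cost.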


IEEE International Conference on Evolutionary Computation | 2006

XCSF with Neural Prediction

Pier Luca Lanzi; Daniele Loiacono

We extend XCSF with neural prediction, replacing the linear prediction function used in XCSF with a feedforward multilayer neural network. Each classifier exploits a neural network to approximate the payoff surface associated with the target problem, while the genetic algorithm adapts classifier conditions, classifier actions, and the network structure. We compare XCSF with neural prediction to XCSF with linear prediction. Our results show that XCSF with neural prediction, XCSFNN, can outperform XCSF.
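The prediction function each classifier carries can be sketched as a plain one-hidden-layer forward pass; in XCSFNN the weights would be adapted by the genetic algorithm, whereas here they are just random placeholders (`neural_prediction` and all shapes are illustrative):

```python
import numpy as np

def neural_prediction(x, params):
    """Forward pass of a one-hidden-layer network computing a classifier's
    payoff prediction from the input x."""
    W1, b1, W2, b2 = params
    h = np.tanh(W1 @ x + b1)   # hidden-layer activations
    return W2 @ h + b2         # scalar payoff prediction

rng = np.random.default_rng(2)
params = (rng.normal(size=(4, 2)), rng.normal(size=4),   # input -> hidden
          rng.normal(size=4), rng.normal())              # hidden -> output
pred = neural_prediction(np.array([0.1, 0.9]), params)
print(float(pred))
```

The point of the extension is that each classifier's network only has to fit the payoff surface locally, inside the region its condition matches.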


Genetic and Evolutionary Computation Conference | 2008

An analysis of matching in learning classifier systems

Martin V. Butz; Pier Luca Lanzi; Xavier Llorà; Daniele Loiacono

We investigate rule matching in learning classifier systems for problems involving binary and real inputs. We consider three rule encodings: the widely used character-based encoding, a specificity-based encoding, and a binary encoding used in Alecsys. We compare the performance of the three algorithms both on matching alone and on typical test problems. The results on matching alone show that population generality influences the performance of the string-based matching algorithms in different ways: character-based encoding becomes slower as generality increases, while specificity-based encoding becomes faster. The results on typical test problems show that the specificity-based representation can halve the time required for matching, but also that binary encoding is about ten times faster on the most difficult problems. Moreover, we extend specificity-based encoding to real inputs and propose an algorithm that can halve the time required for matching real inputs with an interval-based representation.


Genetic and Evolutionary Computation Conference | 2007

Modeling selection pressure in XCS for proportionate and tournament selection

Albert Orriols-Puig; Kumara Sastry; Pier Luca Lanzi; David E. Goldberg; Ester Bernadó-Mansilla

In this paper, we derive models of the selection pressure in XCS for proportionate (roulette wheel) selection and tournament selection. We show that these models can explain the empirical results previously presented in the literature. We validate the models on simple problems, showing that (i) when the model assumptions hold, the theory perfectly matches the empirical evidence, and (ii) when the model assumptions do not hold, the theory can still provide qualitative explanations of the experimental results.
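The two selection schemes being modeled can be sketched generically (these are the standard textbook operators, not the paper's XCS-specific formulation; names and fitness values are illustrative):

```python
import random

def roulette_wheel(fitnesses):
    """Proportionate selection: index i is chosen with probability
    fitness[i] / sum(fitnesses)."""
    r = random.uniform(0.0, sum(fitnesses))
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if r <= acc:
            return i
    return len(fitnesses) - 1      # guard against floating-point round-off

def tournament(fitnesses, size=3):
    """Tournament selection: the fittest of `size` uniformly drawn candidates."""
    candidates = random.sample(range(len(fitnesses)), size)
    return max(candidates, key=lambda i: fitnesses[i])

random.seed(0)
fit = [1.0, 2.0, 4.0, 8.0]
wins = sum(tournament(fit) == 3 for _ in range(1000))
print(wins / 1000)   # the fittest wins roughly 3 out of 4 tournaments
```

Note the qualitative difference the models capture: tournament pressure depends only on fitness rank (scaling all fitnesses leaves it unchanged), while roulette-wheel pressure depends on the fitness values themselves.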


Linkage in Evolutionary Computation | 2008

Real-Coded Extended Compact Genetic Algorithm Based on Mixtures of Models

Pier Luca Lanzi; Luigi Nichetti; Kumara Sastry; Davide Voltini; David E. Goldberg

This paper presents a real-coded estimation of distribution algorithm (EDA) inspired by the extended compact genetic algorithm (ECGA) and the real-coded Bayesian Optimization Algorithm (rBOA). Like ECGA, the proposed algorithm partitions the problem variables into a set of clusters that are manipulated as independent variables and estimates the population distribution using marginal product models (MPMs); like rBOA, it employs finite mixtures of models and does not use any sort of discretization. Accordingly, the proposed real-coded EDA can be viewed either as the extension of ECGA to real-valued domains by means of finite mixture models or as a simplification of the real-coded BOA to marginal product models. The results reported here show that the number of evaluations required by the proposed algorithm scales sub-quadratically with the problem size on additively separable problems.
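The general estimate-and-sample loop behind such algorithms can be sketched in a much simplified form. The version below uses independent univariate Gaussian marginals and truncation selection, with none of the MPM clustering or mixture-model machinery of the actual algorithm; `sphere` is a standard additively separable test function, not one from the paper:

```python
import numpy as np

def sphere(pop):
    """Additively separable test problem: f(x) = sum x_i^2 (to minimize)."""
    return np.sum(pop ** 2, axis=1)

rng = np.random.default_rng(3)
pop = rng.uniform(-5.0, 5.0, size=(100, 5))          # initial population
for _ in range(60):
    elite = pop[np.argsort(sphere(pop))[:30]]        # truncation selection
    mu = elite.mean(axis=0)                          # estimate the marginals
    sigma = elite.std(axis=0) + 1e-6                 # floor to avoid collapse
    pop = rng.normal(mu, sigma, size=(100, 5))       # sample a new population
print(round(float(sphere(pop).min()), 6))            # best fitness approaches 0
```

The real algorithm replaces the independent-Gaussian model with clustered marginal product models built from finite mixtures, which is what lets it capture dependencies between variables.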

Collaboration


Dive into Pier Luca Lanzi's collaborations.

Top Co-Authors

John H. Holmes

University of Pennsylvania
