Eligius M. T. Hendrix
University of Málaga
Publications
Featured research published by Eligius M. T. Hendrix.
European Journal of Operational Research | 2002
J.K. Gigler; Eligius M. T. Hendrix; R. A. Heesen; V. G. W. van den Hazelkamp; G. Meerdink
A methodology for optimisation of agri-chains by dynamic programming (DP) is presented which explicitly deals with the appearance and quality of products. In agri-chains, a product can be characterised by two types of states, namely appearance and quality states. Appearance states are influenced by handling actions; quality states are influenced by processing, transportation and storage actions. The concept of chain optimisation by DP is elaborated. Chain optimisation refers to the construction of optimal routes defining which actors in the chain should perform which actions at which process conditions at minimum integral cost. Models describing the quality development of a product as a function of the process conditions can be included in the DP methodology. The DP methodology has been implemented in a software tool and is illustrated with a case for an agri-chain supplying willow biomass fuel to an energy plant.
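A minimal sketch (in Python, with hypothetical stages, actions, costs and state transitions, not the paper's implementation) of the chain-optimisation idea: a forward dynamic program that propagates appearance/quality states through the chain stages and keeps the cheapest route to each reachable state.

```python
# Hypothetical chain stages and actions with costs; illustrative only.
stages = ["farm", "transport", "plant"]
actions = {
    "farm":      [("harvest_chips", 5.0), ("harvest_chunks", 3.0)],
    "transport": [("truck", 2.0), ("none", 0.0)],
    "plant":     [("force_dry", 4.0), ("store", 1.0)],
}

def transition(state, stage, action):
    """Toy transition: handling sets appearance, storage lowers quality."""
    appearance, quality = state
    if stage == "farm":
        appearance = "chips" if action == "harvest_chips" else "chunks"
    if action == "store":
        quality -= 1
    return (appearance, quality)

def optimal_route(initial_state):
    """Forward DP: keep the cheapest route to every reachable state per stage."""
    value = {initial_state: (0.0, [])}
    for stage in stages:
        new_value = {}
        for state, (cost, route) in value.items():
            for action, action_cost in actions[stage]:
                next_state = transition(state, stage, action)
                candidate = (cost + action_cost, route + [(stage, action)])
                if next_state not in new_value or candidate[0] < new_value[next_state][0]:
                    new_value[next_state] = candidate
        value = new_value
    return min(value.values(), key=lambda v: v[0])

print(optimal_route(("stems", 10)))
```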
Biomass & Bioenergy | 1999
J.K. Gigler; G. Meerdink; Eligius M. T. Hendrix
The main objective of this study was to develop minimum-cost supply strategies for willow to energy plants (two plant sizes: 0.5 and 30 MWe; two energy conversion technologies: combustion and gasification). The time span between harvest and energy conversion varied from 1 to 12 months. For a realistic comparison, the different supply chains were based on the same initial characteristics (i.e., moisture content 50% wb at harvest) and the same final fuel specifications at the energy plant (moisture content 20% wb, particle size chips or chunks). Cost calculations were based on the integral cost calculation method and were presented for all process steps. The main conclusion was that the time span between harvest and energy conversion and the size and conversion technology of the energy plant largely influence the design of the supply chain and consequently the supply costs. The fuel supply costs ranged from 17.6 to 26.1 ECU/t DM (where DM stands for oven dry matter), or 0.010 to 0.023 ECU/kWh. The cost reduction that could be achieved by choosing the minimum-cost chain design could be as high as 45%, or 14.4 ECU/t DM. Generally, the minimum-cost strategy for supplying fuel to an energy plant running all year round on willow was as follows:
• for farmers who supply their willow within 6 months after harvest: harvest as chips, forced drying at the farm and transport (if necessary);
• for farmers who supply their willow beyond 6 months after harvest: harvest as chunks or stems, natural drying near the willow field, transport (if necessary) and central chipping (if applicable).
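Purely as an illustration of comparing integral chain costs, the toy script below sums per-step costs for two alternative chain designs and picks the cheaper one; the step names and ECU/t DM figures are made up and are not the study's data.

```python
# Illustrative integral cost comparison of alternative willow supply chains.
chains = {
    "chips + forced drying at farm": ["harvest_chips", "forced_drying", "transport"],
    "chunks + natural drying":       ["harvest_chunks", "natural_drying", "transport", "central_chipping"],
}
step_cost = {  # hypothetical ECU per tonne dry matter
    "harvest_chips": 8.0, "harvest_chunks": 6.0,
    "forced_drying": 7.0, "natural_drying": 3.0,
    "transport": 4.0, "central_chipping": 5.0,
}

costs = {name: sum(step_cost[s] for s in steps) for name, steps in chains.items()}
for name, cost in costs.items():
    print(f"{name}: {cost:.1f} ECU/t DM")
print("cheapest design:", min(costs, key=costs.get))
```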
Ecological Modelling | 1994
Olivier Klepper; Eligius M. T. Hendrix
This paper is concerned with the inverse problem in ecological modelling: how to update information (if any) on parameter uncertainty by using information on the actual system. In the probabilistic (Bayesian) context this implies estimating the posterior probability distribution of the parameters on the basis of prior information and the probability distribution of the measurements. In other contexts the procedure may result in a set or a possibility distribution (fuzzy representation) in parameter space. Although the solution to the inverse problem is conceptually straightforward, in practice it requires either fairly restrictive assumptions or large computational effort. The paper presents an algorithm to solve the inverse problem in various contexts which is generally applicable and computationally efficient. The method is illustrated on various test functions and an actual case study. For calibration problems with a moderate number of dimensions (or higher-dimensional problems that can be reduced to these), the new algorithm provides a robust and relatively efficient way to characterize posterior parameter distributions or sets. The algorithm requires a number of function evaluations proportional to the logarithm of the search volume, whereas for conventional random search this number increases in proportion to the search volume itself. For inherently high-dimensional problems the present approach is still relatively efficient, but slow in absolute numbers of function evaluations.
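A rough sketch of the set-based view of the inverse problem, using plain uniform (rejection) sampling rather than the paper's more efficient algorithm; the toy decay model, prior box and acceptance tolerance are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta, t):
    """Toy exponential decay model y = a * exp(-b t)."""
    a, b = theta
    return a * np.exp(-b * t)

t = np.linspace(0.0, 5.0, 20)
y_obs = model((2.0, 0.7), t) + rng.normal(0.0, 0.05, t.size)   # synthetic measurements

lower, upper = np.array([0.0, 0.0]), np.array([5.0, 2.0])      # prior parameter box
samples = rng.uniform(lower, upper, size=(20000, 2))            # conventional random search
rmse = np.sqrt(((np.array([model(th, t) for th in samples]) - y_obs) ** 2).mean(axis=1))
accepted = samples[rmse < 0.1]                                   # parameters consistent with data
print(accepted.shape[0], "accepted; mean parameters:", accepted.mean(axis=0))
```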
IEEE Transactions on Geoscience and Remote Sensing | 2012
Eligius M. T. Hendrix; Inmaculada García; Javier Plaza; Gabriel Martín; Antonio Plaza
Spectral unmixing is an important technique for hyperspectral data exploitation, in which a mixed spectral signature is decomposed into a collection of spectrally pure constituent spectra, called endmembers, and a set of corresponding fractions, or abundances, that indicate the proportion of each endmember present in the mixture. In recent years, several algorithms have been developed for automatic or semiautomatic endmember extraction. Some available approaches assume that the input data set contains at least one pure spectral signature for each distinct material and conduct a search for the most spectrally pure signatures in the high-dimensional space spanned by the hyperspectral data. Among these approaches, those aimed at maximizing the volume of the simplex that can be formed using available spectral signatures have found wide acceptance. However, the presence of spectrally pure constituents is unlikely in remotely sensed hyperspectral scenes due to spatial resolution, mixing phenomena, and other considerations. To address this issue, other algorithms have been developed that generate virtual endmembers (not necessarily present among the input data samples) by finding the simplex of minimum volume that encloses all available observations. In this paper, we discuss maximum-volume versus minimum-volume enclosing solutions and develop a novel algorithm in the latter category which incorporates fractional abundance estimation as an internal step of the endmember search process (i.e., it does not require an external method to produce endmember fractional abundances). The method is based on iteratively enclosing the observations in a lower-dimensional space and removing observations that are most likely not to be enclosed by the simplex of the endmembers to be estimated. The performance of the algorithm is investigated and compared to that of other algorithms (with and without the pure pixel assumption) using synthetic and real hyperspectral data sets collected by a variety of hyperspectral imaging instruments.
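For readers unfamiliar with the linear mixing model that underlies this work, here is a small sketch of abundance estimation for a single pixel via non-negative least squares with a sum-to-one penalty row (using SciPy). It illustrates the mixing model and the abundance-estimation step only, not the proposed minimum-volume algorithm; all data below are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
bands, n_endmembers = 50, 3
E = rng.random((bands, n_endmembers))             # endmember spectra as columns

true_ab = np.array([0.6, 0.3, 0.1])               # abundances sum to one
pixel = E @ true_ab + rng.normal(0, 0.01, bands)  # mixed pixel with noise

# Encourage the sum-to-one constraint by appending a weighted constraint row.
delta = 10.0
A = np.vstack([E, delta * np.ones((1, n_endmembers))])
b = np.concatenate([pixel, [delta]])
abundances, _ = nnls(A, b)                        # non-negative abundance estimates
print(abundances)
```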
Journal of Global Optimization | 2001
Eligius M. T. Hendrix; Pilar Martínez Ortigosa; Inmaculada García
Controlled Random Search (CRS) is a simple population-based algorithm which, despite its attractiveness for practical use, has never been very popular among researchers in Global Optimization due to the difficulties in analysing the algorithm. In this paper, a framework to study the behaviour of algorithms in general is presented and embedded into the context of our view on questions in Global Optimization. Using as a reference a theoretical ideal algorithm called N-points Pure Adaptive Search (NPAS), some new analytical results provide bounds on the speed of convergence and the Success Rate of CRS in the limit, once it has settled down into simple behaviour. To relate the performance of the algorithm to characteristics of the functions to be optimized, simple constructed test functions, called extreme cases, are used.
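The following is a minimal Price-style CRS sketch, included only to show the population-based mechanism the analysis refers to; the variant studied in the paper and its parameters may differ, and the test function and bounds are illustrative.

```python
import numpy as np

def crs(f, lower, upper, pop_size=25, iters=2000, seed=0):
    """Controlled Random Search: reflect a random simplex point through the centroid
    and replace the worst population member when the trial point improves on it."""
    rng = np.random.default_rng(seed)
    n = len(lower)
    pop = rng.uniform(lower, upper, size=(pop_size, n))
    vals = np.array([f(x) for x in pop])
    for _ in range(iters):
        idx = rng.choice(pop_size, size=n + 1, replace=False)   # random simplex
        centroid = pop[idx[:-1]].mean(axis=0)
        trial = 2.0 * centroid - pop[idx[-1]]                   # reflection step
        if np.any(trial < lower) or np.any(trial > upper):
            continue
        f_trial = f(trial)
        worst = vals.argmax()
        if f_trial < vals[worst]:                               # controlled replacement
            pop[worst], vals[worst] = trial, f_trial
    best = vals.argmin()
    return pop[best], vals[best]

x, fx = crs(lambda x: np.sum(x ** 2), np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print(x, fx)
```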
European Journal of Operational Research | 1996
Eligius M. T. Hendrix; Carmen J. Mecking; Theo H.B. Hendriks
A mathematical description is given of robust solutions in the context of a product design problem, formulated as finding a point in a feasible area defined by inequalities. Finding the "most interior" or "most robust" point in this area appears to be strongly related to the maximization of weighted slacks. The case in which the inequalities are linear is analyzed.
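A hedged sketch of the "most interior point" idea for a linear feasible area A x ≤ b: maximise a common weighted slack t so that A x + w t ≤ b. This follows the weighted-slack interpretation in the abstract; the weights (row norms, giving the Chebyshev centre) and the data are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # unit box 0 <= x <= 1
b = np.array([1.0, 1.0, 0.0, 0.0])
w = np.linalg.norm(A, axis=1)          # slack weights; row norms give the Chebyshev centre

# Decision variables z = (x1, x2, t); maximise t  <=>  minimise -t.
c = np.array([0.0, 0.0, -1.0])
A_ub = np.hstack([A, w[:, None]])      # A x + w t <= b
res = linprog(c, A_ub=A_ub, b_ub=b, bounds=[(None, None), (None, None), (0, None)])
print("most interior point:", res.x[:2], "maximal weighted slack:", res.x[2])
```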
OR Spectrum | 2009
M. Elena Sáiz; Eligius M. T. Hendrix; José Fernández; Blas Pelegrín
Modelling the location decision of two competing firms that intend to build a new facility in a planar market can be done with a Huff-like Stackelberg location problem. In a Huff-like model, the market share captured by a firm is given by a gravity model determined by distance calculations to the facilities. In a Stackelberg model, the leader is the firm that locates first and takes into account the actions of the competing chain (the follower), which locates a new facility after the leader. The follower problem is known to be a hard global optimisation problem. The leader problem is even harder, since the leader has to decide on a location given the optimal action of the follower. So far, only heuristic approaches to the leader problem have been tested in the literature. Our research question is to solve the leader problem rigorously, in the sense of having a guarantee on the reached accuracy. To answer this question, we develop a branch-and-bound approach. Essentially, the bounding is based on the zero-sum concept: what is a gain for one chain is a loss for the other. We also discuss several ways of creating bounds for the underlying (follower) sub-problems, and show their performance on numerical cases.
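A small sketch of the Huff gravity rule used in such models (not the branch-and-bound method itself): the share a chain captures at a demand point is its facilities' attraction, here taken as inverse distance to a power, relative to the total attraction of all facilities. All locations, buying powers and the distance exponent below are hypothetical.

```python
import numpy as np

def huff_share(new_facility, own, rivals, demand_points, buying_power, lam=2.0):
    """Market share captured by a chain after adding new_facility (toy Huff model)."""
    own_all = np.vstack([own, new_facility])
    def attraction(facilities):
        d = np.linalg.norm(demand_points[:, None, :] - facilities[None, :, :], axis=2)
        return (1.0 / np.maximum(d, 1e-9) ** lam).sum(axis=1)   # gravity attraction
    a_own, a_riv = attraction(own_all), attraction(rivals)
    return float((buying_power * a_own / (a_own + a_riv)).sum())

rng = np.random.default_rng(2)
demand = rng.uniform(0, 10, size=(50, 2))        # demand point locations
power = rng.uniform(1, 5, size=50)               # buying power per demand point
own = np.array([[2.0, 2.0]])                     # existing own facility
rivals = np.array([[8.0, 8.0]])                  # competing chain's facility
print(huff_share(np.array([5.0, 5.0]), own, rivals, demand, power))
```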
Ecological Modelling | 2000
D. Makowski; Eligius M. T. Hendrix; M.K. van Ittersum; W.A.H. Rossing
Nearly optimal solutions of linear programming models provide useful information when some of the relevant objectives and constraints are not made explicit in the models. This paper presents a three-step framework to study nearly optimal solutions of linear programming models developed for land use exploration. The first step is to define low-dimensional vectors called 'aspects' to summarize the solutions. The second step is to generate a set of optimal and nearly optimal solutions. Three methods are proposed for generating nearly optimal solutions. Method i proceeds by minimization of sums of decision variables that are non-zero in the optimal solution and in previously generated nearly optimal solutions. Method ii proceeds by maximization of sums of randomly selected decision variables. Method iii is targeted at finding nearly optimal solutions with very different values for the aspects. Finally, the third step of the framework is to present graphically the values of the aspects of the generated solutions. The framework is illustrated with a case study in which a linear programming model developed for land use exploration at the European level is presented. First, an optimal solution is calculated with the model by minimizing nitrogen loss subject to constraints on area, water use, product balances, and manure balances. Then, 52 nearly optimal solutions are generated using methods i, ii, and iii with a deviation tolerance of 5% from the optimal value of nitrogen loss. Each solution is summarized by three aspects that represent the allocation of the agricultural area among two regions, among five types of crop rotation, and among five production orientations, respectively. Graphical presentation of these aspects and principal component analysis show that nearly optimal solutions can be very different from the optimal solution in terms of land use allocation. For example, the agricultural area allocated to the north of the European Community varies from 10.9 to 50.2×10⁶ ha among the 52 generated nearly optimal solutions, whereas this area equals 26.5×10⁶ ha in the optimal solution. The comparison of methods i, ii, and iii shows that the solutions generated with method iii are considerably more contrasted than those generated with methods i and ii. The case study presented in this paper illustrates how our methodological framework can be used to help a stakeholder select a satisfactory solution according to issues that cannot be quantified in a model.
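A hedged sketch, with made-up data, of generating nearly optimal LP solutions in the spirit of method ii: solve the LP, then re-optimise a random objective (maximising the sum of randomly selected decision variables) while constraining the original objective to stay within the 5% deviation tolerance.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
c = np.array([2.0, 3.0, 1.0, 4.0])                    # toy objective to minimise
A_ub = np.vstack([rng.uniform(0.5, 2.0, size=(3, 4)), # toy resource constraints
                  -np.ones((1, 4))])                   # sum of activities >= 5
b_ub = np.array([20.0, 16.0, 24.0, -5.0])

base = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
tol = 1.05 * base.fun                                  # 5% deviation tolerance

near_optimal = []
for _ in range(5):
    subset = rng.choice(4, size=2, replace=False)      # randomly selected variables
    obj = np.zeros(4); obj[subset] = -1.0              # maximise their sum
    A2 = np.vstack([A_ub, c])                          # original objective kept near optimum
    b2 = np.append(b_ub, tol)
    res = linprog(obj, A_ub=A2, b_ub=b2, bounds=[(0, None)] * 4)
    near_optimal.append(res.x)
print(np.round(near_optimal, 2))
```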
Optimization Methods & Software | 2008
Leocadio G. Casado; Martínez Ja; Inmaculada García; Eligius M. T. Hendrix
The focus of this paper is the analysis and evaluation of a type of parallel strategy applied to the Advanced Multidimensional Interval analysis Global Optimization (AMIGO) algorithm. We investigate two parallel versions of AMIGO, called Parallel AMIGO (PAMIGO): Global-PAMIGO and Local-PAMIGO. The idea behind our study is that, in order to exploit the potential parallelism of algorithms, researchers need to adapt them to the target computer architectures. Our PAMIGO algorithms have been designed for shared-memory architectures and are based on a threaded programming model, which is suitable for current personal computers with multicore processors. Our first experimental results show a promising speed-up for up to four processing units. We analyse the loss of efficiency when the number of processing units is greater than four by profiling the algorithm executions. Secondly, we experiment with the use of a local memory allocator per thread. This increases efficiency by reducing the number of lock conflicts caused by the standard system memory allocator. Our experimental results for both PAMIGO versions, using up to 15 processing units, show good performance for hard-to-solve problems on unicore and multicore processors. It is noteworthy that both versions of PAMIGO obtain similar performance. Our experiments may be useful for researchers who use parallel B&B algorithms.
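As background, here is a small serial interval branch-and-bound sketch for a one-dimensional function, of the general kind that AMIGO-type algorithms build on; the threaded parallelisation and per-thread memory allocation studied in the paper are not reproduced, and the objective function and its interval extension are illustrative.

```python
import heapq

def f_interval(lo, hi):
    """Natural interval extension of f(x) = x^2 - 2x on [lo, hi]."""
    sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    sq_hi = max(lo * lo, hi * hi)
    return sq_lo - 2.0 * hi, sq_hi - 2.0 * lo     # [x^2] + [-2x]

def interval_bb(lo, hi, tol=1e-6):
    """Best-first interval branch-and-bound for the global minimum on [lo, hi]."""
    lb, ub = f_interval(lo, hi)
    best_upper = ub                                # best known upper bound on the minimum
    heap = [(lb, lo, hi)]                          # boxes ordered by lower bound
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best_upper:                        # box cannot contain the minimum
            continue
        if b - a < tol:                            # narrow enough: lb brackets the minimum
            return lb, (a, b)
        m = 0.5 * (a + b)
        for c, d in ((a, m), (m, b)):              # bisect and bound the halves
            child_lb, child_ub = f_interval(c, d)
            best_upper = min(best_upper, child_ub)
            if child_lb <= best_upper:
                heapq.heappush(heap, (child_lb, c, d))
    return best_upper, None

print(interval_bb(-3.0, 3.0))                      # true minimum is f(1) = -1
```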
Journal of Global Optimization | 2005
Bill Baritompa; Eligius M. T. Hendrix
This discussion paper for the SGO 2001 Workshop considers the process of investigating stochastic global optimization algorithms. It outlines a general plan for the systematic study of their behavior. It raises questions about performance criteria, characteristics of test cases and classification of algorithms.