Publication


Featured research published by Constantinos Voglis.


Applied Mathematics and Computation | 2009

Towards Ideal Multistart. A stochastic approach for locating the minima of a continuous function inside a bounded domain

Constantinos Voglis; Isaac E. Lagaris

A stochastic global optimization method based on Multistart is presented. In this method, the local search is applied conditionally, with a probability that takes into account the topology of the objective function at the level of detail offered by the current status of the exploration. As a result, the number of unnecessary local searches is drastically limited, yielding an efficient method. Results of its application to a set of common test functions are reported, along with a performance comparison against other established methods of similar nature.
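The mechanism can be illustrated with a short, hedged sketch in C (the names and the distance-based criterion are illustrative, not the authors' code): a local search is launched only with a probability that grows with the distance of the sampled point from the minimizers already found, so samples falling in explored regions rarely trigger redundant searches.

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define DIM     2
    #define SAMPLES 1000
    #define MAXMIN  100

    /* placeholder local search: a BFGS or Newton routine would be called here */
    static void local_search(double *x) { (void)x; }

    int main(void)
    {
        double minima[MAXMIN][DIM];
        int nmin = 0, started = 0;
        srand(1);
        for (int s = 0; s < SAMPLES; s++) {
            double x[DIM];
            for (int i = 0; i < DIM; i++)              /* uniform sample in [-5,5]^DIM */
                x[i] = -5.0 + 10.0 * rand() / RAND_MAX;

            double dmin = 1e30;                        /* squared distance to the      */
            for (int m = 0; m < nmin; m++) {           /* nearest minimizer found so far */
                double d = 0.0;
                for (int i = 0; i < DIM; i++)
                    d += (x[i] - minima[m][i]) * (x[i] - minima[m][i]);
                if (d < dmin) dmin = d;
            }

            /* launch the local search with a probability that grows with the
               distance from known minimizers; nearby samples are usually skipped */
            double p = 1.0 - exp(-dmin);
            if ((double)rand() / RAND_MAX < p) {
                local_search(x);
                started++;
                if (nmin < MAXMIN) {
                    for (int i = 0; i < DIM; i++) minima[nmin][i] = x[i];
                    nmin++;
                }
            }
        }
        printf("local searches launched: %d out of %d samples\n", started, SAMPLES);
        return 0;
    }

In the paper, the launch probability is derived from the local topology of the objective at the current stage of exploration rather than from a plain distance test; the sketch conveys only the flavour of skipping redundant searches.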


Computer Physics Communications | 2012

MEMPSODE: A global optimization software based on hybridization of population-based algorithms and local searches

Constantinos Voglis; Konstantinos E. Parsopoulos; D.G. Papageorgiou; Isaac E. Lagaris; Michael N. Vrahatis

We present MEMPSODE, a global optimization software tool that integrates two prominent population-based stochastic algorithms, namely Particle Swarm Optimization and Differential Evolution, with well-established, efficient local search procedures made available via the Merlin optimization environment. The resulting hybrid algorithms, also referred to as Memetic Algorithms, combine the space-exploration advantage of their global part with the efficiency of the local search, and as expected they have displayed highly efficient behavior in solving diverse optimization problems. The proposed software is carefully parametrized so as to offer complete control and fully exploit the algorithmic virtues. It is accompanied by comprehensive examples and a large set of widely used test functions, including tough atomic cluster and protein conformation problems.
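A minimal sketch of one memetic iteration, assuming a plain PSO velocity/position update followed by an occasional local refinement of the incumbent (illustrative names; the actual package couples the swarm with local searches taken from the Merlin environment):

    #include <stdlib.h>

    #define N   20          /* swarm size        */
    #define DIM 5           /* problem dimension */

    extern double f(const double *x);            /* objective, supplied elsewhere */
    extern void   local_search(double *x);       /* e.g. a BFGS run, not shown    */

    static double rnd(void) { return (double)rand() / RAND_MAX; }

    void memetic_iteration(double x[N][DIM], double v[N][DIM],
                           double pbest[N][DIM], double gbest[DIM])
    {
        const double w = 0.72, c1 = 1.49, c2 = 1.49;   /* common PSO constants */
        for (int i = 0; i < N; i++) {
            for (int d = 0; d < DIM; d++) {
                v[i][d] = w*v[i][d] + c1*rnd()*(pbest[i][d] - x[i][d])
                                    + c2*rnd()*(gbest[d]    - x[i][d]);
                x[i][d] += v[i][d];
            }
            if (f(x[i]) < f(pbest[i]))                 /* update personal best  */
                for (int d = 0; d < DIM; d++) pbest[i][d] = x[i][d];
        }
        /* memetic part: refine the best-known point with a local search now and then */
        if (rnd() < 0.1)
            local_search(gbest);
    }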


Computer Physics Communications | 2009

A numerical differentiation library exploiting parallel architectures

Constantinos Voglis; Panagiotis E. Hadjidoukas; Isaac E. Lagaris; D.G. Papageorgiou

We present a software library for numerically estimating first- and second-order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in formulas that are accurate to order O(h), O(h²), and O(h⁴), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used when bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared-memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters as well as modern multi-core systems; owing to the independent character of the derivative computations, the speedup scales almost linearly with the number of available processors/cores.
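As an illustration of why the speedup scales so well, each component of a central-difference gradient with O(h²) truncation error is independent of the others, so the loop over components parallelizes trivially. The sketch below uses a plain OpenMP loop and illustrative names; it is not the library's API.

    #include <omp.h>
    #include <string.h>

    void grad_central(double (*f)(const double *x, int n),
                      const double *x, int n, double h, double *g)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            double xp[64], xm[64];               /* assume n <= 64 for the sketch */
            memcpy(xp, x, n * sizeof(double));
            memcpy(xm, x, n * sizeof(double));
            xp[i] += h;
            xm[i] -= h;
            /* (f(x + h e_i) - f(x - h e_i)) / 2h, truncation error O(h^2) */
            g[i] = (f(xp, n) - f(xm, n)) / (2.0 * h);
        }
    }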


International Conference on Parallel Processing | 2011

High-performance numerical optimization on multicore clusters

Panagiotis E. Hadjidoukas; Constantinos Voglis; Vassilios V. Dimakopoulos; Isaac E. Lagaris; D.G. Papageorgiou

This paper presents a software infrastructure for high-performance numerical optimization on clusters of multicore systems. At the core, a runtime system implements a programming and execution environment for irregular and adaptive task-based parallelism. Building on this, we extract and exploit the parallelism of a global optimization application at multiple levels, including Hessian calculations and Newton-based local optimizations. We discuss parallel implementation details and task distribution schemes for managing nested parallelism. Finally, we report experimental performance results for all the components of our software system on a multicore cluster.
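One of the parallelism levels mentioned above, the Hessian calculation, can be sketched with OpenMP tasks: when the Hessian is approximated from gradient values by forward differences, each column is an independent task. The names below are illustrative, and the paper builds on its own runtime system rather than plain OpenMP.

    #include <omp.h>
    #include <string.h>

    void hessian_from_gradients(void (*grad)(const double *x, int n, double *g),
                                const double *x, int n, double h, double *H)
    {
        double g0[64];                       /* assume n <= 64 for this sketch */
        grad(x, n, g0);                      /* gradient at the base point x   */

        #pragma omp parallel
        #pragma omp single
        for (int j = 0; j < n; j++) {
            #pragma omp task firstprivate(j)
            {
                double xj[64], gj[64];
                memcpy(xj, x, n * sizeof(double));
                xj[j] += h;
                grad(xj, n, gj);                   /* gradient at x + h e_j     */
                for (int i = 0; i < n; i++)        /* forward-difference column */
                    H[i*n + j] = (gj[i] - g0[i]) / h;
            }
        }
    }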


International Conference on High Performance Computing and Simulation | 2011

Task-parallel global optimization with application to protein folding

Constantinos Voglis; Panagiotis E. Hadjidoukas; Vassilios V. Dimakopoulos; Isaac E. Lagaris; D.G. Papageorgiou

This paper presents a software framework for high performance numerical global optimization. At the core, a runtime library implements a programming environment for irregular and adaptive task-based parallelism. Building on this, we extract and exploit the multilevel parallelism of a global optimization application that is based on numerical differentiation and Newton-based local optimizations. Our framework is used in the efficient parallelization of a real application case that concerns the protein folding problem. The experimental evaluation presents performance results of our software system on a multicore cluster.
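At the coarse level, the independent Newton-based local optimizations themselves can be expressed as tasks. A hedged sketch follows, with local_optimize() and the data layout standing in for the framework's actual routines:

    #include <omp.h>

    #define NSTART 64      /* number of starting points                     */
    #define DIM    90      /* e.g. dihedral angles of a small protein model */

    extern double local_optimize(double *x, int n);   /* returns the minimum found */

    void multistart_tasks(double starts[NSTART][DIM], double fmin[NSTART])
    {
        #pragma omp parallel
        #pragma omp single
        for (int s = 0; s < NSTART; s++) {
            #pragma omp task firstprivate(s)
            fmin[s] = local_optimize(starts[s], DIM);
        }
        /* implicit barrier at the end of the parallel region: fmin[] is ready here */
    }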


Computer Physics Communications | 2015

p-MEMPSODE: Parallel and irregular memetic global optimization

Constantinos Voglis; Panagiotis E. Hadjidoukas; Konstantinos E. Parsopoulos; D.G. Papageorgiou; Isaac E. Lagaris; Michael N. Vrahatis

A parallel memetic global optimization algorithm suitable for shared-memory multicore systems is proposed and analyzed. The considered algorithm combines two well-known and widely used population-based stochastic algorithms, namely Particle Swarm Optimization and Differential Evolution, with two efficient and parallelizable local search procedures. The sequential version of the algorithm was first introduced as MEMPSODE (MEMetic Particle Swarm Optimization and Differential Evolution) and published in the CPC program library. We exploit the inherent and highly irregular parallelism of the memetic global optimization algorithm by means of a dynamic and multilevel approach based on the OpenMP tasking model. In our case, tasks correspond to local optimization procedures or simple function evaluations. Parallelization occurs at each iteration step of the memetic algorithm without affecting its searching efficiency. The proposed implementation, for the same random seed, reaches the same solution irrespective of being executed sequentially or in parallel. Extensive experimental evaluation has been performed in order to illustrate the speedup achieved on a shared-memory multicore server.

Program summary
Program title: p-MEMPSODE
Catalogue identifier: AEXJ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEXJ_v1_0.html
Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 9950
No. of bytes in distributed program, including test data, etc.: 141503
Distribution format: tar.gz
Programming language: ANSI C
Computer: Workstation
Operating system: Developed under the Linux operating system using the GNU compilers v.4.4.3 (or higher). Uses the OpenMP API and runtime system.
RAM: The code uses O(n × N) internal storage, n being the dimension of the problem and N the maximum population size. The required memory is dynamically allocated.
Word size: 64
Classification: 4.9
Nature of problem: Numerical global optimization of real-valued functions is an indispensable methodology for solving a multitude of problems in science and engineering. Many problems exhibit a number of local and/or global minimizers, expensive function evaluations or require real-time response. In addition, discontinuities of the objective function, non-smooth and deceitful landscapes constitute challenging obstacles for most optimization algorithms.
Solution method: We implement a memetic global optimization algorithm that combines stochastic, population-based methods with deterministic local search procedures. More specifically, the Unified Particle Swarm Optimization and the Differential Evolution algorithms are harnessed with the derivative-free Torczon’s Multi-Directional Search and the gradient-based BFGS method. The produced hybrid algorithms possess inherent parallelism that is exploited efficiently by means of the OpenMP tasking model. Given the same random seed, the proposed implementation reaches the same solution irrespective of being executed sequentially or in parallel.
Restrictions: The current version of the software uses only double precision arithmetic. An OpenMP-enabled (version 3.0 or higher) compiler is required.
Unusual features: The software requires bound constraints on the optimization variables.
Running time: The running time depends on the complexity of the objective function (and its derivatives if used) as well as on the number of available cores. Extensive experimental results demonstrate that the speedup closely approximates ideal values.
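The reproducibility property stated in the summary (same seed, same answer, regardless of the number of threads) can be obtained, for example, by drawing all random numbers serially before any task is spawned, so the task schedule cannot perturb the random stream. The sketch below illustrates this idea only; it is not p-MEMPSODE's actual implementation, and evaluate() is a hypothetical objective.

    #include <omp.h>
    #include <stdlib.h>

    #define N   40          /* population size   */
    #define DIM 10          /* problem dimension */

    extern double evaluate(const double *x, int n);   /* objective, supplied elsewhere */

    void evaluate_population_reproducibly(double x[N][DIM], double fx[N], unsigned seed)
    {
        double noise[N][DIM];
        srand(seed);                                   /* serial, deterministic part */
        for (int i = 0; i < N; i++)
            for (int d = 0; d < DIM; d++)
                noise[i][d] = (double)rand() / RAND_MAX;

        #pragma omp parallel                           /* parallel, order-independent part */
        #pragma omp single
        for (int i = 0; i < N; i++) {
            #pragma omp task firstprivate(i)
            {
                double y[DIM];
                for (int d = 0; d < DIM; d++)          /* perturbed trial point */
                    y[d] = x[i][d] + 0.01 * (noise[i][d] - 0.5);
                fx[i] = evaluate(y, DIM);
            }
        }
    }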


Proceedings of the Seventh International Workshop | 2006

A FRAMEWORK FOR FUZZY EXPERT SYSTEM CREATION

Markos G. Tsipouras; Constantinos Voglis; Isaak A. Lagaris; Dimitrios I. Fotiadis

In this work, a framework for the development and tuning of a fuzzy expert system is proposed. Given an initial set of crisp rules, the methodology consists of three steps: (a) disjunctive normal form representation of the crisp rules, (b) creation of a fuzzy model and (c) tuning of the model using a global optimization method. The proposed methodology is evaluated on the arrhythmia classification problem, using only the RR-interval signal. Expert cardiologists determined an initial set of rules, which is used for the creation of a fuzzy model. Four types of cardiac rhythm are classified: normal sinus rhythm, ventricular flutter/fibrillation, premature ventricular contractions and second-degree (2°) heart block. The results indicate a sensitivity (averaged over all categories) of 94%, a specificity of 98% and a positive predictive value of 94%.
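Step (b) can be illustrated with a small, hedged sketch: a crisp threshold test is replaced by a sigmoid membership function, conjunctions by a product t-norm and disjunctions by max. The rule below and its parameters are invented for illustration; they are not the cardiologists' actual rules.

    #include <math.h>

    /* fuzzified version of the crisp test (x > theta); s controls the steepness
       and is one of the parameters tuned by the global optimization step (c) */
    static double above(double x, double theta, double s)
    {
        return 1.0 / (1.0 + exp(-s * (x - theta)));
    }

    /* example DNF rule with two conjunctive terms over RR-interval features */
    double rule_degree(double rr_mean, double rr_std)
    {
        double term1 = above(rr_mean, 1.2, 10.0) * above(rr_std, 0.15, 20.0);
        double term2 = 1.0 - above(rr_mean, 0.6, 10.0);   /* negated condition */
        return fmax(term1, term2);                        /* disjunction: max  */
    }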


Applied Soft Computing | 2013

A parallel hybrid optimization algorithm for fitting interatomic potentials

Constantinos Voglis; Panagiotis E. Hadjidoukas; D.G. Papageorgiou; Isaac E. Lagaris

In this work, we present the parallel implementation of a hybrid global optimization algorithm assembled specifically to tackle a class of time-consuming interatomic potential fitting problems. The resulting objective function is characterized by large and varying execution times, discontinuity and lack of derivative information. The presented global optimization algorithm corresponds to an irregular, two-level execution task graph where tasks are spawned dynamically. We use the OpenMP tasking model to express the inherent parallelism of the algorithm on shared-memory systems, and a runtime library that implements the execution environment for adaptive task-based parallelism on multicore clusters. We describe in detail the hybrid global optimization algorithm and various parallel implementation issues. The proposed methodology is then applied to a specific instance of the interatomic potential fitting problem for the metal titanium. Extensive numerical experiments indicate that the proposed algorithm achieves the best parallel performance. In addition, its serial implementation performs well and can therefore also be used as a general-purpose optimization algorithm.
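The shape of such a fitting objective can be sketched as a sum of squared deviations between model and reference energies over a set of atomic configurations, with the costly per-configuration evaluations spread over threads under a dynamic schedule because their run times vary widely. model_energy() and the data layout below are hypothetical stand-ins, not the paper's code.

    #include <omp.h>

    typedef struct {
        int     natoms;
        double *coords;     /* 3 * natoms Cartesian coordinates  */
        double  e_ref;      /* reference (e.g. ab initio) energy */
    } Config;

    extern double model_energy(const Config *c, const double *params, int nparams);

    double fit_objective(const Config *cfgs, int ncfg,
                         const double *params, int nparams)
    {
        double sse = 0.0;
        /* dynamic schedule: per-configuration costs vary widely, as noted above */
        #pragma omp parallel for schedule(dynamic) reduction(+:sse)
        for (int k = 0; k < ncfg; k++) {
            double e = model_energy(&cfgs[k], params, nparams);
            double d = e - cfgs[k].e_ref;
            sse += d * d;
        }
        return sse;
    }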


Archive | 2011

Applying PSO and DE on Multi-Item Inventory Problem with Supplier Selection

Grigoris S. Piperagkas; Constantinos Voglis; K. Skouri


Nuclear Physics | 2007

Global minimization in few-body systems

Constantinos Voglis; Isaac E. Lagaris; M. L. Lekala; G. J. Rampho; S.A. Sofianos

