Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sven F. Crone is active.

Publication


Featured research published by Sven F. Crone.


Journal of the Operational Research Society | 2008

Forecasting and operational research: a review

Robert Fildes; Konstantinos Nikolopoulos; Sven F. Crone; Aris A. Syntetos

From its foundation, operational research (OR) has made many substantial contributions to practical forecasting in organizations. Equally, researchers in other disciplines have influenced forecasting practice. Since the last survey articles in JORS, forecasting has developed as a discipline with its own journals. While the effect of this increased specialization has been a narrowing of the scope of OR's interest in forecasting, research from an OR perspective remains vigorous. OR has been more receptive than other disciplines to the specialist research published in the forecasting journals, capitalizing on some of their key findings. In this paper, we identify the particular topics of OR interest over the past 25 years. After a brief summary of the current research in forecasting methods, we examine those topic areas that have grabbed the attention of OR researchers: computationally intensive methods and applications in operations and marketing. Applications in operations have proved particularly important, including the management of inventories and the effects of sharing forecast information across the supply chain. The second area of application is marketing, including customer relationship management using data mining and computer-intensive methods. The paper concludes by arguing that the unique contribution that OR can continue to make to forecasting is through developing models that link the effectiveness of new forecasting methods to the organizational context in which the models will be applied. The benefits of examining the system rather than its separate components are likely to be substantial.


European Journal of Operational Research | 2006

The impact of preprocessing on data mining: an evaluation of classifier sensitivity in direct marketing

Sven F. Crone; Stefan Lessmann; Robert Stahlbock

Corporate data mining faces the challenge of systematic knowledge discovery in large data streams to support managerial decision making. While research in operations research, direct marketing and machine learning focuses on the analysis and design of data mining algorithms, the interaction of data mining with the preceding phase of data preprocessing has not been investigated in detail. This paper investigates the influence of different preprocessing techniques of attribute scaling, sampling, coding of categorical as well as coding of continuous attributes on the classifier performance of decision trees, neural networks and support vector machines. The impact of different preprocessing choices is assessed on a real world dataset from direct marketing using a multifactorial analysis of variance on various performance metrics and method parameterisations. Our case-based analysis provides empirical evidence that data preprocessing has a significant impact on predictive accuracy, with certain schemes proving inferior to competitive approaches. In addition, it is found that (1) selected methods prove almost as sensitive to different data representations as to method parameterisations, indicating the potential for increased performance through effective preprocessing; (2) the impact of preprocessing schemes varies by method, indicating different ‘best practice’ setups to facilitate superior results of a particular method; (3) algorithmic sensitivity towards preprocessing is consequently an important criterion in method evaluation and selection which needs to be considered together with traditional metrics of predictive power and computational efficiency in predictive data mining.
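The sensitivity described above is easy to demonstrate for a distance-based classifier. The sketch below (toy data, not the paper's setup) shows a 1-nearest-neighbour prediction flipping when an attribute measured in euros is min-max scaled:

```python
import math

# Toy training set: (annual spend in EUR, purchase rate in [0, 1]) -> class
train = [((0.0, 0.0), 0), ((1000.0, 1.0), 1)]
query = (100.0, 0.95)

def nn_label(data, q):
    # 1-NN: return the class label of the closest training point
    return min(data, key=lambda rec: math.dist(rec[0], q))[1]

raw_pred = nn_label(train, query)  # the EUR attribute dominates the distance

# Min-max scale every attribute to [0, 1] over train + query
cols = list(zip(*([p for p, _ in train] + [query])))
lows = [min(c) for c in cols]
spans = [max(c) - min(c) for c in cols]

def scale(point):
    return tuple((v - lo) / s for v, lo, s in zip(point, lows, spans))

scaled_pred = nn_label([(scale(p), y) for p, y in train], scale(query))
# raw_pred and scaled_pred disagree: the preprocessing choice alone
# changed the classifier's decision
```

On the raw data the query is assigned class 0 because the unscaled spend attribute swamps the distance; after scaling, both attributes contribute equally and the prediction flips to class 1.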


International Joint Conference on Neural Networks | 2006

Genetic Algorithms for Support Vector Machine Model Selection

Stefan Lessmann; Robert Stahlbock; Sven F. Crone

The support vector machine is a powerful classifier that has been successfully applied to a broad range of pattern recognition problems in various domains, e.g. corporate decision making, text and image recognition or medical diagnosis. Support vector machines belong to the group of semiparametric classifiers. The selection of appropriate parameters, formally known as model selection, is crucial to obtain accurate classification results for a given task. Striving to automate model selection for support vector machines we apply a meta-strategy utilizing genetic algorithms to learn combined kernels in a data-driven manner and to determine all free kernel parameters. The model selection criterion is incorporated into a fitness function guiding the evolutionary process of classifier construction. We consider two types of criteria consisting of empirical estimators or theoretical bounds for the generalization error. We evaluate their effectiveness in an empirical study on four well known benchmark data sets to find that both are applicable fitness measures for constructing accurate classifiers and conducting model selection. However, model selection focuses on finding one best classifier while genetic algorithms are based on the idea of re-combining and mutating a large number of good candidate classifiers to realize further improvements. It is shown that the empirical estimator is the superior fitness criterion in this sense, leading to a greater number of promising models on average.
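The meta-strategy can be sketched as a simple evolutionary loop. In the sketch below the fitness function is a synthetic surrogate with a known optimum, standing in for the empirical estimator (e.g. cross-validated error) or theoretical bound that would score an SVM with the encoded kernel parameters; the chromosome layout (log2 C, log2 gamma) and all values are illustrative:

```python
import random

random.seed(0)  # deterministic run for the sketch

def fitness(chrom):
    # Surrogate for the model selection criterion: a smooth bowl with its
    # minimum at (log2 C, log2 gamma) = (3.0, -2.0). In the paper this
    # would be an SVM generalization-error estimate or bound.
    c, g = chrom
    return (c - 3.0) ** 2 + (g + 2.0) ** 2

def mutate(chrom, sigma=0.3):
    # Gaussian perturbation of every kernel parameter
    return tuple(v + random.gauss(0.0, sigma) for v in chrom)

def crossover(a, b):
    # Uniform crossover: each gene taken from either parent
    return tuple(random.choice(pair) for pair in zip(a, b))

# Initial population of random parameter settings
pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(20)]
for _ in range(60):
    pop.sort(key=fitness)
    elite = pop[:5]  # keep the best candidates unchanged
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(15)]

best = min(pop, key=fitness)
```

Because the elite is carried over unchanged, the best fitness never worsens; after 60 generations the population has converged close to the optimum of the surrogate criterion.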


Journal of Intelligent Systems | 2005

Stepwise Selection of Artificial Neural Network Models for Time Series Prediction

Sven F. Crone

Various heuristic approaches have been proposed to limit design complexity and computing time in artificial neural network modelling, parameterisation and selection for time series prediction. However, no single approach demonstrates robust superiority on arbitrary datasets, causing additional decision problems and a trial-and-error approach to network modelling. To reflect this, we propose an extensive modelling approach exploiting available computational power to generate a multitude of models. This shifts the emphasis from evaluating different heuristic rules towards the valid and reliable selection of a single network architecture from a population of models, a common problem domain in forecasting competitions in general and the evaluation of hybrid systems of computational intelligence versus conventional methods. Experimental predictions are computed for the airline passenger data using variants of a multilayer perceptron trained with backpropagation to minimize a mean squared error objective function, deriving a robust selection rule for superior prediction results.


Knowledge Discovery and Data Mining | 2005

Utility based data mining for time series analysis: cost-sensitive learning for neural network predictors

Sven F. Crone; Stefan Lessmann; Robert Stahlbock

In corporate data mining applications, cost-sensitive learning is firmly established for predictive classification algorithms. Conversely, data mining methods for regression and time series analysis generally disregard economic utility and apply simple accuracy measures. Methods from statistics and computational intelligence alike minimise a symmetric statistical error, such as the sum of squared errors, to model ordinary least squares predictors. However, applications in business elucidate that real forecasting problems contain non-symmetric errors. The costs arising from over- versus underprediction are dissimilar for errors of identical magnitude, requiring an ex-post correction of the prediction to derive valid decisions. To reflect this, an asymmetric cost function is developed and employed as the objective function for neural network training, deriving superior forecasts and a cost efficient decision. Experimental results for a business scenario of inventory levels are computed using a multilayer perceptron trained with different objective functions, evaluating the performance in competition to statistical forecasting methods.
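The effect of asymmetric costs can be illustrated with a "linlin" (asymmetric linear) cost function, a common choice for this kind of objective; the cost coefficients and demand figures below are illustrative, not taken from the paper:

```python
def linlin_cost(actual, forecast, c_under=4.0, c_over=1.0):
    # Underprediction (lost sales) costs 4 per unit, overprediction
    # (holding stock) costs 1 per unit: errors of identical magnitude
    # cause different costs.
    err = actual - forecast
    return c_under * err if err > 0 else -c_over * err

demand = [96, 98, 100, 102, 104]  # observed demand, toy values

def total_cost(forecast):
    # Total linlin cost of issuing the same forecast for every observation
    return sum(linlin_cost(a, forecast) for a in demand)

# The cost-minimal forecast is not the mean (100) but is shifted upwards,
# towards the c_under / (c_under + c_over) = 0.8 quantile of demand.
best_forecast = min(range(90, 111), key=total_cost)
```

Minimising such a function directly during neural network training biases the forecasts in the same direction, which is what removes the need for an ex-post correction of a symmetric-error forecast.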


International Joint Conference on Neural Networks | 2006

Forecasting with Computational Intelligence - An Evaluation of Support Vector Regression and Artificial Neural Networks for Time Series Prediction

Sven F. Crone; Stefan Lessmann; Swantje Pietsch

Recently, novel algorithms of support vector regression and neural networks have received increasing attention in time series prediction. While they offer attractive theoretical properties, they have demonstrated only mixed results within real world application domains of particular time series structures and patterns. Commonly, time series are composed of a combination of regular patterns such as levels, trends and seasonal variations. Thus, the capability of novel methods to predict basic time series patterns is of particular relevance in evaluating their initial contribution to forecasting. This paper investigates the accuracy of competing forecasting methods of NN and SVR through an exhaustive empirical comparison of alternatively tuned candidate models on 36 artificial time series. Results obtained show that SVR and NN provide comparable accuracy and robustly outperform statistical methods on selected time series patterns.


International Conference on Artificial Intelligence in Theory and Practice | 2006

A study on the ability of Support Vector Regression and Neural Networks to Forecast Basic Time Series Patterns

Sven F. Crone; Jose A. Guajardo; Richard Weber

Recently, novel learning algorithms such as Support Vector Regression (SVR) and Neural Networks (NN) have received increasing attention in forecasting and time series prediction, offering attractive theoretical properties and successful applications in several real world problem domains. Commonly, time series are composed of the combination of regular and irregular patterns such as trends and cycles, seasonal variations, level shifts, outliers or pulses and structural breaks, among others. Conventional parametric statistical methods are capable of forecasting a particular combination of patterns through ex ante selection of an adequate model form and specific data preprocessing. Thus, the capability of semi-parametric methods from computational intelligence to predict basic time series patterns without model selection and preprocessing is of particular relevance in evaluating their contribution to forecasting. This paper proposes an empirical comparison between NN and SVR models using radial basis function (RBF) and linear kernel functions, by analyzing their predictive power on five artificial time series: stationary, additive seasonality, linear trend, linear trend with additive seasonality, and linear trend with multiplicative seasonality. Results obtained show that RBF SVR models have problems in extrapolating trends, while NN and linear SVR models without data preprocessing provide robust accuracy across all patterns and clearly outperform the commonly used RBF SVR on trended time series.
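Why RBF kernels struggle with trends can be seen with any predictor built from RBF basis functions centred on the training inputs: its output is a weighted average of training targets and is therefore bounded by them. The sketch below uses a Nadaraya-Watson kernel smoother as a stand-in (not SVR itself) and contrasts it with a linear fit on a trended series; all numbers are illustrative:

```python
import math

t = list(range(20))
y = [2.0 * ti for ti in t]  # a pure linear trend, slope 2

def rbf_predict(q, width=2.0):
    # RBF-weighted average of training targets (Nadaraya-Watson smoother):
    # the output can never leave the range of the training targets.
    w = [math.exp(-((q - ti) / width) ** 2) for ti in t]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Ordinary least-squares linear fit y = a*t + b
n = len(t)
mt, my = sum(t) / n, sum(y) / n
a = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y)) \
    / sum((ti - mt) ** 2 for ti in t)
b = my - a * mt

q = 30  # forecast horizon well beyond the training range
rbf_fc = rbf_predict(q)  # saturates near the last observed level
lin_fc = a * q + b       # continues the trend to 60
```

The RBF-based forecast flattens out at roughly the last training value, while the linear model extrapolates the trend correctly; this mirrors the reported failure of RBF SVR on trended series and the robustness of linear-kernel models.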


International Symposium on Neural Networks | 2013

Multivariate k-nearest neighbour regression for time series data — A novel algorithm for forecasting UK electricity demand

Fahad H. Al-Qahtani; Sven F. Crone

The k-nearest neighbour (k-NN) algorithm is one of the most widely used benchmark algorithms in classification, supported by its simplicity and intuitiveness in finding similar instances in multivariate and large-dimensional feature spaces of arbitrary attribute scales. In contrast, only few scientific studies of k-NN exist in forecasting time series data, which have mainly assessed various distance metrics to identify similar univariate time series motifs in past data. In electricity load forecasting, k-NN studies are limited to identifying past motifs of the same dependent variable to match future realisations, in a non-causal approach to forecasting. However, causal information in the form of deterministic calendar information is readily available on past and future time series motifs, allowing the distinction between load profiles of working days, weekends and bank-holidays to be encoded as binary dummy variables, and to be efficiently included in the search for similar neighbours. In this paper, we propose a multivariate k-NN regression method for forecasting the electricity demand in the UK market which utilises binary dummy variables as a second feature to categorise the day being forecasted as a working day or a non-working day. We assess the efficacy of this approach in a robust empirical evaluation using UK electricity load data. The approach shows improvements beyond conventional k-NN approaches and accuracy beyond that of simple statistical benchmark methods.
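The role of the binary dummy variable can be sketched with toy data (illustrative values and distance weighting, not the paper's exact configuration): the working-day flag is weighted so heavily that neighbours are effectively searched only within the same day type:

```python
import math

# Each record: (load profile snippet, working-day flag, next-day peak load)
# Values are toy numbers for illustration only.
history = [
    ([30, 32, 35], 1, 36),  # working days
    ([31, 33, 36], 1, 37),
    ([20, 21, 22], 0, 23),  # non-working days
    ([19, 20, 21], 0, 22),
]

def knn_forecast(profile, working_day, k=2, dummy_weight=100.0):
    # Distance mixes the load-profile distance with a heavy penalty for a
    # mismatched day type, so neighbours share the query's calendar class.
    def dist(rec):
        p, w, _ = rec
        return math.dist(p, profile) + dummy_weight * abs(w - working_day)
    nearest = sorted(history, key=dist)[:k]
    return sum(r[2] for r in nearest) / k  # average target of k neighbours

weekday_fc = knn_forecast([30, 33, 35], 1)
weekend_fc = knn_forecast([30, 33, 35], 0)  # same profile, other day type
```

With an identical load profile, the forecast differs sharply depending on the day-type dummy, because the neighbour search is steered to working-day or non-working-day motifs respectively.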


International Symposium on Neural Networks | 2010

An evaluation of neural network ensembles and model selection for time series prediction

Devon K. Barrow; Sven F. Crone; Nikolaos Kourentzes

Ensemble methods represent an approach to combine a set of models, each capable of solving a given task, but which together produce a composite global model whose accuracy and robustness exceeds that of the individual models. Ensembles of neural networks have traditionally been applied to machine learning and pattern recognition, but more recently to forecasting of time series data. Several methods have been developed to produce neural network ensembles, ranging from taking a simple average of individual model outputs to more complex methods such as bagging and boosting. Which ensemble method is best, what factors affect ensemble performance, under what data conditions ensembles are most useful, and when it is beneficial to use ensembles over model selection are questions that remain unanswered. In this paper we present some initial findings using neural network ensembles based on the mean and median applied to forecast synthetic time series data. We vary factors such as the number of models included in the ensemble and how the models are selected, whether randomly or based on performance. We compare the performance of different ensembles to model selection and present the results.
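The two combination operators compared in the paper are simple to state. With toy forecasts from five hypothetical ensemble members, the sketch below shows the median's robustness to a single diverged model, a property the mean lacks:

```python
import statistics

# Toy point forecasts from five independently trained models; the last
# member has diverged badly during training.
member_forecasts = [102.0, 98.0, 101.0, 99.0, 150.0]
actual = 100.0

mean_fc = statistics.fmean(member_forecasts)   # dragged up by the outlier
median_fc = statistics.median(member_forecasts)  # ignores the outlier
```

Here the mean combination lands at 110.0 while the median stays at 101.0, much closer to the actual value of 100.0; which operator wins in general depends on how often and how badly individual members diverge.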


international conference on neural information processing | 2002

Training artificial neural networks for time series prediction using asymmetric cost functions

Sven F. Crone

Artificial neural network theory generally minimises a standard statistical error, such as the sum of squared errors, to learn relationships from the presented data. However, applications in business have shown that real forecasting problems require alternative error measures. Errors, identical in magnitude, cause different costs. To reflect this, a set of asymmetric cost functions is proposed as novel error functions for neural network training. Consequently, a neural network minimises an asymmetric cost function to derive forecasts considered preeminent regarding the original problem. Some experimental results in forecasting a stationary time series using a multilayer perceptron trained with a linear asymmetric cost function are computed, evaluating the performance in competition to basic forecast methods using various error measures.

Collaboration


Dive into Sven F. Crone's collaborations.

Top Co-Authors


Stefan Lessmann

Humboldt University of Berlin
