Devon K. Barrow
Coventry University
Publications
Featured research published by Devon K. Barrow.
Computers & Education | 2013
Antonija Mitrovic; Stellan Ohlsson; Devon K. Barrow
Tutoring technologies for supporting learning from errors via negative feedback are highly developed and have proven their worth in empirical evaluations. However, observations of empirical tutoring dialogs highlight the importance of positive feedback in the practice of expert tutoring. We hypothesize that positive feedback works by reducing student uncertainty about tentative but correct problem-solving steps. Positive feedback should communicate three pieces of explanatory information: (a) the features of the situation that made the action the correct one, both in general terms and with reference to the specifics of the problem state; (b) a description of the action at a conceptual level; and (c) the important aspect of the change in the problem state brought about by the action. We describe how a positive feedback capability was implemented in a mature, constraint-based tutoring system, SQL-Tutor, which teaches by helping students learn from their errors. Empirical evaluation shows that students who interacted with the augmented version of SQL-Tutor learned at twice the speed of students who interacted with the standard, error-feedback-only version. We compare our approach with alternative techniques for providing positive feedback in intelligent tutoring systems.
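The mechanism described in the abstract lends itself to a compact illustration. The sketch below is a hypothetical, simplified rendering of constraint-based positive feedback, not SQL-Tutor's actual implementation: the Constraint class, the relevance/satisfaction checks and the three-part message template are all assumed names introduced only for illustration.

```python
# Hypothetical sketch of positive feedback in a constraint-based tutor.
# Names (Constraint, positive_feedback) are illustrative, not SQL-Tutor's API.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Constraint:
    """A constraint pairs a relevance condition with a satisfaction condition."""
    cid: int
    relevant: Callable[[dict], bool]    # does this constraint apply to the step?
    satisfied: Callable[[dict], bool]   # did the student's step meet it?
    situation: str                      # (a) features that make the action correct
    concept: str                        # (b) conceptual description of the action
    effect: str                         # (c) change in the problem state it brings about

def positive_feedback(step: dict, constraints: list[Constraint],
                      mastered: set[int]) -> list[str]:
    """Return positive-feedback messages for newly satisfied, relevant constraints."""
    messages = []
    for c in constraints:
        if c.relevant(step) and c.satisfied(step) and c.cid not in mastered:
            mastered.add(c.cid)  # only praise a constraint the first time it is met
            messages.append(f"Well done. {c.situation} {c.concept} {c.effect}")
    return messages
```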
International Symposium on Neural Networks | 2010
Devon K. Barrow; Sven F. Crone; Nikolaos Kourentzes
Ensemble methods combine a set of models, each capable of solving a given task on its own, into a composite global model whose accuracy and robustness exceed those of the individual models. Ensembles of neural networks have traditionally been applied in machine learning and pattern recognition, but more recently have been used for forecasting time series data. Several methods have been developed to produce neural network ensembles, ranging from taking a simple average of individual model outputs to more complex approaches such as bagging and boosting. Which ensemble method is best, which factors affect ensemble performance, under what data conditions ensembles are most useful, and when it is beneficial to use ensembles over model selection are questions that remain unanswered. In this paper we present initial findings using neural network ensembles based on the mean and the median, applied to forecasting synthetic time series data. We vary factors such as the number of models included in the ensemble and how the models are selected, whether randomly or based on performance, and compare the performance of the different ensembles against model selection.
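As an informal illustration of the setup described above, the following sketch builds a small pool of neural networks on a synthetic series and combines their one-step forecasts by the mean and the median, comparing against naive model selection. It assumes scikit-learn's MLPRegressor as the base learner and a simple AR(1) series as stand-in data; neither is taken from the paper.

```python
# Minimal sketch of mean/median neural network ensembles vs. model selection,
# assuming scikit-learn's MLPRegressor and synthetic AR(1) data (not the paper's setup).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.7 * y[t - 1] + rng.normal(scale=0.5)   # synthetic AR(1) series

def lagged(series, n_lags=4):
    """Build lagged-input / one-step-ahead-target pairs."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

X, target = lagged(y)
X_train, y_train, X_test, y_test = X[:-50], target[:-50], X[-50:], target[-50:]

# Ensemble members differ only in their random initialisation
models = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                       random_state=seed).fit(X_train, y_train)
          for seed in range(10)]
preds = np.array([m.predict(X_test) for m in models])

mean_forecast = preds.mean(axis=0)            # mean ensemble
median_forecast = np.median(preds, axis=0)    # median ensemble
best_single = preds[np.argmin([np.mean((m.predict(X_train) - y_train) ** 2)
                               for m in models])]  # select the best in-sample model

for name, f in [("mean", mean_forecast), ("median", median_forecast),
                ("selected", best_single)]:
    print(name, "MSE:", np.mean((f - y_test) ** 2))
```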
Intelligent Tutoring Systems | 2008
Devon K. Barrow; Antonija Mitrovic; Stellan Ohlsson; Michael Grimley
Most existing Intelligent Tutoring Systems (ITSs) are built around cognitive learning theories, such as Ohlsson's theory of learning from performance errors and Anderson's ACT theories of skill acquisition, which focus primarily on providing negative feedback, facilitating learning by correcting errors. Research into the behavior of expert tutors suggests that experienced tutors use positive feedback quite extensively and successfully. This paper investigates positive feedback: learning by capturing and responding to correct behavior, supported by cognitive learning theories. Our aim is to develop and implement a systematic approach to delivering positive feedback in ITSs. We report on an evaluation study done in the context of SQL-Tutor, in which the control group used the original version of the system, giving only negative feedback, while the experimental group received both negative and positive feedback. Results show that students in the experimental group needed significantly less time to solve the same number of problems, in fewer attempts, than those in the control group. Students in the experimental group also learned approximately the same number of concepts as students in the control group, but in much less time. This indicates that positive feedback facilitates learning and improves the effectiveness of learning in ITSs.
International Symposium on Neural Networks | 2013
Devon K. Barrow; Sven F. Crone
In classification, regression and time series prediction alike, cross-validation is widely employed to estimate the expected accuracy of a predictive algorithm by averaging predictive errors across mutually exclusive subsamples of the data. Similarly, bootstrapping aims to increase the validity of estimating the expected accuracy by repeatedly sub-sampling the data with replacement, creating overlapping samples of the data. The estimates are then used to anticipate future risk in decision making, or to guide model selection where multiple candidates are feasible. Beyond error estimation, bootstrapping has recently been extended to combine the diverse models created during estimation by aggregating their predictions (rather than their errors), coined bootstrap aggregation or bagging. Similar extensions of cross-validation to create diverse forecasting models have, however, not been considered. In analogy with bagging, we propose to combine the benefits of cross-validation and forecast aggregation, which we term crogging. We assess different levels of cross-validation, including a (single-fold) hold-out approach, 2-fold and 10-fold cross-validation and Monte Carlo cross-validation, to create diverse base models of neural networks for time series prediction, each trained on a different data subset, and average their individual multiple-step-ahead predictions. Results on the 111 time series of the NN3 competition indicate significant improvements in accuracy through crogging relative to bagging or individual model selection of neural networks.
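A minimal sketch of the crogging idea follows, assuming scikit-learn's KFold and MLPRegressor rather than the paper's exact network setup and multi-step NN3 forecasts: each cross-validation training split trains one network, and the forecasts (not the errors) of all k networks are averaged.

```python
# Hedged sketch of crogging (cross-validation aggregation) for one-step forecasting.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

def crogging_forecast(X_train, y_train, X_test, k=10, seed=0):
    """Train one network per cross-validation training split and average their forecasts."""
    kf = KFold(n_splits=k, shuffle=True, random_state=seed)
    forecasts = []
    for fold, (train_idx, _val_idx) in enumerate(kf.split(X_train)):
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                             random_state=fold)
        model.fit(X_train[train_idx], y_train[train_idx])  # fit on k-1 folds
        forecasts.append(model.predict(X_test))
    return np.mean(forecasts, axis=0)   # aggregate predictions, not errors

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))                      # stand-in lagged inputs
    y = X @ np.array([0.5, -0.3, 0.2, 0.1]) + rng.normal(scale=0.1, size=200)
    print(crogging_forecast(X[:150], y[:150], X[150:], k=10)[:5])
```

Note that each member is fitted on k-1 folds of the training data, so with larger k the training subsets overlap more and each member sees a larger share of the sample than under bagging's resampling with replacement.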
European Journal of Operational Research | 2018
Devon K. Barrow; Nikolaos Kourentzes
A key challenge for call centres remains the forecasting of high frequency call arrivals collected in hourly or shorter time buckets. In addition to the complex intraday, intraweek and intrayear seasonal cycles, call arrival data typically contain a large number of anomalous days, driven by the occurrence of holidays, special events, promotional activities and system failures. This study evaluates a variety of univariate time series forecasting methods for forecasting intraday call arrivals in the presence of such outliers. Apart from established statistical methods, we consider artificial neural networks (ANNs). Based on the modelling flexibility of the latter, we introduce and evaluate different methods to encode the outlying periods. Using intraday arrival series from a call centre operated by one of Europe’s leading entertainment companies, we provide new insights on the impact of outliers on the performance of established forecasting methods. Results show that ANNs forecast call centre data accurately, and are capable of modelling complex outliers using relatively simple outlier modelling approaches. We argue that the relative complexity of ANNs over standard statistical models is offset by the simplicity of coding multiple and unknown effects during outlying periods.
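One simple way to encode outlying periods, sketched below with assumed names and simulated data (not the study's call centre series or its exact encodings), is to add a binary dummy input marking holiday or special-event periods alongside lagged arrivals fed to an ANN.

```python
# Illustrative sketch: a single 0/1 dummy input flags outlying periods for an ANN forecaster.
# Variable names and the simulated arrival pattern are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_features(arrivals, outlier_flags, n_lags=48):
    """Lagged arrivals plus a 0/1 dummy for whether the target period is outlying."""
    X, y = [], []
    for t in range(n_lags, len(arrivals)):
        X.append(np.append(arrivals[t - n_lags:t], outlier_flags[t]))
        y.append(arrivals[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(2)
hours = np.arange(24 * 60)                       # 60 days of hourly buckets
arrivals = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(scale=5, size=hours.size)
outlier_flags = np.zeros(hours.size)
outlier_flags[24 * 20:24 * 21] = 1               # one simulated holiday
arrivals[24 * 20:24 * 21] *= 0.4                 # arrivals drop on that day

X, y = build_features(arrivals, outlier_flags)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:-24], y[:-24])                      # hold out the last day for testing
print("last-day MAE:", np.mean(np.abs(model.predict(X[-24:]) - y[-24:])))
```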
Expert Systems With Applications | 2014
Nikolaos Kourentzes; Devon K. Barrow; Sven F. Crone
International Journal of Production Economics | 2016
Devon K. Barrow; Nikolaos Kourentzes
International Journal of Forecasting | 2016
Devon K. Barrow; Sven F. Crone
International Journal of Forecasting | 2016
Devon K. Barrow; Sven F. Crone
Journal of Business Research | 2016
Devon K. Barrow