Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Theodore B. Trafalis is active.

Publication


Featured research published by Theodore B. Trafalis.


International Symposium on Neural Networks | 2000

Support vector machine for regression and applications to financial forecasting

Theodore B. Trafalis

The main purpose of the paper is to compare the support vector machine (SVM) developed by Cortes and Vapnik (1995) with other techniques such as backpropagation and radial basis function (RBF) networks for financial forecasting applications. The theory of the SVM algorithm is based on statistical learning theory. Training of SVMs leads to a quadratic programming (QP) problem. Preliminary computational results for stock price prediction are also presented.
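
The abstract notes that training an SVM reduces to a quadratic programming problem. For reference, the standard ε-insensitive SVR primal (a textbook formulation, not reproduced from the paper) is

```latex
\begin{aligned}
\min_{w,\,b,\,\xi,\,\xi^{*}} \quad & \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{\ell}\bigl(\xi_{i} + \xi_{i}^{*}\bigr) \\
\text{s.t.} \quad & y_{i} - \langle w, \phi(x_{i})\rangle - b \le \varepsilon + \xi_{i}, \\
& \langle w, \phi(x_{i})\rangle + b - y_{i} \le \varepsilon + \xi_{i}^{*}, \\
& \xi_{i},\ \xi_{i}^{*} \ge 0, \qquad i = 1,\dots,\ell,
\end{aligned}
```

whose dual is a QP in the Lagrange multipliers and admits nonlinear kernels (e.g., RBF) through the feature map φ.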


Decision Support Systems | 2006

A hybrid model for exchange rate prediction

Theodore B. Trafalis

Exchange rate forecasting is an important problem. Several forecasting techniques have been proposed in order to gain some advantage, yet most of them perform either as well as random walk models or slightly worse, which some researchers have argued reflects the efficiency of the exchange market. We propose a two-stage forecasting model that incorporates parametric techniques such as autoregressive integrated moving average (ARIMA), vector autoregression (VAR), and co-integration, together with nonparametric techniques such as support vector regression (SVR) and artificial neural networks (ANN). Comparison of these models shows that input selection is very important. Furthermore, our findings show that the SVR technique outperforms the ANN for the two input selection methods considered.
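
One common way to realize such a two-stage parametric/nonparametric hybrid is to let ARIMA capture the linear structure and fit an SVR to its residuals; the sketch below follows that pattern with a synthetic placeholder series and does not reproduce the paper's actual input selection, VAR, or co-integration steps.

```python
import numpy as np
from sklearn.svm import SVR
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
rate = np.cumsum(rng.normal(0, 0.01, 500)) + 1.3        # placeholder "exchange rate" series

# Stage 1: parametric ARIMA fit on the series
arima = ARIMA(rate, order=(1, 1, 1)).fit()
resid = np.asarray(arima.resid)

# Stage 2: nonparametric SVR on lagged ARIMA residuals (the number of lags is an assumption)
p = 5
X = np.column_stack([resid[i:len(resid) - p + i] for i in range(p)])
y = resid[p:]
svr = SVR(kernel="rbf", C=10.0, epsilon=0.001).fit(X, y)

# One-step-ahead forecast = ARIMA forecast + SVR correction of the next residual
forecast = float(arima.forecast(steps=1)[0]) + float(svr.predict(resid[-p:].reshape(1, -1))[0])
print(forecast)
```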


Optimization Methods & Software | 2007

Robust support vector machines for classification and computational issues

Theodore B. Trafalis; Robin C. Gilbert

In this paper, we investigate the theoretical and numerical aspects of robust classification using support vector machines (SVMs) by providing second-order cone programming and linear programming formulations. SVMs are learning algorithms introduced by Vapnik that are used for either classification or regression. They show good generalization properties and are grounded in statistical learning theory. The resulting learning problems are convex optimization problems suitable for primal-dual interior-point methods. We investigate the training of an SVM in the case where a bounded perturbation is added to the value of an input xᵢ ∈ ℝⁿ. A robust SVM provides a decision function that is immune to such data perturbations. We consider both the linearly separable and the nonlinearly separable cases and provide computational results for real data sets.
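
As a point of reference (a commonly used robust soft-margin formulation, not copied from the paper), if each input xᵢ may be perturbed by δᵢ with ‖δᵢ‖₂ ≤ η, enforcing the margin under the worst-case perturbation gives second-order cone constraints:

```latex
\begin{aligned}
\min_{w,\,b,\,\xi} \quad & \tfrac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{\ell}\xi_{i} \\
\text{s.t.} \quad & y_{i}\bigl(\langle w, x_{i}\rangle + b\bigr) \ge 1 - \xi_{i} + \eta\,\lVert w\rVert_{2}, \\
& \xi_{i} \ge 0, \qquad i = 1,\dots,\ell,
\end{aligned}
```

since the worst case of y_i(⟨w, x_i + δ_i⟩ + b) over ‖δ_i‖₂ ≤ η equals y_i(⟨w, x_i⟩ + b) − η‖w‖₂.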


International Journal of General Systems | 2008

Short term forecasting with support vector machines and application to stock price prediction

Theodore B. Trafalis

Forecasting stock price movements is one of the most difficult problems in finance, because financial time series are complex and non-stationary. It is also very difficult to predict such movements with parametric models. Instead, we propose two techniques that are data-driven and nonparametric. Based on the idea that excess returns would be possible with publicly available information, we developed two models to forecast short-term price movements using technical indicators. Our assumption is that the future value of a stock price depends on these financial indicators, although there is no parametric model to explain this relationship; the relationship comes from technical analysis. Comparison shows that support vector regression (SVR) outperforms multilayer perceptron (MLP) networks for short-term prediction in terms of mean square error. If the risk premium is used as the comparison criterion, then the SVR technique is as good as or better than the MLP method.
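
A minimal sketch of the SVR-versus-MLP comparison by mean square error is given below; the synthetic price series and the two technical indicators used as inputs (a moving average and a momentum term) are placeholders, not the indicators from the paper.

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
price = np.cumsum(rng.normal(0, 1, 600)) + 100                   # placeholder price series

win = 10
ma = np.convolve(price, np.ones(win) / win, mode="valid")        # moving average
mom = price[win - 1:] - price[:len(price) - win + 1]             # momentum over the window
X = np.column_stack([ma[:-1], mom[:-1]])                         # indicators at time t
y = price[win:]                                                  # price at time t + 1

split = int(0.8 * len(X))                                        # simple time-ordered split
models = {
    "SVR": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.1)),
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)),
}
for name, model in models.items():
    model.fit(X[:split], y[:split])
    print(name, "MSE:", mean_squared_error(y[split:], model.predict(X[split:])))
```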


Computational Statistics & Data Analysis | 2011

Robust weighted kernel logistic regression in imbalanced and rare events data

Maher Maalouf; Theodore B. Trafalis

Recent developments in computing and technology, along with the availability of large amounts of raw data, have contributed to the creation of many effective techniques and algorithms in the fields of pattern recognition and machine learning. The main objectives for developing these algorithms include identifying patterns within the available data or making predictions, or both. Great success has been achieved with many classification techniques in real-life applications. With regard to binary data classification in particular, analysis of data containing rare events or disproportionate class distributions poses a great challenge to industry and to the machine learning community. This study examines rare events (REs) with binary dependent variables containing many more non-events (zeros) than events (ones). These variables are difficult to predict and to explain as has been evidenced in the literature. This research combines rare events corrections to Logistic Regression (LR) with truncated Newton methods and applies these techniques to Kernel Logistic Regression (KLR). The resulting model, Rare Event Weighted Kernel Logistic Regression (RE-WKLR), is a combination of weighting, regularization, approximate numerical methods, kernelization, bias correction, and efficient implementation, all of which are critical to enabling RE-WKLR to be an effective and powerful method for predicting rare events. Comparing RE-WKLR to SVM and TR-KLR, using non-linearly separable, small and large binary rare event datasets, we find that RE-WKLR is as fast as TR-KLR and much faster than SVM. In addition, according to the statistical significance test, RE-WKLR is more accurate than both SVM and TR-KLR.
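
The sketch below is a rough stand-in for weighted kernel logistic regression: an RBF kernel matrix serves as the feature representation and class weighting compensates for the imbalance. It is not the authors' RE-WKLR (there is no bias correction and no truncated-Newton solver); it only illustrates the weighting-plus-kernelization idea on synthetic imbalanced data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data: roughly 5% events (ones) and 95% non-events (zeros)
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

K_tr = rbf_kernel(X_tr, X_tr, gamma=0.1)          # kernelization via an explicit kernel matrix
K_te = rbf_kernel(X_te, X_tr, gamma=0.1)

# L2-regularized logistic regression on kernel columns, up-weighting the rare class
klr = LogisticRegression(C=1.0, class_weight="balanced", max_iter=5000)
klr.fit(K_tr, y_tr)
print("event recall:", (klr.predict(K_te)[y_te == 1] == 1).mean())
```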


IIE Transactions | 2007

Kernel principal component analysis and support vector machines for stock price prediction

Theodore B. Trafalis

Technical indicators are used with two heuristic models, kernel principal component analysis and factor analysis, to identify the most influential inputs for a forecasting model. Multilayer perceptron (MLP) networks and support vector regression (SVR) are used with different inputs. We assume that the future value of a stock price/return depends on the financial indicators, although there is no parametric model to explain this relationship, which comes from technical analysis. Comparison studies show that SVR and MLP networks require different inputs. Furthermore, the proposed heuristic models produce better results than the studied data mining methods. In addition, there is no difference between the MLP networks and the SVR technique when their mean square error values are compared.
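
A minimal sketch of kernel principal component analysis as a feature-extraction step ahead of SVR is shown below; the indicator matrix and target are random placeholders, and the component count and kernel parameters are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 15))                        # 15 placeholder technical indicators
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=300)    # placeholder return series

model = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=5, kernel="rbf", gamma=0.1),   # keep 5 nonlinear components
    SVR(kernel="rbf", C=10.0),
)
model.fit(X, y)
print(model.predict(X[:3]))
```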


Computers & Industrial Engineering | 2002

Data mining techniques for improved WSR-88D rainfall estimation

Theodore B. Trafalis; Michael B. Richman; A. White; Budi Santosa

The main objective of this paper is to utilize data mining and an intelligent system, Artificial Neural Networks (ANNs), to facilitate rainfall estimation. Ground truth rainfall data are necessary to apply intelligent systems techniques, and a unique source of such data is the Oklahoma Mesonet. Recently, with the advent of a national network of advanced radars (i.e. WSR-88D), massive archived data sets have been created, generating terabytes of data. Data mining can draw attention to meaningful structures in the archives of such radar data, particularly if guided by knowledge of how the atmosphere operates in rain-producing systems. The WSR-88D digital database contains three native variables: velocity, reflectivity, and spectrum width. However, current rainfall detection algorithms make use of only the reflectivity variable, leaving the other two to be exploited. The primary focus of the research is to capitalize on these additional radar variables at multiple elevation angles and multiple bins in the horizontal for precipitation prediction. Linear regression models and feedforward ANNs are used for precipitation prediction, with rainfall totals from the Oklahoma Mesonet used for the training and verification data. Results for the linear modeling suggest that, taken separately, reflectivity and spectrum width models are highly significant; however, when the two are combined in one linear model, they are not significantly more accurate than reflectivity alone. All linear models are prone to underprediction when heavy rainfall occurs. The ANN results for reflectivity and spectrum width inputs show that a 250-5-1 architecture is least prone to underprediction of heavy rainfall amounts. When a three-part ANN, stratified by light, moderate, and heavy rainfall, was applied to reflectivity in addition to spectrum width, it estimated rainfall amounts most accurately of all the methods examined.
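
A sketch of the 250-5-1 feedforward architecture mentioned above (250 radar-derived inputs, one hidden layer of 5 units, one rainfall output) is given below; the data are random placeholders, since the Oklahoma Mesonet and WSR-88D archives are not bundled here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 250))                # placeholder reflectivity / spectrum-width bins
rain = np.maximum(0.0, X[:, :10].sum(axis=1) + rng.normal(size=1000))  # placeholder totals

net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000, random_state=0),  # 250-5-1 network
)
net.fit(X, rain)
print(net.predict(X[:3]))
```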


Archive | 2013

Linear Discriminant Analysis

Petros Xanthopoulos; Panos M. Pardalos; Theodore B. Trafalis

In this chapter we discuss another popular data mining algorithm that can be used for supervised or unsupervised learning. Linear Discriminant Analysis (LDA) was proposed by R. Fisher in 1936. It consists in finding the projection hyperplane that minimizes the intraclass (within-class) variance and maximizes the distance between the projected means of the classes. Similarly to PCA, these two objectives can be achieved by solving an eigenvalue problem, with the corresponding eigenvector defining the hyperplane of interest. This hyperplane can be used for classification, dimensionality reduction, and interpretation of the importance of the given features. In the first part of the chapter we discuss the generic formulation of LDA, whereas in the second we present the robust counterpart scheme originally proposed by Kim and Boyd. We also discuss the nonlinear extension of LDA through the kernel transformation.
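
For reference, in standard textbook notation (not necessarily the chapter's own), the two-class Fisher criterion and its scatter matrices are

```latex
J(w) = \frac{w^{\top} S_{B}\, w}{w^{\top} S_{W}\, w},
\qquad
S_{B} = (\mu_{1} - \mu_{2})(\mu_{1} - \mu_{2})^{\top},
\qquad
S_{W} = \sum_{c=1}^{2} \sum_{x_{i} \in \mathcal{C}_{c}} (x_{i} - \mu_{c})(x_{i} - \mu_{c})^{\top},
```

and maximizing J(w) leads to the generalized eigenvalue problem S_B w = λ S_W w, whose dominant eigenvector gives the projection direction; in the two-class case this reduces to w ∝ S_W⁻¹(μ₁ − μ₂).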


Systems and Information Engineering Design Symposium | 2004

Learning from student data

Kash Barker; Theodore B. Trafalis; T.R. Rhoads

An abundance of information is contained on every college campus. Many academic, demographic, and attitudinal variables are gathered for every student who steps on campus. Despite all this information, colleges still struggle with graduation rates; this is an apt example of an overload of information but a starvation of knowledge. This paper introduces the use of neural networks and support vector machines, both nonlinear discriminant methods, for classifying student graduation behavior from several academic, demographic, and attitudinal variables maintained about students at the University of Oklahoma.
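
A minimal sketch of the two nonlinear discriminant methods applied to a graduation-style classification task is shown below; the features and labels are synthetic stand-ins for the academic, demographic, and attitudinal variables, and no actual student data are used.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder stand-ins for academic/demographic/attitudinal variables and graduation labels
X, graduated = make_classification(n_samples=500, n_features=8, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("NN", MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))]:
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, graduated, cv=5).mean()
    print(name, "cross-validated accuracy:", round(acc, 2))
```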


Journal of Global Optimization | 2002

A novel metaheuristics approach for continuous global optimization

Theodore B. Trafalis; Suat Kasap

This paper proposes a novel metaheuristics approach to find the global optimum of continuous global optimization problems with box constraints. The approach combines characteristics of modern metaheuristics such as scatter search (SS), genetic algorithms (GAs), and tabu search (TS), and is named the hybrid scatter genetic tabu (HSGT) search. The development of the HSGT search, its parameter settings, experimentation, and efficiency are discussed. The HSGT search has been tested against a simulated annealing algorithm, a GA under the name GENOCOP, and a modified version of a hybrid scatter genetic (HSG) search on 19 well-known test functions. Applications to neural network training are also examined. The computational results show the HSGT search to be quite effective at identifying the global optimum, which makes it a promising approach for solving the general nonlinear optimization problem.
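
The skeleton below illustrates how scatter-search recombination, GA-style mutation, and a tabu memory can be combined for box-constrained continuous minimization. It is an illustrative sketch only, not the authors' HSGT algorithm; the operators, parameters, and the Rosenbrock test function are assumptions.

```python
import numpy as np

def hybrid_search(f, lower, upper, pop_size=20, iters=2000, tabu_size=50, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    pop = rng.uniform(lower, upper, size=(pop_size, dim))      # reference set of solutions
    fitness = np.array([f(x) for x in pop])
    tabu = []                                                  # recently visited points (coarsened)

    for _ in range(iters):
        # Scatter-search-style recombination of two reference solutions
        i, j = rng.choice(pop_size, size=2, replace=False)
        child = pop[i] + rng.uniform(-0.5, 1.5) * (pop[j] - pop[i])
        # GA-style Gaussian mutation, then clip back into the box
        child += rng.normal(scale=0.05 * (upper - lower), size=dim)
        child = np.clip(child, lower, upper)
        key = tuple(np.round(child, 2))
        if key in tabu:                                        # tabu move: skip revisited regions
            continue
        tabu.append(key)
        if len(tabu) > tabu_size:
            tabu.pop(0)
        fc = f(child)
        worst = np.argmax(fitness)
        if fc < fitness[worst]:                                # keep the child if it improves the set
            pop[worst], fitness[worst] = child, fc

    best = np.argmin(fitness)
    return pop[best], fitness[best]

# Example: minimize the 2-D Rosenbrock function on the box [-5, 5]^2
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
x_best, f_best = hybrid_search(rosen, [-5, -5], [5, 5])
print(x_best, f_best)
```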

Collaboration


Dive into Theodore B. Trafalis's collaborations.

Top Co-Authors

Budi Santosa

Sepuluh Nopember Institute of Technology


Nicolas Couellan

Institut de Mathématiques de Toulouse


Petros Xanthopoulos

University of Central Florida
