Gary J. Koehler
College of Business Administration
Publications
Featured research published by Gary J. Koehler.
IEEE Transactions on Engineering Management | 1994
Haldun Aytug; Siddhartha Bhattacharyya; Gary J. Koehler; Jane L. Snowdon
This paper has two primary purposes: to motivate the need for machine learning in scheduling systems and to survey work on machine learning in scheduling. To motivate that need, we first briefly argue for scheduling systems that employ artificial intelligence methods. This, in turn, leads to a need for incorporating adaptive methods, that is, learning.
European Journal of Operational Research | 2008
Steven O. Kimbrough; Gary J. Koehler; Ming Lu; David Harlan Wood
We explore data-driven methods for gaining insight into the dynamics of a two-population genetic algorithm (GA), which has been effective in tests on constrained optimization problems. We track and compare one population of feasible solutions and another population of infeasible solutions. Feasible solutions are selected and bred to improve their objective function values. Infeasible solutions are selected and bred to reduce their constraint violations. Interbreeding between populations is completely indirect, that is, only through their offspring that happen to migrate to the other population. We introduce an empirical measure of distance, and apply it between individuals and between population centroids to monitor the progress of evolution. We find that the centroids of the two populations approach each other and stabilize. This is a valuable characterization of convergence. We find the infeasible population influences, and sometimes dominates, the genetic material of the optimum solution. Since the infeasible population is not evaluated by the objective function, it is free to explore boundary regions, where the optimum is likely to be found. Roughly speaking, the No Free Lunch theorems for optimization show that all black-box algorithms (such as genetic algorithms) have the same average performance over the set of all problems. As such, our algorithm would, on average, be no better than random search or any other black-box search method. However, we provide two general theorems that give conditions that render null the No Free Lunch results for the constrained optimization problem class we study. The approach taken here thereby escapes the No Free Lunch implications, per se.
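The two-population scheme described above can be sketched in a few lines. This is a minimal illustration on a hypothetical toy problem (maximize the number of ones in a binary string subject to a cardinality cap), not the authors' implementation: feasible offspring join the feasible population, infeasible offspring join the other, so interbreeding happens only through migrating children.

```python
import random

# Toy constrained problem (hypothetical, for illustration only):
# maximize sum(x) over binary strings subject to sum(x) <= CAP.
L, CAP, POP = 12, 6, 30
random.seed(0)

def violation(x):           # constraint violation (0 means feasible)
    return max(0, sum(x) - CAP)

def objective(x):
    return sum(x)

def breed(pop, fitness):
    """Tournament selection, one-point crossover, single bit-flip mutation."""
    def pick():
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    p, q = pick(), pick()
    cut = random.randrange(1, L)
    child = p[:cut] + q[cut:]
    i = random.randrange(L)
    return child[:i] + (1 - child[i],) + child[i + 1:]

feasible   = [tuple(0 for _ in range(L)) for _ in range(POP)]  # all feasible
infeasible = [tuple(1 for _ in range(L)) for _ in range(POP)]  # all infeasible

for gen in range(100):
    # feasible parents bred on objective value, infeasible on (negated) violation
    kids = [breed(feasible, objective) for _ in range(POP // 2)] + \
           [breed(infeasible, lambda x: -violation(x)) for _ in range(POP // 2)]
    for child in kids:       # offspring "migrate" to the matching population
        if violation(child) == 0:
            feasible.append(child)
        else:
            infeasible.append(child)
    # keep populations at a fixed size, truncating the worst members
    feasible   = sorted(feasible, key=objective, reverse=True)[:POP]
    infeasible = sorted(infeasible, key=violation)[:POP]

best = max(feasible, key=objective)
print(objective(best))       # best feasible objective value found
```

Because only the feasible population is ranked by the objective, the infeasible population is free to drift along the constraint boundary, mirroring the behavior the abstract describes.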
Informs Journal on Computing | 1996
Haldun Aytug; Gary J. Koehler
Considerable empirical evidence has been reported on the computational performance of genetic algorithms, but little is known about their convergence behavior or about stopping criteria. In this paper we derive bounds on the number of iterations required to guarantee, with a specified level of confidence, that a genetic algorithm has seen all populations and, hence, an optimal solution.
European Journal of Operational Research | 2000
Haldun Aytug; Gary J. Koehler
Genetic Algorithms have been successfully applied in a wide variety of problems. Although widely used, there are few theoretical guidelines for determining when to terminate the search. One result by Aytug and Koehler provides a loose bound on the number of GA generations needed to see all populations (and hence, an optimal solution) with a specified probability. In this paper we derive a tighter bound on the number of iterations required to guarantee, with a specified level of confidence, that a Genetic Algorithm has seen all strings (and, hence, an optimal solution).
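The flavor of such stopping bounds can be conveyed with a simple union-bound calculation. This is a deliberately simplified sketch, not the bound derived in either paper: it assumes only that bit-wise mutation with rate `mu` can turn any individual into any fixed target string of length `ell` with probability at least `min(mu, 1 - mu) ** ell` per individual per generation.

```python
import math

def stopping_bound(ell, n, mu, alpha):
    """Generations t after which every one of the 2**ell strings has
    appeared with probability at least alpha.

    Simplified union-bound argument (illustrative only, NOT the exact
    bound from the paper): mutation alone generates any fixed target
    string with probability at least p = min(mu, 1-mu)**ell per
    individual, so with n individuals per generation the chance a given
    string never appears in t generations is at most (1-p)**(n*t).
    Requiring 2**ell * (1-p)**(n*t) <= 1 - alpha and solving for t
    gives the bound below.
    """
    p = min(mu, 1.0 - mu) ** ell
    t = math.log((1.0 - alpha) / 2 ** ell) / (n * math.log(1.0 - p))
    return math.ceil(t)

# Even for short strings the guarantee requires astronomically many
# generations, which is why such bounds are characterized as loose.
print(stopping_bound(ell=8, n=50, mu=0.05, alpha=0.99))
```

Raising the confidence level `alpha` or the string length `ell` drives the bound up sharply, matching the intuition that seeing *every* string is a far stronger requirement than merely finding a good one.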
European Journal of Operational Research | 2004
Mu Xia; Gary J. Koehler; Andrew B. Whinston
Single-item auctions have many desirable properties. Mechanisms exist to ensure optimality, incentive compatibility and market-clearing prices. When multiple items are offered through individual auctions, a bidder wanting a bundle of items faces an exposure problem if the bidder places a high value on a combination of goods but a low value on strict subsets of the desired collection. To remedy this, combinatorial auctions permit bids on bundles of goods. However, combinatorial auctions are hard to optimize and may not have incentive compatible mechanisms or market-clearing individual item prices. Several papers give approaches to provide incentive compatibility and imputed, individual prices. We find the relationships between these approaches and analyze their advantages and disadvantages.
Decision Support Systems | 2008
Hong Guo; Juheng Zhang; Gary J. Koehler
This paper presents an overview and survey of a new type of game-theoretic setting based on ideas emanating from quantum computing. (We provide a brief overview of quantum computing at the beginning of the paper.) Initial results suggest this view brings more flexibility and possibilities into decisions involving game-theoretic considerations. Applications cover a broad spectrum of classical games as well as games in economics, finance and other areas.
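As a concrete instance of the quantum games surveyed, here is a small NumPy sketch of the Eisert–Wilkens–Lewenstein (EWL) quantization of the Prisoner's Dilemma, a standard example in this literature. The payoff matrix and operator parameterization below are the usual EWL choices, not taken from this paper.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X

def U(theta, phi):
    """One player's strategy operator in the EWL parameterization."""
    return np.array([[np.exp(1j * phi) * np.cos(theta / 2), np.sin(theta / 2)],
                     [-np.sin(theta / 2), np.exp(-1j * phi) * np.cos(theta / 2)]])

C, D = U(0, 0), U(np.pi, 0)   # classical cooperate / defect

# Prisoner's Dilemma payoffs for outcomes |00>, |01>, |10>, |11>
PAYOFF = {0b00: (3, 3), 0b01: (0, 5), 0b10: (5, 0), 0b11: (1, 1)}

def payoffs(Ua, Ub, gamma=np.pi / 2):
    """Expected payoffs for |psi> = J^dag (Ua x Ub) J |00>, where the
    entangling operator is J = exp(i*gamma/2 * X x X)."""
    J = np.cos(gamma / 2) * np.kron(I2, I2) + 1j * np.sin(gamma / 2) * np.kron(X, X)
    psi = J.conj().T @ np.kron(Ua, Ub) @ J @ np.array([1, 0, 0, 0], dtype=complex)
    probs = np.abs(psi) ** 2
    pa = sum(probs[k] * PAYOFF[k][0] for k in range(4))
    pb = sum(probs[k] * PAYOFF[k][1] for k in range(4))
    return pa, pb

# With maximal entanglement the classical strategies still reproduce the
# classical outcomes, e.g. mutual defection yields (1, 1)...
print(payoffs(D, D))        # approximately (1.0, 1.0)
# ...but the quantum strategy Q = U(0, pi/2) escapes the dilemma:
Q = U(0, np.pi / 2)
print(payoffs(Q, Q))        # approximately (3.0, 3.0)
```

This is the sense in which quantization "brings more flexibility" into the game: the enlarged strategy space contains equilibria unavailable to classical players.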
Management Science | 2010
Mark Cecchini; Haldun Aytug; Gary J. Koehler; Praveen Pathak
This paper provides a methodology for detecting management fraud using basic financial data. The methodology is based on support vector machines. An important aspect therein is a kernel that increases the power of the learning machine by allowing an implicit and generally nonlinear mapping of points, usually into a higher dimensional feature space. A kernel specific to the domain of finance is developed. This financial kernel constructs features shown in prior research to be helpful in detecting management fraud. A large empirical data set was collected, which included quantitative financial attributes for fraudulent and nonfraudulent public companies. Support vector machines using the financial kernel correctly labeled 80% of the fraudulent cases and 90.6% of the nonfraudulent cases on a holdout set. Furthermore, we replicate other leading fraud research studies using our data and find that our method has the highest accuracy on fraudulent cases and competitive accuracy on nonfraudulent cases. The results validate the financial kernel together with support vector machines as a useful method for discriminating between fraudulent and nonfraudulent companies using only publicly available quantitative financial attributes. The results also show that the methodology has predictive value because, using only historical data, it was able to distinguish fraudulent from nonfraudulent companies in subsequent years.
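Scikit-learn's SVC accepts an arbitrary kernel as a callable returning a Gram matrix, which is enough to sketch the idea of a domain-specific kernel. Everything below is hypothetical: the synthetic data and the ratio-based kernel only imitate the spirit of the paper's financial kernel (comparing firms through year-over-year relationships among attributes) and are not its actual construction.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-in for financial attributes (hypothetical data; the paper
# uses real filings).  Columns: three attributes in year t-1 and year t.
n = 200
prev = rng.uniform(1.0, 10.0, size=(n, 3))
growth = np.where(rng.random(n) < 0.5, 1.0, 3.0)   # inflated growth marks "fraud"
y = (growth > 2.0).astype(int)
curr = prev * growth[:, None] * rng.uniform(0.9, 1.1, size=(n, 3))
X = np.hstack([prev, curr])

def ratio_kernel(A, B):
    """Custom kernel in the spirit of the financial kernel: compare firms
    through year-over-year ratios rather than raw values (the paper's
    kernel is more elaborate).  Any callable returning a Gram matrix can
    be passed as SVC's kernel."""
    ra = A[:, 3:] / A[:, :3]          # year-over-year ratios
    rb = B[:, 3:] / B[:, :3]
    return (ra @ rb.T + 1.0) ** 2     # polynomial kernel on ratio features

clf = SVC(kernel=ratio_kernel).fit(X[:150], y[:150])
acc = clf.score(X[150:], y[150:])
print(f"holdout accuracy: {acc:.2f}")
```

The point of the construction is that the implicit feature map encodes domain knowledge (here, that ratios rather than levels carry the signal), exactly the role the financial kernel plays in the paper.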
Decision Support Systems | 2010
Mark Cecchini; Haldun Aytug; Gary J. Koehler; Praveen Pathak
We develop a methodology for automatically analyzing text to aid in discriminating firms that encounter catastrophic financial events. The dictionaries we create from the Management Discussion and Analysis (MD&A) sections of 10-Ks discriminate fraudulent from non-fraudulent firms 75% of the time and bankrupt from non-bankrupt firms 80% of the time. Our results compare favorably with quantitative prediction methods. We further test for complementarities by merging quantitative data with text data. We achieve our best prediction results for both bankruptcy (83.87%) and fraud (81.97%) with the combined data, showing that the text of the MD&A complements the quantitative financial information.
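At its core, the dictionary approach reduces to counting domain terms in the MD&A text and feeding the resulting scores to a classifier alongside the quantitative attributes. A minimal sketch, with a hypothetical hand-picked dictionary (the paper learns its dictionaries from the filings themselves):

```python
# Minimal sketch of dictionary-based text scoring (hypothetical terms;
# the paper derives its dictionaries from MD&A sections of 10-K filings).
DISTRESS_TERMS = {"going concern", "covenant", "impairment",
                  "restructuring", "default", "liquidity"}

def dictionary_score(mdna_text):
    """Fraction of dictionary terms that appear in the MD&A text."""
    text = mdna_text.lower()
    hits = sum(term in text for term in DISTRESS_TERMS)
    return hits / len(DISTRESS_TERMS)

sample = ("Management notes substantial doubt about the company's ability "
          "to continue as a going concern given covenant violations and "
          "limited liquidity.")
score = dictionary_score(sample)
print(round(score, 2))   # 3 of 6 terms present -> 0.5
# A combined model would append this score (or per-term counts) to the
# quantitative financial attributes before training a classifier.
```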
Decision Support Systems | 2002
Joni L. Jones; Gary J. Koehler
The migration of auctions to the Internet provides a unique opportunity to harness the power of computing to create new auction forms that were previously impossible. We describe a new type of combinatorial auction that accepts rule-based bids. Allowing bids in the form of high-level rules relieves the buyer from the burden of enumerating all possible acceptable bundles. The allocation of goods requires solving a complex combinatorial problem, a task that is completely impractical in a conventional auction setting. We describe simplifying winner determination heuristics developed in this study to make large problems of this nature manageable.
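Winner determination here is weighted set packing, which is NP-hard, and a simple greedy heuristic conveys why heuristics make large instances manageable. This baseline (accept bids in decreasing price-per-item order) is only illustrative and is not one of the heuristics developed in the paper:

```python
def greedy_winners(bids):
    """Greedy winner-determination heuristic for a combinatorial auction.

    bids: list of (bundle, price) pairs, bundle a frozenset of item ids.
    Accept bids in decreasing order of price per item, skipping any bid
    that overlaps an already-allocated item.  Exact winner determination
    is NP-hard (weighted set packing), so simple heuristics like this
    trade optimality for tractability.
    """
    allocated, winners, revenue = set(), [], 0
    for bundle, price in sorted(bids, key=lambda b: b[1] / len(b[0]),
                                reverse=True):
        if allocated.isdisjoint(bundle):
            allocated |= bundle
            winners.append((bundle, price))
            revenue += price
    return winners, revenue

bids = [(frozenset({"a", "b"}), 10),
        (frozenset({"b", "c"}), 9),
        (frozenset({"c"}), 6),
        (frozenset({"d"}), 3)]
winners, revenue = greedy_winners(bids)
print(revenue)   # -> 19: accepts {c}, {a,b}, {d}; skips overlapping {b,c}
```

Rule-based bids as described in the paper would generate such (bundle, price) pairs implicitly, rather than requiring the bidder to enumerate every acceptable bundle.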
Omega-international Journal of Management Science | 1995
Hyunsoo Kim; Gary J. Koehler
Induction methods have recently been found to be useful in a wide variety of business-related problems, including the construction of expert systems. Decision tree induction is an important type of inductive learning method. Empirical results have shown that pruning a decision tree sometimes improves its accuracy. In this paper we summarize theoretical results on pruning and illustrate them with an example. We give a sample size sufficient for decision tree induction with pruning, based on recently developed learning theory. For situations where it is difficult to obtain a large enough sample, we provide several methods for a posteriori evaluation of the accuracy of a pruned decision tree. Finally, we summarize conditions under which pruning is necessary for better prediction accuracy.
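Scikit-learn's cost-complexity pruning (the `ccp_alpha` parameter) gives a quick modern illustration of the effect described above. This is generic tooling applied to synthetic noisy data, not the sample-size or a posteriori analysis the paper develops:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy synthetic data: an unpruned tree tends to overfit the label noise.
X, y = make_classification(n_samples=600, n_features=10, n_informative=3,
                           flip_y=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
# Cost-complexity pruning: larger ccp_alpha prunes more aggressively.
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.01).fit(X_tr, y_tr)

print("leaves:", full.get_n_leaves(), "->", pruned.get_n_leaves())
print("test accuracy:", full.score(X_te, y_te), "->", pruned.score(X_te, y_te))
```

On noisy data like this, the pruned tree is much smaller, and its holdout accuracy is often (though, as the paper's "sometimes" suggests, not always) higher.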