David F. Rogers
University of Cincinnati
Publications
Featured research published by David F. Rogers.
Operations Research | 1991
David F. Rogers; Robert D. Plante; Richard T. Wong; James R. Evans
A fundamental issue in the use of optimization models is the tradeoff between the level of detail and the ease of using and solving the model. Aggregation and disaggregation techniques have proven to be valuable tools for manipulating data and determining the appropriate policies to employ for this tradeoff. Furthermore, aggregation and disaggregation techniques offer promise for solving large-scale optimization models, supply a set of promising methodologies for studying the underlying structure of both univariate and multivariate data sets, and provide a set of tools for manipulating data for different levels of decision makers. In this paper, we develop a general framework for aggregation and disaggregation methodology, survey previous work regarding aggregation and disaggregation techniques for optimization problems, illuminate the appropriate role of aggregation and disaggregation methodology for optimization applications, and propose future research directions.
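To make the aggregation idea concrete, here is a minimal sketch (not from the paper; all data hypothetical) of one common aggregation step for a transportation model: customers in the same cluster are merged into a single aggregate customer whose demand is the cluster total and whose unit cost from each plant is a demand-weighted average.

```python
# Hypothetical illustration of customer aggregation in a transportation model.

def aggregate_customers(cost, demand, clusters):
    """cost[i][j]: unit cost from plant i to customer j;
    demand[j]: demand of customer j;
    clusters: list of customer-index lists defining the aggregation."""
    agg_demand = [sum(demand[j] for j in c) for c in clusters]
    agg_cost = [
        [sum(cost[i][j] * demand[j] for j in c) / sum(demand[j] for j in c)
         for c in clusters]
        for i in range(len(cost))
    ]
    return agg_cost, agg_demand

cost = [[4, 6, 9], [5, 4, 7]]   # 2 plants x 3 customers (made-up numbers)
demand = [10, 20, 30]
clusters = [[0, 1], [2]]        # merge customers 0 and 1 into one aggregate
agg_cost, agg_demand = aggregate_customers(cost, demand, clusters)
print(agg_demand)   # [30, 30]
```

The aggregated model has fewer columns and is cheaper to solve; the disaggregation step (splitting the aggregate flow back among the original customers) is where the approximation error discussed in this literature arises.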
Journal of Operations Management | 1991
Scott M. Shafer; David F. Rogers
Abstract Cellular manufacturing is employed to achieve efficiencies in production by exploiting similarities inherent in the production of parts. Specifically, parts with similar processing requirements are identified and the equipment necessary to process these groups of parts is identified and located together. Several important design objectives associated with cellular manufacturing are to: 1) reduce setup times, 2) produce parts cell complete, i.e., minimize intercellular movements of parts, 3) minimize investment in new equipment, and 4) maintain acceptable machine utilization levels. The goal of this research was to develop a cell formation procedure that directly addressed these design objectives. To achieve this, three goal programming models were developed corresponding to three unique situations: (1) setting up an entirely new system and purchasing all new equipment, (2) reorganizing the system using only existing equipment, and (3) reorganizing the system using existing equipment and some new equipment. Several assumptions were made in the development of the goal programming models. First, it was assumed that each part had a fixed routing. Also, it was assumed that the processing times for the parts at each machine, the demand for each part, and the capacity and cost of each machine were known. In addition it was assumed that a given machine type could be placed in more than one cell and that the sequence in which the parts are processed affects setup times. Finally, it was assumed that a batch of each part is produced every production cycle and that only one batch of parts is processed in a particular cell at any given time. Clearly defining the objectives and constraints associated with the cell formation problem is the major contribution of the three formulations. 
Correct identification of the problem and the relationships inherent to cellular manufacturing is a necessary first step in the decision process that heretofore has not received adequate attention. However, because of the large number of 0-1 variables contained in the goal programming formulations, they are very difficult to solve for realistically sized problems. Thus, a heuristic solution procedure is presented. The heuristic procedure involves partitioning the goal programming formulations into two subproblems and solving them in successive stages. A numerical example is presented that illustrates the two-stage heuristic procedure.
International Journal of Production Research | 1993
Scott M. Shafer; David F. Rogers
An overview of similarity and dissimilarity measures applicable to cellular manufacturing is presented. First is an overview of general measures of association. Next, similarity and distance measures for determining part families are discussed. Subsequently, similarity and distance measures for clustering machine types are discussed. Finally, the paper is concluded with a discussion of the evolution of similarity measures applicable to cellular manufacturing.
International Journal of Production Research | 1993
Scott M. Shafer; David F. Rogers
A new similarity measure that is easy to calculate, intuitively appealing, and overcomes the bias inherent in many existing similarity measures is proposed. This new similarity measure is particularly useful for reducing the size of the cell formation problem, thus reducing the computational burden. Additionally, an investigation comparing several measures of association is presented. Included in this investigation is a comparison of single linkage clustering, average linkage clustering, complete linkage clustering, and Ward's method for clustering.
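For context, the classic baseline in this literature is the Jaccard similarity between two machines, computed from a binary machine-part incidence matrix. The sketch below shows that baseline (not the new measure the paper proposes) on made-up data:

```python
# Jaccard similarity between machine rows of a binary incidence matrix.
# This is the standard baseline measure, shown here with hypothetical data.

def jaccard(row_a, row_b):
    both = sum(1 for a, b in zip(row_a, row_b) if a and b)      # parts both machines process
    either = sum(1 for a, b in zip(row_a, row_b) if a or b)     # parts either processes
    return both / either if either else 0.0

incidence = [          # machines x parts (hypothetical)
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
]
n = len(incidence)
sim = [[jaccard(incidence[i], incidence[k]) for k in range(n)] for i in range(n)]
print(sim[0][1])   # 2 shared parts / 3 parts used by either machine
```

A hierarchical clustering routine (single, average, or complete linkage, or Ward's method) is then applied to such a similarity matrix to form candidate machine cells.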
European Journal of Operational Research | 2010
George G. Polak; David F. Rogers; Dennis J. Sweeney
Recent extreme economic developments nearing a worst-case scenario motivate further examination of minimax linear programming approaches for portfolio optimization. Risk measured as the worst-case return is employed and a portfolio is constructed by maximizing returns subject to a risk threshold. Minimax model properties are developed and parametric analysis of the risk threshold connects this model to expected value along a continuum, revealing an efficient frontier segmenting investors by risk preference. Divergence of minimax model results from expected value is quantified, and a set of possible prior distributions, expressing a degree of Knightian uncertainty corresponding to risk preference, is determined. The minimax model will maximize return with respect to one of these prior distributions, providing valuable insight into an investor's risk attitude and decision behavior. Linear programming models are proposed for financial firms to assist individual investors in hedging against losses by buying insurance, along with a model for designing variable annuities.
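The core minimax idea, stripped of the paper's model details, is to choose weights that maximize the worst (minimum) scenario return. For two assets this can be illustrated with hypothetical scenario data and a fine grid search over the weight on asset 1 (a real implementation would solve the equivalent linear program):

```python
# Minimax portfolio sketch: maximize the worst-case scenario return.
# Scenario returns below are invented for illustration.

scenarios = [           # (return of asset 1, return of asset 2) per scenario
    (0.10, 0.02),
    (-0.05, 0.03),
    (0.07, 0.01),
]

def worst_case(w):
    """Worst scenario return when weight w is on asset 1, 1 - w on asset 2."""
    return min(w * r1 + (1 - w) * r2 for r1, r2 in scenarios)

best_w = max((i / 1000 for i in range(1001)), key=worst_case)
print(round(best_w, 3), round(worst_case(best_w), 4))
```

Raising or lowering the risk threshold in the paper's parametric analysis traces how this worst-case-protecting portfolio shifts toward the expected-value-maximizing one.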
European Journal of Operational Research | 2005
David F. Rogers; Shailesh S. Kulkarni
Abstract The problem of bivariate clustering for the simultaneous grouping of rows and columns of matrices was addressed with a mixed-integer linear programming model. The model was solved using conventional methodology for very small problems, but solving even small to moderate-sized problems was a formidable challenge. Because of the NP-complete nature of this class of problems, a genetic algorithm was developed to solve realistically sized problems of larger dimensions. A commonly encountered example is the simultaneous clustering of parts into part families and machines into machine cells in a cellular manufacturing context for group technology. The attractiveness of employing the optimization model (with an objective function that is a sum of dissimilarity measures) to provide simultaneous grouping of part types and machine types was compared to solutions found by employing the commonly used grouping efficacy measure. For cellular manufacturing problem instances from the literature, the intrinsic differences between the objective of the proposed model and grouping efficacy are highlighted. The solution to the general model, found by employing a genetic algorithm and applying a simple heuristic, was shown to perform as well as other algorithms in finding the commonly accepted best-known solutions for grouping efficacy. Further examples in industrial purchasing behavior and market segmentation help reveal the general applicability of the model for obtaining natural clusters.
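The grouping efficacy measure mentioned above is standard in the cell formation literature: with e ones in the incidence matrix, e_out exceptional elements (ones outside the diagonal blocks) and e_void voids (zeros inside the blocks), efficacy is (e - e_out) / (e + e_void). A small sketch with hypothetical data:

```python
# Standard grouping efficacy for a machine-part incidence matrix,
# given a cell assignment for machines and a family assignment for parts.

def grouping_efficacy(incidence, machine_cell, part_family):
    e = e_out = e_void = 0
    for i, row in enumerate(incidence):
        for j, a in enumerate(row):
            inside = machine_cell[i] == part_family[j]   # entry lies in a diagonal block
            if a:
                e += 1
                if not inside:
                    e_out += 1          # exceptional element
            elif inside:
                e_void += 1             # void inside a block
    return (e - e_out) / (e + e_void)

incidence = [[1, 1, 0], [1, 1, 0], [0, 0, 1]]
print(grouping_efficacy(incidence, [0, 0, 1], [0, 0, 1]))  # perfect block structure -> 1.0
```

The paper's point is that optimizing a sum of dissimilarities and optimizing this ratio are genuinely different objectives, even though both reward clean block-diagonal structure.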
IEEE Transactions on Power Systems | 2013
David F. Rogers; George G. Polak
Several pure binary integer optimization models are developed for clustering time periods by similarity for electricity utilities seeking assistance with pricing strategies. The models include alternative objectives for characterizing various notions of within-cluster distances, admit as feasible only clusters that are contiguous, and allow for circularity, where time periods at the beginning and end of the planning cycle may be in the same cluster. Restrictions upon cluster size may conveniently be included without the need of additional constraints. The models are populated with a real-world dataset of electricity usage for 93 buildings and solutions and run-times attained by conventional optimization software are compared with those by dynamic programming, or by a greedy algorithm applicable to one of the models, that run in polynomial time. The results provide time-of-use segments that an electricity utility may employ for selective pricing for peak and off-peak time periods to influence demand for the purpose of load leveling.
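The polynomial-time dynamic program referred to above can be sketched for the linear (non-circular) case: partition a sequence of time periods into k contiguous clusters minimizing total within-cluster sum of squared deviations from the cluster mean. All data below are hypothetical, and circularity can be handled by repeating the computation for each rotation of the sequence:

```python
# DP for optimal contiguous clustering of a linear sequence into k clusters.

def cluster_cost(x, a, b):
    """Sum of squared deviations of x[a:b] around its mean."""
    seg = x[a:b]
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def contiguous_kmeans(x, k):
    n = len(x)
    INF = float("inf")
    # dp[i][c]: best cost of splitting the first i periods into c contiguous clusters
    dp = [[INF] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for c in range(1, min(k, i) + 1):
            dp[i][c] = min(dp[j][c - 1] + cluster_cost(x, j, i)
                           for j in range(c - 1, i))
    return dp[n][k]

usage = [2.0, 2.1, 9.0, 9.2, 3.0, 3.1]   # hourly loads, invented for illustration
print(contiguous_kmeans(usage, 3))
```

Because only contiguous segments are feasible, the search space collapses from all partitions to O(n^2 k) subproblems, which is what makes exact solution practical for time-of-use segmentation.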
Decision Sciences | 2013
Claudia R. Rosales; Uday S. Rao; David F. Rogers
We consider a supply chain structure with shipments from an external warehouse directly to retailers and compare two enhancement options: costly transshipment among retailers after demand has been realized vs. cost-free allocation to the retailers from the development of a centralized depot. Stochastic programming models are developed for both the transshipment and allocation structures. We study the impact of cost parameters and demand coefficient of variation on both system structures. Our results show an increasing convex relationship between average costs and demand coefficient of variation, and furthermore that this increase is more pronounced for the allocation structure. We employ simulation and nonlinear search techniques to computationally compare the cost performance of allocation and transshipment structures under a wide range of system parameters such as demand uncertainty and correlation; lead times from the external warehouse to retailers, from warehouse to central depot, and from depot to retailers; and transshipment, holding, and penalty costs. The transshipment approach is found to outperform allocation for a broad range of parameter inputs including many situations for which transshipment is not an economically sound decision for a single period. The insights provided enable the manager to choose whether to invest in reducing lead times or demand uncertainty and assist in the selection of investments across identical and nonidentical retailers.
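The intuition behind the transshipment option can be shown with a toy Monte Carlo sketch (all parameters invented, and far simpler than the paper's stochastic programs): after demand is realized, a retailer with surplus ships to a retailer with shortage, trading a transshipment fee against the penalty cost of unmet demand.

```python
# Toy two-retailer comparison: no pooling vs. post-demand transshipment.

import random

def simulate(n_trials, stock, penalty, holding, tranship, seed=1):
    rng = random.Random(seed)
    no_pool = pool = 0.0
    for _ in range(n_trials):
        d1, d2 = rng.gauss(50, 15), rng.gauss(50, 15)
        s1, s2 = stock - d1, stock - d2          # leftover (+) or shortage (-)
        # Without transshipment: holding cost on surplus, penalty on shortage.
        no_pool += sum(holding * max(s, 0) + penalty * max(-s, 0) for s in (s1, s2))
        # With transshipment: move min(total surplus, total shortage) units.
        moved = min(max(s1, 0) + max(s2, 0), max(-s1, 0) + max(-s2, 0))
        pool += (holding * (max(s1, 0) + max(s2, 0) - moved)
                 + penalty * (max(-s1, 0) + max(-s2, 0) - moved)
                 + tranship * moved)
    return no_pool / n_trials, pool / n_trials

base, ts = simulate(20000, stock=55, penalty=9.0, holding=1.0, tranship=2.0)
print(base > ts)   # transshipment lowers expected cost for these parameters
```

With these made-up costs each transshipped unit saves the penalty and holding charges at the price of the shipping fee, so pooling pays off whenever one retailer runs short while the other has stock left.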
Computers & Operations Research | 1999
Susan K. Norman; David F. Rogers; Martin S. Levy
Abstract A priori and a posteriori error bounds for a transportation problem model at different levels of aggregation were statistically compared. An experimental design was used to (1) examine the size and significance of correlation between all pairs of the a priori error bound, a posteriori error bound, and actual error and (2) quantify the size and significance of the difference of the a posteriori error bound from actual error. Two different methods for calculating a posteriori error bounds were utilized. Results are for aggregating customers in a transportation model using one aggregation strategy and varying the level of aggregation on a set of randomly generated problems. Results show significant correlation between a posteriori error and actual error. A priori error is not significantly correlated with actual error. These preliminary results indicate that calculating the a posteriori error bound to select the appropriate aggregation level is a helpful strategy, since the a posteriori bound varies in the same way that the actual error varies. In addition, one method of calculating the a posteriori bound is determined to be significantly tighter than the other.

Scope and purpose

Aggregation/disaggregation techniques are used to reduce a large model to a smaller model. Error bounds for the objective function of an aggregated linear programming model quantify the loss of information due to aggregation. There are two types of error bounds, a priori and a posteriori, and several methods to calculate the a posteriori bound. This is an exploratory study to determine if these error bounds vary consistently when different levels of aggregation are applied to the customers of the same transportation problem model and to statistically compare the tightness of these bounds. Results show that under the conditions of this study there is significant correlation between the actual aggregation error and a posteriori error.
This implies that aggregation levels that produce large a posteriori error bounds are associated with large actual error and aggregation levels that produce small a posteriori error bounds are associated with small actual error. Results also show that of the two methods used to calculate the a posteriori bound, one consistently determines a tighter bound.
Manufacturing Research and Technology | 1995
David F. Rogers; S.M. Shafer
Publisher Summary A performance measure is considered meaningful when it is related to one or more of the design objectives associated with cellular manufacturing. In this chapter, several design objectives associated with cellular manufacturing are identified. Then, based upon these design objectives, appropriate performance measures are discussed and compared. Also included is a review and critique of performance measures used in previous studies for comparing cell formation procedures. In recent decades, cellular manufacturing (CM) has emerged as a promising approach for improving operations in batch and job shop environments, particularly in situations for which the divisions of the production processes are distinct. For CM, parts with similar processing requirements are identified and grouped together to form part families. An objective of this chapter is to offer a framework for comparing alternate cell formation procedures. The chapter reviews the studies for comparing various cell formation procedures. A framework for comparing alternative cell formation solutions is provided. Most of the performance measures in this chapter may be rationally utilized to gauge a CM configuration depending upon the situation encountered and the managerially stated objectives. Assessing machine utilization levels can be accomplished statically or dynamically. Resource utilization measures are among the most common measures employed in simulation analysis. Another consideration related to machine utilization is the way in which the machines are weighted. Average machine utilization with all machines weighted equally may frequently not be very appropriate because in most plants only a few machines are critical.
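The chapter's closing point about machine weighting can be illustrated with a small hypothetical sketch: an unweighted average utilization can mask low utilization everywhere except the few critical machines, so weighting the bottleneck more heavily gives a very different picture.

```python
# Equal-weight vs. criticality-weighted average machine utilization.
# Utilization figures and weights below are invented for illustration.

def weighted_utilization(util, weights):
    return sum(u * w for u, w in zip(util, weights)) / sum(weights)

util = [0.95, 0.40, 0.35, 0.30]     # one bottleneck, three lightly loaded machines
equal = weighted_utilization(util, [1, 1, 1, 1])
critical = weighted_utilization(util, [10, 1, 1, 1])   # weight the bottleneck heavily
print(round(equal, 3), round(critical, 3))
```

The equal-weight average suggests a half-idle shop, while the criticality-weighted figure reflects that the machine that actually constrains throughput is nearly saturated.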