Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Jeffrey Dean Kelly is active.

Publication


Featured research published by Jeffrey Dean Kelly.


Computers & Chemical Engineering | 2008

Hierarchical decomposition heuristic for scheduling: Coordinated reasoning for decentralized and distributed decision-making problems

Jeffrey Dean Kelly; Danielle Zyngier

This paper presents a new technique for decomposing and rationalizing large decision-making problems into a common and consistent framework. We call this the hierarchical decomposition heuristic (HDH); it focuses on obtaining “globally feasible” solutions to the overall problem, i.e., solutions which are feasible for all decision-making elements in a system. The HDH is primarily intended to be applied as a standalone tool for managing a decentralized and distributed system when only globally consistent solutions are necessary, or as a lower bound to a maximization problem within a global optimization strategy such as Lagrangean decomposition. An industrial-scale scheduling example is presented that demonstrates the abilities of the HDH as an iterative and integrated methodology, in addition to three small motivating examples. Also illustrated is the HDH's ability to support several types of coordinated and collaborative interactions.
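
The coordination idea can be sketched in a few lines of Python. This is an illustrative toy only, not the paper's algorithm: the single capacity-feedback rule and all names are assumptions standing in for the HDH's richer coordination of decision-making elements.

```python
# Toy hierarchical-decomposition loop: an upper-level element proposes a
# target, lower-level elements check local feasibility, and the tightest
# violated limit is fed back until the target is "globally feasible".
def propose_target(upper_limit):
    """Upper level: naively propose the largest target its limit allows."""
    return upper_limit

def locally_feasible(target, capacity):
    """Lower level: a target is acceptable if it fits within capacity."""
    return target <= capacity

def hdh_like_loop(upper_limit, capacities, max_iters=20):
    for _ in range(max_iters):
        target = propose_target(upper_limit)
        violated = [c for c in capacities if not locally_feasible(target, c)]
        if not violated:
            return target              # feasible for every element
        upper_limit = min(violated)    # feed the tightest limit back upward
    return None                        # no globally feasible target found

print(hdh_like_loop(100.0, [80.0, 95.0, 120.0]))  # -> 80.0
```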


Archive | 2009

Multi-Product Inventory Logistics Modeling in the Process Industries

Danielle Zyngier; Jeffrey Dean Kelly

In this chapter the mathematical modeling of several types of inventories is detailed. The inventory types are classified as batch processes, pools, pipe lines, pile lines and parcels. The key construct for all inventory models is the “fill-hold/haul-draw” fractal found in all discontinuous inventory or holdup modeling. The equipment, vessel or unit must first be “filled” unless enough product is already held in the unit. Product can then be “held” or “hauled” for a definite (fixed) or indefinite (variable) amount of time and then “drawn” out of the unit when required. Mixed-integer linear programming (MILP) modeling formulations are presented for five different types of logistics inventory models; these are computationally efficient and can be readily applied to industrial decision-making problems.
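
The “fill-hold/haul-draw” construct maps naturally onto a small MILP. Below is a minimal sketch in Python with PuLP; the four-period horizon, capacities, demands and variable names are all invented for illustration and are not the chapter's formulations.

```python
# Minimal fill-hold-draw holdup balance as a MILP (PuLP + bundled CBC).
import pulp

T = range(4)                                         # assumed 4-period horizon
cap, fill_max = 100.0, 60.0
demand = [20.0, 30.0, 0.0, 40.0]                     # assumed draw requirements

m = pulp.LpProblem("fill_hold_draw", pulp.LpMinimize)
h = pulp.LpVariable.dicts("holdup", T, 0, cap)       # product held in the unit
f = pulp.LpVariable.dicts("fill", T, 0, fill_max)    # amount filled
d = pulp.LpVariable.dicts("draw", T, 0)              # amount drawn
y = pulp.LpVariable.dicts("is_fill", T, cat="Binary")  # fill setup logic

for t in T:
    prev = h[t - 1] if t > 0 else 0.0                # assumed empty at start
    m += h[t] == prev + f[t] - d[t]                  # hold: inventory balance
    m += f[t] <= fill_max * y[t]                     # fill only when set up
    m += d[t] == demand[t]                           # draw to meet demand

m += pulp.lpSum(y[t] for t in T)                     # minimize fill setups
m.solve(pulp.PULP_CBC_CMD(msg=False))
print([(pulp.value(f[t]), pulp.value(h[t])) for t in T])
```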


Computers & Chemical Engineering | 2004

Flowsheet decomposition heuristic for scheduling: a relax-and-fix method

Jeffrey Dean Kelly; John L. Mann

Decomposing large problems into several smaller subproblems is well known in any problem-solving endeavor and forms the basis for our flowsheet decomposition heuristic (FDH) described in this short note. It can be used as an effective strategy to decrease the time necessary to find good integer-feasible solutions when solving closed-shop scheduling problems found in the process industries. The technique is to appropriately assign each piece of equipment (i.e., process-units and storage-vessels) to a group and then to sequence these groups according to the material-flow-path of the production network, following the engineering structure of the problem. As many mixed-integer linear programming (MILP) problems are solved as there are groups, in a pre-specified order, fixing the binary variables after each MILP before proceeding to the next. In each MILP, only the binary variables associated with the current group are explicit search variables; the binary variables not yet searched on (those of the next in-line equipment) are relaxed. Three examples are detailed which establish the effectiveness of this relax-and-fix type heuristic.
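
A compact way to see the relax-and-fix mechanics is on a toy model. The PuLP sketch below groups six binaries into two assumed "flowsheet" groups; everything about the model itself is invented, and only the relax-fix-proceed loop mirrors the FDH idea.

```python
# Relax-and-fix: only the current group's binaries are integer; the rest are
# relaxed to [0, 1]; after each solve the group is fixed and we move on.
import pulp

m = pulp.LpProblem("toy_schedule", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", range(6), cat="Binary")
profit = [5, 4, 3, 7, 6, 2]
m += pulp.lpSum(profit[i] * x[i] for i in range(6))      # toy objective
m += pulp.lpSum(x[i] for i in range(6)) <= 3             # shared resource

groups = [[0, 1, 2], [3, 4, 5]]                          # assumed groups
fixed = {}
for group in groups:
    for i in range(6):
        if i not in fixed:                               # keep earlier fixings
            x[i].cat = pulp.LpBinary if i in group else pulp.LpContinuous
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    for i in group:                                      # fix, then proceed
        fixed[i] = round(x[i].varValue)
        x[i].lowBound = x[i].upBound = fixed[i]

print(fixed, pulp.value(m.objective))
```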


Computers & Chemical Engineering | 2004

Techniques for solving industrial nonlinear data reconciliation problems

Jeffrey Dean Kelly

The focus of this short note is to highlight several techniques for solving industrial nonlinear data reconciliation problems. The main areas of discussion are starting-value generation, row and column scaling, regularization of the kernel matrix, the use of different and independent unconstrained solving methods (such as ridge regression, matrix projection, Newton's method and singular value decomposition) and infeasibility handling. These techniques are usually necessary to arrive at solutions to nonlinear reconciliation problems which are poorly initiated, ill-conditioned and even inconsistent. A relatively large and well-studied numerical example taken from the mining process industry is solved, demonstrating some of the techniques discussed.
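
Two of the techniques named above, scaling and ridge-style regularization, are easy to illustrate with NumPy. The matrix and numbers below are random stand-ins, not the mining example from the note.

```python
# Row/column scaling plus a ridge-regularized least-squares step on a
# nearly rank-deficient "kernel" matrix.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 4))
A[:, 3] = A[:, 2] * 1e-8            # make the system nearly rank-deficient
b = rng.normal(size=5)

r = 1.0 / np.abs(A).max(axis=1)     # row scaling by infinity norms
c = 1.0 / np.abs(A).max(axis=0)     # column scaling by infinity norms
As = (A * r[:, None]) * c[None, :]

lam = 1e-6                          # ridge (regularization) parameter
x_scaled = np.linalg.solve(As.T @ As + lam * np.eye(4), As.T @ (r * b))
print(c * x_scaled)                 # undo the column scaling
```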


Computers & Chemical Engineering | 2003

Smooth-and-dive accelerator: a pre-MILP primal heuristic applied to scheduling

Jeffrey Dean Kelly

This article describes an effective and simple primal heuristic to greedily encourage a reduction in the number of binary or 0–1 logic variables before an implicit enumerative-type search heuristic is deployed to find integer-feasible solutions to ‘hard’ production scheduling problems. The basis of the technique is to apply well-known smoothing functions, used to solve complementarity problems, to the local optimization problem of minimizing, over all binary variables, the weighted sum of each variable multiplied by its complement. The basic algorithm of the ‘smooth-and-dive accelerator’ (SDA) is to solve successive linear programming (LP) relaxations with the smoothing functions added to the existing problem's objective function and to use, if required, a sequence of binary variable fixings known as ‘diving’. If the smoothing-function term is not driven to zero as part of the recursion, then a branch-and-bound or branch-and-cut search heuristic is called to close the procedure, finding at least integer-feasible solutions. The heuristic's effectiveness is illustrated by its application to an oil-refinery's crude-oil blendshop scheduling problem, which has commonality with many other production scheduling problems in the continuous and semi-continuous (CSC) process domains.
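
In symbols, the penalized relaxation described above can be written roughly as follows; the notation is assumed for illustration and the note's exact smoothing functions may differ.

```latex
% y_i: 0-1 logic variables with weights w_i; \mu: penalty weight on the
% complementarity-style term, which vanishes only at integral y.
\min_{x,\,y}\; c^{\top}x \;+\; \mu \sum_i w_i\, y_i\,(1 - y_i)
\qquad \text{s.t.}\quad Ax + By \le b,\quad 0 \le y \le 1
```

At any integral y the penalty term is zero, so driving it toward zero across successive LP relaxations is what encourages integrality before diving or branching.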


Computers & Chemical Engineering | 1998

A regularization approach to the reconciliation of constrained data sets

Jeffrey Dean Kelly

A new iterative solution to the statistical adjustment of constrained data sets is derived in this paper. The method is general and may be applied to any weighted least-squares problem containing nonlinear equality constraints. Other methods are available to solve this class of problem, but they become complicated when the unmeasured variables and model parameters are not all observable and the model constraints are not all independent. Notable exceptions are the methods of Crowe (1986) and Pai and Fisher (1988), although these implementations require the determination of a matrix projection at each iteration, which may be computationally expensive. An alternative solution is proposed which makes the pragmatic assumption that the unmeasured variables and model parameters are known with a finite but equal uncertainty. We then re-formulate the well-known data reconciliation solution in the absence of these unknowns to arrive at our new solution; hence the regularization approach. Another procedure, for the classification of observable and redundant variables, which does not require the explicit computation of the matrix projection, is also given. The new algorithm is demonstrated using three illustrative examples previously used in other studies.
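
In the spirit of the abstract, the regularized reconciliation problem can be stated roughly as below; the notation is assumed, not taken from the paper.

```latex
% y: measurements with covariance \Sigma; x: reconciled estimates;
% u: unmeasured variables/parameters, assumed known with a finite, equal
% uncertainty (hence the common weight \lambda); f(x,u)=0: model constraints.
\min_{x,\,u}\; (y - x)^{\top}\Sigma^{-1}(y - x) \;+\; \lambda\, u^{\top}u
\qquad \text{s.t.}\quad f(x, u) = 0
```

The λ-weighted term is the regularization: it keeps the unobservable directions of u bounded without requiring a matrix projection at each iteration.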


Computers & Chemical Engineering | 2004

Formulating large-scale quantity–quality bilinear data reconciliation problems

Jeffrey Dean Kelly

This short note describes the relevant details of formulating and implementing general bilinear quantity–quality balances found in industrial processes when data reconciliation is applied. The modeling also allows for the straightforward generation of analytical first-order derivatives. Quantity–quality balance problems are those that involve both extensive and intensive stream variables, such as flows and compositions respectively, related through the laws of conservation of material, energy and momentum. The balance equations involve both linear and bilinear (multi-linear) terms of quantity and quality, where quantity-times-quantity and quality-times-quality terms are not germane, although they can be included easily. Two numerical examples are provided to demonstrate the new formulation technique.
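
The F·q structure and its analytical derivatives can be shown concretely. The two-feed blender below, and every symbol in it, are assumptions for illustration only.

```python
# Bilinear quantity-quality balances for a toy two-feed blender, with the
# analytical Jacobian: d(F*q)/dF = q and d(F*q)/dq = F.
import numpy as np

def residuals(v):
    """v = [F1, F2, F3, q1, q2, q3]: total and component balances."""
    F1, F2, F3, q1, q2, q3 = v
    return np.array([
        F1 + F2 - F3,                  # linear quantity balance
        F1 * q1 + F2 * q2 - F3 * q3,   # bilinear quantity-quality balance
    ])

def jacobian(v):
    """First-order derivatives of the balances, one row per equation."""
    F1, F2, F3, q1, q2, q3 = v
    return np.array([
        [1.0, 1.0, -1.0, 0.0, 0.0, 0.0],
        [q1,  q2,  -q3,  F1,  F2,  -F3],
    ])

v = np.array([60.0, 40.0, 100.0, 0.10, 0.30, 0.18])
print(residuals(v))                    # [0. 0.] for this consistent data
```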


Computers & Chemical Engineering | 1999

Reconciliation of process data using other projection matrices

Jeffrey Dean Kelly

Two other projection matrices, used in the solution of data reconciliation problems, are described in this short note. The first matrix projection introduced is straightforward to compute, is idempotent and can be easily updated when a measurement is deleted or removed from the problem. The second projection matrix, while more numerically intensive in its computation than the first, may prove superior when ill-behaved or ill-conditioned systems are reconciled, given that it employs the very numerically stable singular value decomposition. Two small examples, one non-linear and the other linear, are presented which demonstrate the use of the new projection matrices and serve as a comparison to the well-known matrix projection of Crowe, Garcia, & Hrymak (1983).
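
The SVD-based construction alluded to above can be sketched in NumPy: the rows of the projection span the left null space of the unmeasured-variable columns, so multiplying the balances by it eliminates those variables. The random matrices are stand-ins, not the note's examples.

```python
# Projection from the SVD: P @ A_u = 0, so P removes the unmeasured
# variables from linear(ized) balance equations A_x x + A_u u = 0.
import numpy as np

rng = np.random.default_rng(1)
A_x = rng.normal(size=(6, 4))     # columns of measured variables
A_u = rng.normal(size=(6, 2))     # columns of unmeasured variables

U, s, Vt = np.linalg.svd(A_u)     # very numerically stable decomposition
rank = int((s > 1e-10).sum())
P = U[:, rank:].T                 # left null space of A_u

print(np.allclose(P @ A_u, 0))    # True: unmeasured part eliminated
print((P @ A_x).shape)            # reduced balances in measured vars only
```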


Archive | 2017

Enterprise-Wide Optimization for Operations of Crude-Oil Refineries: Closing the Procurement and Scheduling Gap

Brenno C. Menezes; Ignacio E. Grossmann; Jeffrey Dean Kelly

We propose a quantitative analysis of enterprise-wide optimization for crude-oil refinery operations, considering the integration of planning and scheduling to close the decision-making gap between the procurement of raw materials or feedstocks and production scheduling operations. From a month to an hour, re-planning and re-scheduling iterations can better predict the processed crude-oil basket, diet or final composition, reducing production costs and the impacts on the process and on product demands with respect to the quality of the raw materials. The goal is to interface planning and scheduling decisions within a time-window of a week with the support of re-optimization steps. The selection, delivery, storage and mixture of crude-oil feeds, from tactical procurement planning up to the blend scheduling operations, are then made more appropriately. The top-down sequence of solutions is integrated in a feedback iteration, both to reduce the time-grids and to serve as a key performance indicator.
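
A toy feedback loop conveys the re-planning/re-scheduling interplay; every number and rule below is invented, standing in for the chapter's planning and scheduling optimization models.

```python
# Rolling re-planning/re-scheduling: the plan spreads remaining demand over
# the weeks left; the schedule realizes what logistics allows; the realized
# runs feed back into the next planning pass.
def plan(remaining_demand, weeks_left):
    return remaining_demand / weeks_left        # naive planning rule

demand = 400.0                                  # assumed monthly crude demand
availability = [90.0, 110.0, 95.0, 120.0]       # assumed weekly logistics caps
processed = []
for week, avail in enumerate(availability):
    target = plan(demand - sum(processed), len(availability) - week)
    processed.append(min(target, avail))        # scheduling realization
print(processed, sum(processed))                # the gap closes to 400.0
```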


Archive | 2017

Decision Automation for Oil and Gas Well Startup Scheduling Using MILP

Jeffrey Dean Kelly; Brenno C. Menezes; Ignacio E. Grossmann

A novel approach to scheduling the startup of oil and gas wells in multiple fields over a decade-plus discrete-time horizon is presented. The major innovation of our formulation is to treat each well or well type as a batch process with time-varying yields or production rates that follow a declining, decaying or diminishing curve profile. Side or resource constraints, such as process plant capacities, utilities and the rigs needed to place the wells, are included in the model. Current approaches to this long-term planning problem in a monthly time-step use manual decision-making with simulators, where many scenarios, samples or cases are required to facilitate the development of possible feasible solutions. Our solution uses mixed-integer linear programming (MILP), which automates the decision of which well to start up next in order to find optimized solutions. Plots of an illustrative example highlight the operation of the well startup system and the decaying production of the wells.
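
The batch-process view of a well is easy to prototype. The PuLP sketch below uses invented decline curves, capacities and a six-period horizon; it shows the startup-time convolution, not the paper's full model.

```python
# Well startups as batch processes: once started, a well produces along its
# decline curve; a plant capacity couples wells; one rig limits startups.
import pulp

T = range(6)
wells = {"A": [10, 8, 6, 4, 3, 2], "B": [12, 7, 5, 4, 3, 2]}  # decline curves

m = pulp.LpProblem("well_startup", pulp.LpMaximize)
s = pulp.LpVariable.dicts("start", [(w, t) for w in wells for t in T],
                          cat="Binary")

def production(w, t):
    """Output of well w in period t, convolved with its startup period."""
    return pulp.lpSum(wells[w][t - tau] * s[(w, tau)] for tau in T if tau <= t)

for w in wells:                                   # each well starts at most once
    m += pulp.lpSum(s[(w, t)] for t in T) <= 1
for t in T:
    m += pulp.lpSum(s[(w, t)] for w in wells) <= 1          # one rig
    m += pulp.lpSum(production(w, t) for w in wells) <= 15  # plant capacity

m += pulp.lpSum(production(w, t) for w in wells for t in T) # total output
m.solve(pulp.PULP_CBC_CMD(msg=False))
print([(w, t) for w in wells for t in T if s[(w, t)].varValue > 0.5])
```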

Collaboration


Dive into Jeffrey Dean Kelly's collaboration.

Top Co-Authors

Brenno C. Menezes

Carnegie Mellon University


Darci Odloak

University of São Paulo
