Yasin Gocgun
Istanbul Kemerburgaz University
Publications
Featured research published by Yasin Gocgun.
European Journal of Operational Research | 2008
Ihsan Sabuncuoglu; Yasin Gocgun; Erdal Erel
Beam search (BS) is used as a heuristic to solve various combinatorial optimization problems, ranging from scheduling to assembly line balancing. In this paper, we develop a backtracking and an exchange-of-information (EOI) procedure to enhance the traditional beam search method. The backtracking enables us to return to previous solution states in the search process with the expectation of obtaining better solutions. The EOI is used to transfer information accumulated in a beam to other beams to yield improved solutions. We develop six versions of enhanced beam search algorithms to solve the mixed-model assembly line scheduling problem. The results of computational experiments indicate that the backtracking and EOI procedures that utilize problem-specific information generally improve the solution quality of BS.
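To make the backtracking idea concrete, below is a minimal, generic Python sketch of beam search that keeps the best `beam_width` partial solutions at each level and later restarts from the most promising pruned node. The `expand`, `cost`, and `is_complete` callbacks, the pruned-node heap, and the restart rule are illustrative assumptions; this is not the paper's specific backtracking or exchange-of-information procedure.

```python
import heapq

def beam_search_with_backtracking(initial, expand, cost, is_complete,
                                  beam_width=3, max_backtracks=5):
    """Generic beam search that keeps the best `beam_width` partial solutions
    per level and, after a pass, restarts from the most promising pruned node.

    expand(node)      -> iterable of child nodes
    cost(node)        -> lower-is-better evaluation of a (partial) node
    is_complete(node) -> True if the node is a full solution
    """
    pruned = []                       # pruned nodes kept for backtracking
    best, best_cost = None, float("inf")

    def run(start):
        nonlocal best, best_cost
        beam = [start]
        while beam:
            children = [c for n in beam for c in expand(n)]
            if not children:
                break
            children.sort(key=cost)
            beam, dropped = children[:beam_width], children[beam_width:]
            for d in dropped:         # remember pruned states for later
                heapq.heappush(pruned, (cost(d), id(d), d))
            for n in beam:
                if is_complete(n) and cost(n) < best_cost:
                    best, best_cost = n, cost(n)

    run(initial)
    for _ in range(max_backtracks):   # return to earlier solution states
        if not pruned:
            break
        _, _, node = heapq.heappop(pruned)
        run(node)
    return best, best_cost
```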
International Journal of Production Research | 2007
Erdal Erel; Yasin Gocgun; Ihsan Sabuncuoglu
In today's manufacturing environments, companies have to produce a large variety of products in small quantities on a single assembly line. In this paper, we use a beam search (BS) approach to solve the model-sequencing problem of mixed-model assembly lines (MMALs). Specifically, we develop six BS algorithms for part-usage variation and load-leveling performance measures. The results of computational experiments indicate that the proposed BS methods are competitive with well-known heuristics in the literature. A comprehensive bibliography is also provided.
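For reference, a part-usage variation measure of the kind commonly used in mixed-model sequencing can be computed as in the Python sketch below. This is the classic cumulative-usage smoothing measure, stated generically; it is not necessarily the exact objective evaluated in the paper.

```python
def part_usage_variation(sequence, demand):
    """Sum over production cycles of squared deviations between actual and
    ideal cumulative model usage (the classic smoothing measure).

    sequence : list of model identifiers, one per production cycle
    demand   : dict model -> total number of units required
    """
    total = sum(demand.values())
    ideal_rate = {m: d / total for m, d in demand.items()}
    produced = {m: 0 for m in demand}
    variation = 0.0
    for k, model in enumerate(sequence, start=1):
        produced[model] += 1
        for m in demand:
            variation += (produced[m] - k * ideal_rate[m]) ** 2
    return variation

# hypothetical usage: two units of model "A" and one of model "B"
# part_usage_variation(["A", "B", "A"], {"A": 2, "B": 1})
```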
Health Care Management Science | 2014
Yasin Gocgun; Martin L. Puterman
We study a scheduling problem in which arriving patients require appointments on specific future days within a treatment-specific time window. This research is motivated by a study of chemotherapy scheduling practices at the British Columbia Cancer Agency (Canada). We formulate this problem as a Markov decision process (MDP). Since the resulting MDPs are intractable to exact methods, we employ linear-programming-based approximate dynamic programming (ADP) to obtain approximate solutions. Using simulation, we compare the performance of the resulting ADP policies to practical, easy-to-use heuristic decision rules under diverse scenarios. The results indicate that ADP is promising in several scenarios, and that a specific easy-to-use heuristic performs well in the idealized chemotherapy scheduling setting we study.
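A common way to set up linear-programming-based ADP is to restrict the value function to a linear architecture V(s) ~ phi(s) . w and impose the Bellman inequalities only on sampled transitions. The Python sketch below shows that generic construction with scipy; the feature map, the single-sample expectations, and the empirical state-relevance weights are simplifying assumptions and not the formulation used in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def approx_lp_weights(samples, phi, gamma=0.95):
    """Fit V(s) ~ phi(s) . w by an approximate linear program: minimize the
    summed approximate values over sampled states subject to sampled Bellman
    inequalities V(s) >= r + gamma * V(s_next).

    samples : list of (s, a, r, s_next) transitions
    phi     : feature map, state -> 1-D numpy array
    Returns the weight vector w.
    """
    k = len(phi(samples[0][0]))
    # objective: state-relevance weights taken as the empirical state visits
    c = sum(phi(s) for s, _, _, _ in samples)
    # constraint (gamma*phi(s_next) - phi(s)) . w <= -r  for each sample
    A_ub = np.array([gamma * phi(sn) - phi(s) for s, _, _, sn in samples])
    b_ub = np.array([-r for _, _, r, _ in samples])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * k, method="highs")
    return res.x
```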
Artificial Intelligence in Medicine | 2011
Yasin Gocgun; Brian W. Bresnahan; Archis Ghate; Martin L. Gunn
OBJECTIVES: To develop a mathematical model for multi-category patient scheduling decisions in computed tomography (CT), and to investigate the associated tradeoffs from economic and operational perspectives.
METHODS: We modeled this decision problem as a finite-horizon Markov decision process (MDP) with expected net CT revenue as the performance metric. The performance of optimal policies was compared with five heuristics using data from an urban hospital. In addition to net revenue, other patient-throughput and service-quality metrics were also used in this comparative analysis.
RESULTS: The optimal policy had a threshold structure in the two-scanner case: it prioritized one type of patient when the queue length for that type exceeded a threshold. The net revenue gap between the optimal policy and the heuristics ranged from 5% to 12%. This gap was 4% higher in the more congested, single-scanner system than in the two-scanner system. With respect to the alternative performance metrics in the two-scanner case, the performance of the net-revenue-maximizing policy was similar to that of the heuristics. Under the optimal policy, the average number of patients not scanned by the end of the day and the average patient waiting time were both nearly 80% smaller in the two-scanner case than in the single-scanner case. The net revenue gap between the optimal policy and the priority-based heuristics was nearly 2% smaller than the gap with the first-come-first-served and random-selection schemes. Net revenue was most sensitive to inpatient (IP) penalty costs in the single-scanner system, and to IP and outpatient revenues in the two-scanner case.
CONCLUSIONS: The optimal policy is competitive with respect to the operational and economic metrics considered in this paper. Such a policy can be implemented relatively easily and could be tested in practice in the future. The priority-based heuristics are next best to the optimal policy and are much easier to implement.
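The threshold structure reported for the two-scanner case can be expressed as a simple decision rule like the Python sketch below. The patient-type names, the threshold value, and the default priority order are hypothetical and only illustrate the shape of such a policy; the paper's actual policy is computed from the MDP.

```python
def threshold_policy(queues, priority_type, threshold, default_order):
    """Return the patient type to scan next under a threshold rule: give
    priority_type priority once its queue exceeds the threshold, otherwise
    follow a default priority order. Returns None if all queues are empty.
    """
    if queues.get(priority_type, 0) > threshold:
        return priority_type
    for t in default_order:
        if queues.get(t, 0) > 0:
            return t
    return None

# hypothetical usage: prioritize inpatients once more than 3 are waiting
# threshold_policy({"IP": 4, "OP": 2, "ED": 0}, "IP", 3, ["ED", "OP", "IP"])
```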
Winter Simulation Conference | 2010
Yasin Gocgun; Archis Ghate
We define a class of discrete-time resource allocation problems where multiple renewable resources must be dynamically allocated to different types of jobs arriving randomly. Jobs have geometric service durations, demand resources, incur a holding cost while waiting in queue and a penalty cost of rejection when the queue is filled to capacity, and generate a reward on completion. The goal is to select which jobs to service in each time period so as to maximize the total infinite-horizon discounted expected profit. We present Markov decision process (MDP) models of these problems and apply a Lagrangian relaxation-based method that exploits the structure of the MDP models to approximate their optimal value functions. We then develop a dynamic programming technique to efficiently recover resource allocation decisions from this approximate value function on the fly. Numerical experiments demonstrate that these decisions outperform well-known heuristics by at least 35% and by as much as 220% on average.
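In a Lagrangian relaxation of this kind of weakly coupled MDP, the shared resource constraints are priced with multipliers so that each job type can be solved as a small independent MDP. The Python sketch below shows value iteration for one such priced subproblem; the array layout, discount factor, and pricing scheme are illustrative assumptions rather than the paper's exact decomposition.

```python
import numpy as np

def subproblem_value(P, r, resource_use, lam, gamma=0.95, iters=500):
    """Value iteration for one job-type subproblem after pricing out the
    shared resource constraints with Lagrange multipliers `lam`.

    P            : array [A, S, S], P[a, s, s'] transition probabilities
    r            : array [A, S], immediate rewards
    resource_use : array [A, R], resources consumed by each action
    lam          : array [R], nonnegative multipliers (resource prices)
    Returns the subproblem value function, array [S].
    """
    penalty = resource_use @ lam                    # price paid per action
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        Q = r - penalty[:, None] + gamma * (P @ V)  # Q[a, s]
        V = Q.max(axis=0)
    return V
```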
The Breast | 2015
Yasin Gocgun; Dragan Banjevic; Sharareh Taghipour; Bart J. Harvey; Andrew K. S. Jardine; Anthony B. Miller
In this paper, we study breast cancer screening policies using computer simulation. We developed a multi-state Markov model for breast cancer progression, considering both the screening and treatment stages of breast cancer. The parameters of our model were estimated from data from the Canadian National Breast Cancer Screening Study as well as data in the relevant literature. Using computer simulation, we evaluated various screening policies to study the impact of mammography screening for age-based subpopulations in Canada. We also performed sensitivity analysis to examine the impact of certain parameters on the number of deaths and total costs. The analysis comparing screening policies reveals that a policy in which women in the 40-49 age group are not screened, whereas those in the 50-59 and 60-69 age groups are screened once every 5 years, outperforms the others with respect to cost per life saved. Our analysis also indicates that increasing the screening frequencies for the 50-59 and 60-69 age groups decreases mortality, and that the average number of deaths generally decreases with an increase in screening frequency. We found that screening annually for all age groups is associated with the highest cost per life saved. Our analysis thus reveals that cost per life saved increases with an increase in screening frequency.
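A multi-state Markov progression model of this kind is typically evaluated by Monte Carlo simulation of individual disease histories. The Python sketch below simulates one such history from a per-cycle transition structure; the state names and the probabilities in the commented example are hypothetical placeholders, not estimates from the screening-study data.

```python
import random

def simulate_progression(trans, start="healthy", absorbing=("death",),
                         max_cycles=60, rng=random):
    """Simulate one disease history through a multi-state Markov model.

    trans : dict state -> list of (next_state, probability) pairs per cycle
    Returns the list of visited states.
    """
    path, state = [start], start
    for _ in range(max_cycles):
        if state in absorbing:
            break
        u, cum = rng.random(), 0.0
        for nxt, p in trans[state]:
            cum += p
            if u < cum:
                state = nxt
                break
        path.append(state)
    return path

# hypothetical transition structure (probabilities are placeholders):
# trans = {"healthy":     [("healthy", 0.98), ("preclinical", 0.02)],
#          "preclinical": [("preclinical", 0.70), ("clinical", 0.30)],
#          "clinical":    [("clinical", 0.90), ("death", 0.10)]}
```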
Health Care Management Science | 2018
Yasin Gocgun
We study a radiation therapy scheduling problem where dynamically and stochastically arriving patients of different types are scheduled to future days. Unlike similar models in the literature, we consider cancellation of treatments. We formulate this dynamic multi-appointment patient scheduling problem as a Markov decision process (MDP). Since the MDP is intractable due to large state and action spaces, we employ a simulation-based approximate dynamic programming (ADP) approach to approximately solve our model. In particular, we develop a least-squares-based approximate policy iteration algorithm for solving our model. The performance of the ADP approach is compared with that of a myopic heuristic decision rule.
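The policy-evaluation step inside a least-squares approximate policy iteration scheme is often a least-squares temporal-difference (LSTD) fit of the value function from simulated transitions, as in the generic Python sketch below. The feature map, discount factor, and ridge term are assumptions; this is not the paper's implementation.

```python
import numpy as np

def lstd_weights(transitions, phi, gamma=0.99, ridge=1e-6):
    """Least-squares temporal-difference fit of V(s) ~ phi(s) . w for a fixed
    policy, from transitions generated by simulating that policy.

    transitions : list of (s, r, s_next) tuples
    phi         : feature map, state -> 1-D numpy array
    """
    k = len(phi(transitions[0][0]))
    A = np.zeros((k, k))
    b = np.zeros(k)
    for s, r, s_next in transitions:
        f, fn = phi(s), phi(s_next)
        A += np.outer(f, f - gamma * fn)
        b += r * f
    return np.linalg.solve(A + ridge * np.eye(k), b)  # small ridge for stability
```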
Operations Research | 2015
Steven M. Shechter; Farhad Ghassemi; Yasin Gocgun; Martin L. Puterman
We consider the search for a target whose precise location is uncertain. The search region is divided into grid cells, and the searcher decides which cell to visit next and whether to search it quickly or slowly. A quick search of a cell containing the target may damage it, resulting in a failed search, or it may locate the target safely. If the target is not in the cell, the search continues over the remaining cells. If a slow search is performed on a cell, then the search ends in failure with a fixed probability regardless of whether or not the target is in that cell (e.g., because of enemy fire while performing the slow search). If the slow search survives this failure possibility, then the search ends in success if the target is in that cell; otherwise, the search continues over the remaining cells. We seek to minimize the probability of the search ending in failure and consider two types of rules for visiting cells: the unconstrained search, in which the searcher may visit cells in any order, and the constrained search, in which the searcher may only visit cells adjacent (e.g., up, down, left, or right) to cells already visited. We prove that the optimal policy for the unconstrained search is to quickly search some initial set of cells with the lowest probabilities of containing the target before slowly searching the remaining cells in decreasing order of probabilities. For the special case in which a quick search on a cell containing the target damages it with certainty, the optimal policy is to search all cells slowly, in decreasing order of probabilities. We use the optimal solution of the unconstrained search in a branch-and-bound algorithm that solves the constrained search optimally. For larger instances, we evaluate heuristics and approximate dynamic programming approaches for finding good solutions.
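Under one natural reading of this model, the success probability of any quick/slow schedule can be computed directly, and the structural result can be checked by enumerating how many of the lowest-probability cells to search quickly first. The Python sketch below does exactly that; the parameter names q (slow-search failure probability) and alpha (probability a quick search damages a target that is present) are assumptions introduced here, not notation from the paper.

```python
def success_probability(schedule, q, alpha):
    """Probability the search ends in success for a given schedule.

    schedule : list of (p, mode), mode in {"quick", "slow"}, where p is the
               prior probability that the target is in that cell
    q        : probability a slow search ends the search in failure
    alpha    : probability a quick search damages a target present in the cell
    """
    prob, survive = 0.0, 1.0   # survive = prob. no slow search has failed yet
    for p, mode in schedule:
        if mode == "quick":
            prob += survive * p * (1 - alpha)
        else:
            prob += survive * p * (1 - q)
            survive *= 1 - q
    return prob

def best_unconstrained_schedule(probs, q, alpha):
    """Enumerate schedules of the structural form described above: quick-search
    some number of lowest-probability cells first, then slow-search the rest in
    decreasing order of probability, and keep the best."""
    asc = sorted(probs)
    best = None
    for k in range(len(probs) + 1):
        schedule = ([(p, "quick") for p in asc[:k]] +
                    [(p, "slow") for p in sorted(asc[k:], reverse=True)])
        s = success_probability(schedule, q, alpha)
        if best is None or s > best[0]:
            best = (s, schedule)
    return best

# e.g. best_unconstrained_schedule([0.5, 0.3, 0.2], q=0.1, alpha=0.6)
```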
Operations Research Letters | 2017
Mahshid Salemi Parizi; Yasin Gocgun; Archis Ghate
We study non-preemptive scheduling problems where heterogeneous projects stochastically arrive over time. The projects include precedence-constrained tasks that require multiple resources. Incomplete projects are held in queues. When a queue is full, an arriving project must be rejected. The goal is to choose which tasks to start in each time-slot to maximize the infinite-horizon discounted expected profit. We provide a weakly coupled Markov decision process (MDP) formulation and apply a simulation-based approximate policy iteration method. Extensive numerical results are presented.
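In each time slot, a feasible action must respect both precedence and the shared resource capacities. The Python sketch below shows a greedy construction of such an action from the set of precedence-ready tasks; the data layout and the greedy rule are illustrative assumptions, not the policy produced by the approximate policy iteration method.

```python
def feasible_task_starts(ready_tasks, resource_caps, in_use, task_demand):
    """Greedily select precedence-ready tasks whose combined resource demand
    fits within the remaining capacity for the current time slot.

    ready_tasks   : iterable of task ids whose predecessors are complete
    resource_caps : dict resource -> total capacity
    in_use        : dict resource -> amount already committed this slot
    task_demand   : dict task id -> dict resource -> amount required
    """
    remaining = {r: resource_caps[r] - in_use.get(r, 0) for r in resource_caps}
    started = []
    for task in ready_tasks:
        need = task_demand[task]
        if all(need.get(r, 0) <= remaining[r] for r in remaining):
            started.append(task)
            for res, amount in need.items():
                remaining[res] -= amount
    return started
```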
Computers & Operations Research | 2012
Yasin Gocgun; Archis Ghate