
Publication


Featured research published by John Semple.


Management Science | 2006

Quality-Based Competition, Profitability, and Variable Costs

Chester Chambers; Panos Kouvelis; John Semple

We consider the impact of variable production costs on competitive behavior in a duopoly where manufacturers compete on quality and price in a two-stage game. In the pricing stage, we make no assumptions regarding these costs---other than that they are positive and increasing in quality---and no assumptions about whether or not the market is covered. In the quality stage, we investigate a broad family of variable cost functions and show how the shape of these functions impacts equilibrium product positions, profits, and market coverage. We find that seemingly slight changes to the cost function's curvature can produce dramatically different equilibrium outcomes, including the degree of quality differentiation, which competitor is more profitable (the one offering higher or lower quality), and the nature of the market itself (covered or uncovered). Our model helps to predict and explain the diversity of outcomes we see in practice---something the previous literature has been unable to do.


Operations Research | 2006

Optimal Inventory Policy with Two Suppliers

Edward J. Fox; Richard Metters; John Semple

We analyze a periodic-review inventory model where the decision maker can buy from either of two suppliers. With the first supplier, the buyer incurs a high variable cost but negligible fixed cost; with the second supplier, the buyer incurs a lower variable cost but a substantial fixed cost. Consequently, ordering costs are piecewise linear and concave. We show that a reduced form of generalized (s, S) policy is optimal for both finite and (discounted) infinite-horizon problems, provided that the demand density is log-concave. This condition on the distribution is much less restrictive than in previous models. In particular, it applies to the normal, truncated normal, gamma, and beta distributions, which were previously excluded. We concentrate on the situation in which sales are lost, but explain how the policy, cost assumptions, and proofs can be altered for the case where excess demand is backordered. In the lost sales case, the optimal policy will have one of three possible forms: a base stock policy for purchasing exclusively at the high variable cost (HVC) supplier; an (s_LVC, S_LVC) policy for buying exclusively from the low variable cost (LVC) supplier; or a hybrid (s, S_HVC, S_LVC) policy for buying from both suppliers.
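As an illustration, the hybrid policy form can be sketched as a single decision rule. This is a minimal sketch of one plausible reading of the policy's structure; the function name, threshold values, and the exact boundary behavior are hypothetical, not taken from the paper.

```python
def two_supplier_order(x, s, S_hvc, S_lvc):
    """Sketch of a hybrid (s, S_HVC, S_LVC) ordering rule for initial
    inventory x, assuming s < S_hvc <= S_lvc (hypothetical thresholds).

    Returns a (supplier, quantity) pair:
      - at or below the reorder point s, pay the fixed cost and order up
        to S_lvc from the low-variable-cost (LVC) supplier;
      - between s and S_hvc, top up to S_hvc from the high-variable-cost
        (HVC) supplier, which charges no fixed cost;
      - at or above S_hvc, order nothing.
    """
    if x <= s:
        return ("LVC", S_lvc - x)
    if x < S_hvc:
        return ("HVC", S_hvc - x)
    return (None, 0)
```

The two pure regimes described in the abstract correspond to degenerate threshold choices: a very low s leaves only the HVC base-stock region active, while collapsing the HVC region leaves the pure (s_LVC, S_LVC) rule.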


Journal of Quality Technology | 1997

The Computation of Global Optima in Dual Response Systems

Enrique Castillo; Shu-Kai Fan; John Semple

This paper presents an ANSI FORTRAN implementation of a new algorithm for the global optimization of dual response systems within a spherical region of interest. The algorithm, DRSALG, is a new computational method that guarantees, under conditions that..


European Journal of Operational Research | 1999

Optimization of dual response systems: A comprehensive procedure for degenerate and nondegenerate problems

Enrique Castillo; Shu-Kai Fan; John Semple

Most dual response systems (DRSs) arising in response surface modeling can be approximated using a nonlinear (and typically nonconvex) mathematical program involving two quadratic functions. One of the quadratic functions is used as the objective function, the other for imposing a target constraint. This paper describes an effective heuristic for computing global (or near-global) optimal solutions for this type of problem. The first part of the paper addresses the special case of degeneracy, a condition that makes the system more difficult to solve. Included are means for detecting degeneracy as well as issues relating to its likelihood in practice. The subsequent part of the paper describes our new procedure, AXIS, which rotates a degenerate problem and then decomposes it into a finite sequence of nondegenerate subproblems of lower dimension. The nondegenerate subproblems are solved using the algorithm DRSALG developed earlier. In the final parts of the paper, the AXIS and DRSALG algorithms are integrated into a single dual response solver termed DR2. DR2 is tested against two nonlinear optimization procedures that have been used frequently in dual response applications. The new solver proves to be extremely effective at locating best-practice operating conditions.
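In generic symbols (not the paper's notation), the class of programs described above, posed over the spherical region of interest that DRSALG assumes, can be written as:

```latex
\begin{aligned}
\min_{x \in \mathbb{R}^{k}} \quad & \hat{y}_{s}(x) \;=\; b_{0} + x^{\top} b + x^{\top} B\, x \\
\text{s.t.} \quad & \hat{y}_{p}(x) \;=\; a_{0} + x^{\top} a + x^{\top} A\, x \;=\; T, \\
& x^{\top} x \;\le\; r^{2},
\end{aligned}
```

where \(\hat{y}_{p}\) and \(\hat{y}_{s}\) are the fitted primary and secondary quadratic responses, \(T\) is the target value, and \(r\) is the radius of the region of interest. Nonconvexity arises because the matrices \(A\) and \(B\) need not be positive semidefinite.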


Operations Research | 2016

The Exponomial Choice Model: A New Alternative for Assortment and Price Optimization

Aydin Alptekinoglu; John Semple

We investigate the use of a canonical version of a discrete choice model due to Daganzo (1979) in optimal pricing and assortment planning. In contrast to multinomial and nested logit (the prevailing choice models used for optimizing prices and assortments), this model assumes a negatively skewed distribution of consumer utilities, an assumption we motivate by conceptual arguments as well as published work. The choice probabilities in this model can be derived in closed form as an exponomial (a linear function of exponential terms). The pricing and assortment planning insights we obtain from the Exponomial Choice (EC) model differ from the literature in two important ways. First, the EC model allows variable markups in optimal prices that increase with expected utilities. Second, when prices are exogenous, the optimal assortment may exhibit leapfrogging in prices, i.e., a product can be skipped in favor of a lower-priced one depending on the utility positions of neighboring products. These two plausible pricing and assortment patterns are ruled out by multinomial logit (and by nested logit within each nest). We provide structural results on optimal pricing for monopoly and oligopoly cases, and on the optimal assortments for both exogenous and endogenous prices. We also demonstrate how the EC model can be easily estimated---by establishing that the log-likelihood function is concave in model parameters and detailing an estimation example using real data.
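Although the paper derives the choice probabilities in closed form, the model's core assumption is easy to illustrate by simulation. In this sketch (function names and parameters are ours, not the paper's), each option's realized utility is its mean utility minus an i.i.d. exponential error, which makes the utility distribution negatively skewed; choice shares are then estimated by Monte Carlo:

```python
import random

def ec_choice_shares(mean_utils, lam=1.0, n_sims=50_000, seed=7):
    """Estimate choice shares when the utility of option j is
    mean_utils[j] - eps_j, with eps_j i.i.d. exponential(rate=lam),
    i.e., a negatively skewed utility distribution."""
    rng = random.Random(seed)
    counts = [0] * len(mean_utils)
    for _ in range(n_sims):
        # Draw realized utilities and record which option wins.
        draws = [u - rng.expovariate(lam) for u in mean_utils]
        counts[max(range(len(draws)), key=draws.__getitem__)] += 1
    return [c / n_sims for c in counts]

# Three options with increasing mean utility: estimated shares increase too.
shares = ec_choice_shares([0.0, 0.5, 1.0])
```

Options with higher mean utility capture larger shares; in practice the paper's closed-form exponomial probabilities make simulation unnecessary.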


IIE Transactions | 2006

A heuristic for multi-item production with seasonal demand

Michael E. Ketzenberg; Richard Metters; John Semple

A heuristic is developed for a common production/inventory problem characterized by multiple products, stochastic seasonal demand, lost sales, and a constraint on overall production. Heuristics are needed since the calculation of optimal policies is impractical for real-world instances of this problem. The proposed heuristic is compared with those in current use as well as optimal solutions under a variety of conditions. The proposed heuristic is both near optimal and superior to existing heuristics. The heuristic deviated from optimality by an average of 1.7% in testing using dynamic programming as a benchmark. This compares favorably against linear-programming-based heuristics and practitioner heuristics, which deviated from optimality by 4.5 to 10.6%.


Management Science | 2010

A Dynamic Inventory Model with the Right of Refusal

Sreekumar R. Bhaskaran; John Semple

We consider a dynamic inventory (production) model with general convex order (production) costs and excess demand that can be accepted or refused by the firm. Excess demand that is accepted is backlogged and results in a backlog cost whereas demand that is refused results in a lost sales charge. Endogenizing the sales decision is appropriate in the presence of general convex order costs so that the firm is not forced to backlog a unit whose subsequent satisfaction would reduce total profits. In each period, the firm must determine the optimal order and sales strategy. We show that the optimal policy is characterized by an optimal buy-up-to level that increases with the initial inventory level and an order quantity that decreases with the initial inventory level. More importantly, we show the optimal sales strategy is characterized by a critical threshold, a backlog limit, that dictates when to stop selling. This threshold is independent of the initial inventory level and the amount purchased. We investigate various properties of this new policy. As demand stochastically increases, the amount purchased increases but the amount backlogged decreases, reflecting a shift in the way excess demand is managed. We develop two regularity conditions, one that ensures some backlogs are allowed in each period, and another that ensures the amount backlogged is nondecreasing in the length of the planning horizon. We illustrate the buy-up-to levels in our model are bounded above by buy-up-to levels from the pure lost sales and pure backlogging models. We explore additional extensions using numerical experiments.
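The sales side of the policy is simple to state. As a minimal sketch (the function and parameter names are ours, and the threshold value is hypothetical): after ordering, the firm accepts incoming demand only until the backlog reaches the critical threshold, and refuses the rest.

```python
def sales_decision(y, demand, b):
    """Accept demand until the backlog reaches the limit b.

    y      : post-order inventory (negative if already backlogged)
    demand : demand arriving this period
    b      : backlog limit, the critical threshold that dictates
             when to stop selling (hypothetical value)

    Returns (accepted, refused). Accepted units beyond on-hand stock
    are backlogged; refused units incur the lost-sales charge.
    """
    accepted = max(0, min(demand, y + b))
    refused = demand - accepted
    return accepted, refused
```

Note that the stopping rule depends only on the inventory position relative to -b, matching the abstract's observation that the threshold is independent of the initial inventory level and the amount purchased.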


Communications of the ACM | 2010

Data mining and revenue management methodologies in college admissions

Surya Rebbapragada; Amit Basu; John Semple

The competition for college admissions grows fiercer each year, with most colleges receiving record numbers of applications and hence becoming increasingly selective. The acceptance rate at some elite colleges is as low as 10%, and the uncertainty often causes talented students to apply to schools in the next tier. Students try to improve their chances of getting into a college by applying to multiple schools, each with its own timelines and deadlines for admissions. Consequently, students are often caught in a dilemma when they run out of time to accept an offer from a university that is lower on their priority list before they know the decision from a university that they value more. The college admissions process is thus extremely stressful and unpredictable for both students and their parents. Universities, on the other hand, usually receive far more applications than their capacity. They consider various factors in making their decisions, with each university using its own process and timelines. A university typically relies on a weighted set of performance indicators to aid the decision-making process. These performance indicators and their associated weights are often based on a best-guess approach relying mostly on past experience. Because not all admission offers are accepted, universities send out more offer letters than their capacity and hope that the best students accept their offers. Figure 1 shows a step-by-step sequence of events in a typical university admission process. The sequence describes a scenario that results in an unfavorable outcome for both the university and the student. The student applies to two different universities and prefers one over the other (Step A), with the priority 1 university on the far right of the figure. Each university evaluates the application, and the priority 2 university makes an early offer along with a deadline to accept it (Step B).
The student, uncertain about the priority 1 university, accepts the offer from the priority 2 university (Step C), possibly committing some funds. At a later date, the priority 1 university decides to accept the student (Step D), who may no longer be available. This process typically spans a number of months, is fraught with uncertainty, and results in a lose-lose situation for the priority 1 university and the student. There are two challenges in the admissions process exemplified above: (i) the process of identifying the best applicants involves multiple credentials; given the complex interactions between these credentials, it is not easy to identify a single model that is effective for this selection process, and given the competitive nature of university admissions, there are no normative models in the literature; (ii) once the most desirable candidates are identified, the decision to make an offer, and the composition of that offer, are both difficult. Better candidates are likely to be sought by multiple schools, so the university has to trade off the risks of chasing (and still losing) these students against the better chances of getting the next tier of students. Furthermore, in many universities, some admission decisions and offers may have to be made before all applications are received. We believe that data mining and revenue management techniques can be used effectively to address both these challenges, and thus convert the lose-lose situation into a win-win situation. By applying these techniques, universities can methodically score an applicant and respond almost immediately with an offer, mitigating prolonged uncertainty while increasing transparency. We demonstrate the approach using a simplified admissions process. Although individual universities may have additional, and possibly subjective, features in their admissions processes, we believe that our approach could be adapted to the specific processes of many universities and colleges.


Marketing Science | 2009

Optimal Category Pricing with Endogenous Store Traffic

Edward J. Fox; Steven Postrel; John Semple

We propose a dynamic programming framework for retailers of frequently purchased consumer goods in which prices affect both the profit per visit in the current period and the number of visitors (i.e., store traffic) in future periods. We show that optimal category prices in the infinite-horizon problem also maximize the closed-form sum of a geometric series, allowing us to derive meaningful analytical results. Modeling the linkage between category prices and future store traffic fundamentally changes optimal pricing policy. Optimal pricing must balance current profits against future traffic; under general conditions, optimal long-run prices are uniformly lower across all categories than those that maximize current profits. This result explains the empirical generalization that category demand in grocery stores is inelastic. Parameterizing profit per visit and store traffic reveals that, as future traffic becomes more sensitive to price, retailers should increasingly lower current prices and sacrifice current profits. We also determine how the burden of drawing future traffic to the store should be distributed across categories; this is the foundation for a new taxonomy of category roles.
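One plausible reading of the geometric-series structure, using our own symbols and functional form rather than the paper's: if a stationary price vector \(p\) yields profit \(\pi(p)\) per visit and multiplies store traffic by a factor \(g(p)\) each period, then with initial traffic \(n_{0}\) and discount factor \(\beta\) the infinite-horizon profit is the geometric sum

```latex
V(p) \;=\; \sum_{t=0}^{\infty} \beta^{t}\, n_{0}\, g(p)^{t}\, \pi(p)
\;=\; \frac{n_{0}\, \pi(p)}{1 - \beta\, g(p)},
\qquad \beta\, g(p) < 1 ,
```

so maximizing the closed-form sum trades current profit per visit \(\pi(p)\) against the future-traffic factor \(g(p)\), which is consistent with optimal long-run prices sitting below the myopic profit-maximizing prices.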


SMU Cox: Marketing (Topic) | 2008

A General Approximation to the Distribution of Count Data with Applications to Inventory Modeling

Edward J. Fox; Bezalel Gavish; John Semple

We derive a general approximation to the distribution of count data based on the first two moments of the underlying interarrival distribution. The result is a variant of the Birnbaum-Saunders (BISA) distribution. This distribution behaves like the lognormal in several respects; however, we show that the BISA can fit both simulated and empirical data better than the lognormal and that the BISA possesses additive properties that the lognormal does not. This results in computational advantages for operational models that involve summing random variables. Moreover, although the BISA can be fit to count data (as we demonstrate empirically), it can also be fit directly to transaction-level interarrival data. This provides a simple, practical way to sidestep distributional fitting problems that arise from count data that is censored by inventory stockouts. In numerical experiments involving dynamic inventory models, we compare the BISA distribution to other commonly used distributions and show how it leads to better managerial decisions.
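For reference, the standard Birnbaum-Saunders CDF can be evaluated with nothing more than the normal CDF. The sketch below implements that baseline form only; the paper uses a variant of it, and the moment-matching step mapping interarrival moments to (alpha, beta) is omitted here.

```python
import math

def std_normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bisa_cdf(t, alpha, beta):
    """CDF of the Birnbaum-Saunders (BISA) distribution with shape
    alpha > 0 and scale beta > 0, evaluated at t."""
    if t <= 0.0:
        return 0.0
    return std_normal_cdf((math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha)
```

The scale parameter beta is the distribution's median: bisa_cdf(beta, alpha, beta) equals 0.5 for any alpha, since the standardized argument vanishes at t = beta.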

Collaboration


Dive into John Semple's collaborations.

Top Co-Authors

Edward J. Fox

Southern Methodist University

Aydin Alptekinoglu

Southern Methodist University


Joseph Sarkis

Worcester Polytechnic Institute


Ahmed Hassan

Southern Methodist University


Amit Basu

Southern Methodist University


Aruna Apte

Southern Methodist University


Bezalel Gavish

Southern Methodist University
