Andrew C. Trapp
Worcester Polytechnic Institute
Publications
Featured research published by Andrew C. Trapp.
European Journal of Operational Research | 2009
Oleg A. Prokopyev; Sergiy Butenko; Andrew C. Trapp
This paper deals with the problems of checking strong solvability and feasibility of linear interval equations, checking weak solvability of linear interval equations and inequalities, and finding control solutions of linear interval equations. These problems are known to be NP-hard. We use some recently developed characterizations in combination with classical arguments to show that these problems can be equivalently stated as optimization tasks and provide the corresponding linear mixed 0-1 programming formulations.
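For context, whether a given point belongs to the weak solution set of an interval linear system can be checked directly via the classical Oettli–Prager inequality. The sketch below illustrates that standard characterization on made-up interval data; it is not the mixed 0-1 formulations developed in the paper:

```python
def is_weak_solution(x, A_center, A_radius, b_center, b_radius):
    """Oettli-Prager test: x solves A x = b for some A, b within the
    intervals [A_center +/- A_radius] and [b_center +/- b_radius]
    iff |A_center x - b_center| <= A_radius |x| + b_radius
    holds componentwise."""
    for i in range(len(A_center)):
        lhs = abs(sum(A_center[i][j] * x[j] for j in range(len(x))) - b_center[i])
        rhs = sum(A_radius[i][j] * abs(x[j]) for j in range(len(x))) + b_radius[i]
        if lhs > rhs + 1e-9:
            return False
    return True
```

For a near-identity interval matrix with radius 0.1 and right-hand side [1, 1], the point (1, 1) passes the test while (2, 0) fails it.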
Operations Research | 2013
Andrew C. Trapp; Oleg A. Prokopyev; Andrew J. Schaefer
We propose a level-set approach to characterize the value function of a pure linear integer program with inequality constraints. We study theoretical properties of our characterization and show how they can be exploited to optimize a class of stochastic integer programs through a value function reformulation. Specifically, we develop algorithmic approaches that solve two-stage multidimensional knapsack problems with random budgets, yielding encouraging computational results.
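To make the value-function idea concrete, the sketch below tabulates the value function V(b) of a toy 0-1 knapsack for every budget b by dynamic programming, then averages it over a discrete random budget (the expectation a two-stage knapsack with random budget would optimize). The data and the tabulation are illustrative only, not the level-set approach of the paper:

```python
def knapsack_value_function(weights, values, max_budget):
    """V[b] = optimal 0-1 knapsack value with capacity b, for b = 0..max_budget."""
    V = [0] * (max_budget + 1)
    for w, v in zip(weights, values):
        for b in range(max_budget, w - 1, -1):  # reverse scan: each item used once
            V[b] = max(V[b], V[b - w] + v)
    return V

def expected_value(V, budget_probs):
    """E[V(B)] for a discrete random budget B given as {budget: probability}."""
    return sum(p * V[b] for b, p in budget_probs.items())
```

For weights [2, 3] and values [3, 4], the value function over budgets 0..5 is [0, 0, 3, 4, 4, 7], and a budget that is 2 or 5 with equal probability yields an expected optimal value of 5.0.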
Informs Journal on Computing | 2010
Andrew C. Trapp; Oleg A. Prokopyev
In this paper we consider the order-preserving submatrix (OPSM) problem, which is known to be NP-hard. Although in recent years some heuristic methods have been presented to find OPSMs, they lack the guarantee of optimality. We present exact solution approaches based on linear mixed 0–1 programming formulations and develop algorithmic enhancements to aid solvability. Encouraging computational results are reported for both synthetic and real biological data. In addition, we discuss theoretical computational complexity issues related to finding fixed patterns in matrices.
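For intuition, an OPSM is a subset of rows together with an ordering of at least two columns along which every chosen row strictly increases. The exhaustive search below (feasible only for tiny matrices; the exact approaches in the paper scale far beyond this) makes the object concrete:

```python
from itertools import permutations

def largest_opsm(matrix):
    """Return (area, rows, col_order) for the order-preserving submatrix
    maximizing rows * columns, found by brute force over column orders."""
    n_cols = len(matrix[0])
    best = (0, (), ())
    for size in range(2, n_cols + 1):
        for cols in permutations(range(n_cols), size):
            # rows whose values strictly increase along this column order
            rows = tuple(i for i, row in enumerate(matrix)
                         if all(row[cols[k]] < row[cols[k + 1]]
                                for k in range(size - 1)))
            area = len(rows) * size
            if area > best[0]:
                best = (area, rows, cols)
    return best
```

For the matrix [[1, 2, 3], [3, 2, 1], [1, 3, 2]], the best OPSM covers two rows and two columns (area 4); no column order supports more than one full row of the 3-column matrix.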
Journal of Combinatorial Optimization | 2010
Andrew C. Trapp; Oleg A. Prokopyev; Stanislav Busygin
Biclustering is a data mining technique used to simultaneously partition the set of samples and the set of their attributes (features) into subsets (clusters). Samples and features clustered together are expected to be highly relevant to each other. In this paper we provide a new mathematical programming formulation for unsupervised biclustering. The proposed model involves the solution of a fractional 0–1 programming problem. A linear mixed 0–1 reformulation as well as two heuristic-based approaches are developed. Encouraging computational results on clustering real DNA microarray data sets are presented. In addition, we also discuss theoretical computational complexity issues related to biclustering.
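The fractional 0-1 objective can be pictured on a toy instance: maximize a ratio of two linear functions of binary variables. Exhaustive enumeration, as below, is practical only at toy scale; the linear reformulation in the paper is what makes realistic instances tractable. The coefficients here are made up for illustration:

```python
from itertools import product

def max_ratio(a, b, b0):
    """Maximize (sum a_i x_i) / (b0 + sum b_i x_i) over x in {0,1}^n
    by enumeration; b0 > 0 keeps the denominator positive."""
    best_x, best_val = None, float("-inf")
    for x in product((0, 1), repeat=len(a)):
        num = sum(ai * xi for ai, xi in zip(a, x))
        den = b0 + sum(bi * xi for bi, xi in zip(b, x))
        if num / den > best_val:
            best_x, best_val = x, num / den
    return best_x, best_val
```

With a = [3, 1], b = [2, 0], b0 = 1, the maximizer is x = (1, 1) with ratio 4/3, beating the single-variable choices that each achieve ratio 1.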
Bioinformatics | 2010
Nuri A. Temiz; Andrew C. Trapp; Oleg A. Prokopyev; Carlos J. Camacho
Motivation: A major limitation in modeling protein interactions is the difficulty of assessing the over-fitting of the training set. Recently, an experimentally based approach that integrates crystallographic information of C2H2 zinc finger–DNA complexes with binding data from 11 mutants, 7 from EGR finger I, was used to define an improved interaction code (no optimization). Here, we present a novel mixed integer programming (MIP)-based method that transforms this type of data into an optimized code, demonstrating both the advantages of the mathematical formulation to minimize over- and under-fitting and the robustness of the underlying physical parameters mapped by the code. Results: Based on the structural models of feasible interaction networks for 35 mutants of EGR–DNA complexes, the MIP method minimizes the cumulative binding energy over all complexes for a general set of fundamental protein–DNA interactions. To guard against over-fitting, we use the scalability of the method to probe against the elimination of related interactions. From an initial set of 12 parameters (six hydrogen bonds, five desolvation penalties and a water factor), we proceed to eliminate five of them with only a marginal reduction of the correlation coefficient to 0.9983. Further reduction of parameters negatively impacts the performance of the code (under-fitting). Besides accurately predicting the change in binding affinity of validation sets, the code identifies possible context-dependent effects in the definition of the interaction networks. Yet, the approach of constraining predictions to within a pre-selected set of interactions limits the impact of these potential errors to related low-affinity complexes.
European Journal of Operational Research | 2017
Renata Konrad; Andrew C. Trapp; Timothy Palmbach; Jeffrey S. Blom
Human trafficking is a complex transnational problem for society and the global economy. While researchers have studied this topic in a variety of contexts, including the criminology, sociology, and clinical domains, there has been little coverage in the operations research (OR) and analytics community. This paper highlights how techniques from OR and analytics can address the growing issue of human trafficking. We describe some of the unique concerns, problems, and challenges of human trafficking in relation to analytical techniques; subsequently, we demonstrate a variety of ways that OR and analytics can be applied in the human trafficking domain.
International Conference on Universal Access in Human-Computer Interaction | 2016
Mina Shojaeizadeh; Soussan Djamasbi; Andrew C. Trapp
The use of eye movements to study cognitive effort is becoming increasingly important in HCI research. Eye movements are natural and frequently occurring human behavior. In particular, fixations represent attention: people look at something when they want to acquire information from it. Users also tend to cluster their attention on informative regions of a visual stimulus. Thus, fixation duration is often used to measure attention and cognitive processing. Additionally, parameters such as pupil dilation and fixation duration have also been shown to be representative of information processing. In this study we argue that fixation density, defined as the number of gaze points divided by the total area of a fixation event, can serve as a proxy for information processing. As such, fixation density has a significant relationship with pupil data and fixation duration, which have been shown to be representative of cognitive effort and information processing.
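As a concrete illustration of the proposed measure, the sketch below computes fixation density as the number of gaze points divided by the area those points cover. Using the bounding box of the gaze points as the area is an assumption made here for simplicity, not necessarily the study's exact operationalization:

```python
def fixation_density(gaze_points):
    """Density of one fixation event: point count divided by the
    bounding-box area (in squared pixels) of its (x, y) gaze points."""
    xs = [p[0] for p in gaze_points]
    ys = [p[1] for p in gaze_points]
    area = (max(xs) - min(xs)) * (max(ys) - min(ys))
    if area == 0:
        return float("inf")  # degenerate: all points on a line or a single spot
    return len(gaze_points) / area
```

Four gaze points at the corners of a 2x2-pixel square give a density of 4 points / 4 px^2 = 1.0.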
IIE Transactions | 2015
Andrew C. Trapp; Renata Konrad
Typical output from an optimization solver is a single optimal solution. There are contexts, however, where a set of high-quality and diverse solutions may be beneficial; for example, problems involving imperfect information, or those for which the structure of high-quality solution vectors can reveal meaningful insights. In view of this, we discuss a novel method to obtain multiple diverse optimal and near-optimal solutions to pure binary (0–1) integer programs, employing fractional programming techniques to manage these typically competing goals. Specifically, we develop a general approach that uses Dinkelbach's algorithm to sequentially generate solutions that evaluate well with respect to both (i) individual performance and (ii), as a whole, mutual variety. We assess the performance of our approach on a number of MIPLIB test instances from the literature. Using two diversity metrics, computational results show that our method provides an efficient way to optimize the fractional objective while sequentially generating multiple high-quality and diverse solutions.
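Dinkelbach's algorithm, which the paper builds on, maximizes a ratio N(x)/D(x) by repeatedly solving the parametric problem max N(x) - lambda D(x) and updating lambda to the incumbent ratio. A minimal sketch on a binary toy instance follows, with plain enumeration standing in for the inner integer-programming solve (the instance data are made up):

```python
from itertools import product

def dinkelbach(num_coeffs, den_coeffs, den_const, tol=1e-9):
    """Maximize (sum n_i x_i) / (den_const + sum d_i x_i) over x in {0,1}^n.
    Each iteration solves max N(x) - lam * D(x), here by enumeration."""
    n = len(num_coeffs)
    lam = 0.0
    while True:
        best_x, best_f = None, float("-inf")
        for x in product((0, 1), repeat=n):
            N = sum(c * xi for c, xi in zip(num_coeffs, x))
            D = den_const + sum(c * xi for c, xi in zip(den_coeffs, x))
            f = N - lam * D
            if f > best_f:
                best_x, best_f = x, f
        if abs(best_f) < tol:  # F(lam) = 0 exactly when lam is the optimal ratio
            return best_x, lam
        N = sum(c * xi for c, xi in zip(num_coeffs, best_x))
        D = den_const + sum(c * xi for c, xi in zip(den_coeffs, best_x))
        lam = N / D  # update lambda to the incumbent ratio
```

On the instance with numerator coefficients [3, 1], denominator coefficients [2, 0], and constant 1, the iteration converges in two rounds to x = (1, 1) with ratio 4/3.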
Discrete Optimization | 2015
Andrew C. Trapp; Oleg A. Prokopyev
We consider a class of two-stage stochastic integer programs and their equivalent reformulation that uses the integer programming value functions in both stages. One class of solution methods in the literature is based on the idea of pre-computing and storing exact value functions, and then exploiting this information within a global branch-and-bound framework. Such methods are known to be very sensitive to the magnitude of feasible right-hand side values. In this note we propose a simple constraint-aggregation based approach that potentially alleviates this limitation.
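The flavor of constraint aggregation can be seen on a toy instance: over a bounded box of nonnegative integers, two equality constraints can be folded into a single equality using a sufficiently large multiplier without changing the feasible set. The brute-force check below verifies that equivalence for the textbook form of aggregation (not the paper's specific scheme; data are illustrative):

```python
from itertools import product

def feasible_sets_match(A, b, bounds, multiplier):
    """Check that {x : A x = b} equals {x : sum_i m^i (A_i x) = sum_i m^i b_i}
    over the integer box 0..bounds[j] in each coordinate, for multiplier m."""
    m_pows = [multiplier ** i for i in range(len(A))]
    agg_row = [sum(mp * A[i][j] for i, mp in enumerate(m_pows))
               for j in range(len(A[0]))]
    agg_rhs = sum(mp * b[i] for i, mp in enumerate(m_pows))
    for x in product(*(range(u + 1) for u in bounds)):
        orig = all(sum(A[i][j] * x[j] for j in range(len(x))) == b[i]
                   for i in range(len(A)))
        agg = sum(agg_row[j] * x[j] for j in range(len(x))) == agg_rhs
        if orig != agg:
            return False
    return True
```

For A = [[1, 1, 0], [0, 1, 1]], b = [2, 2] over the box {0, 1, 2}^3, multiplier 5 exceeds the largest possible row deviation (2), so the aggregated constraint is equivalent; multiplier 1 admits spurious points such as x = (0, 1, 2) and is not.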
Advances in Geographic Information Systems | 2016
Bo Lyu; Shijian Li; Yanhua Li; Jie Fu; Andrew C. Trapp; Haiyong Xie; Yong Liao
The fast pace of global urbanization is drastically changing the population distributions over the world, which leads to significant changes in geographical population densities. Such changes in turn alter the underlying geographical power demand over time, and drive power substations to become over-supplied (demand << capacity) or under-supplied (demand ≈ capacity). In this paper, we make the first attempt to investigate the problem of power substation-user assignment by analyzing large-scale power grid data. We develop a Scalable Power User Assignment (SPUA) framework that takes large-scale spatial power user/substation distribution data and temporal user power consumption data as input, and assigns users to substations in a manner that minimizes the maximum substation utilization among all substations. To evaluate the performance of our SPUA framework, we conduct evaluations on real power consumption data and user/substation location data collected from a province in China over 35 days in 2015. The evaluation results demonstrate that our SPUA framework can achieve a 20%–65% reduction in the maximum substation utilization, and a 2–3.7× reduction in total transmission loss, compared with baseline methods.
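The assignment objective itself (minimize the maximum utilization over all substations) can be stated on a toy instance. Exhaustive search below stands in for the scalable SPUA framework, and the demand/capacity figures are made up for illustration:

```python
from itertools import product

def min_max_utilization(demands, capacities):
    """Assign each user (a demand) to one substation so that the maximum
    utilization (assigned demand / capacity) is minimized; brute force."""
    n_subs = len(capacities)
    best_assign, best_util = None, float("inf")
    for assign in product(range(n_subs), repeat=len(demands)):
        load = [0.0] * n_subs
        for user, sub in enumerate(assign):
            load[sub] += demands[user]
        util = max(load[s] / capacities[s] for s in range(n_subs))
        if util < best_util:
            best_assign, best_util = assign, util
    return best_assign, best_util
```

With user demands [4, 3, 2] and substation capacities [10, 5], balancing users 1 and 3 onto the large substation and user 2 onto the small one equalizes both utilizations at 0.6, which is optimal.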