Publication


Featured research published by Erick Moreno-Centeno.


European Journal of Operational Research | 2013

A More Efficient Algorithm for Convex Nonparametric Least Squares

Chia Yen Lee; Andrew L. Johnson; Erick Moreno-Centeno; Timo Kuosmanen

Convex Nonparametric Least Squares (CNLS) is a nonparametric regression method that does not require a priori specification of the functional form. The CNLS problem is solved by mathematical programming techniques; however, since the CNLS problem size grows quadratically as a function of the number of observations, standard quadratic programming (QP) and Nonlinear Programming (NLP) algorithms are inadequate for handling large samples, and the computational burdens become significant even for relatively small samples. This study proposes a generic algorithm that improves the computational performance in small samples and is able to solve problems that are currently unattainable. A Monte Carlo simulation is performed to evaluate the performance of six variants of the proposed algorithm. These experimental results indicate that the most effective variant can be identified given the sample size and the dimensionality. The computational benefits of the new algorithm are demonstrated by an empirical application that proved insurmountable for the standard QP and NLP algorithms.
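The quadratic growth mentioned in the abstract comes from the pairwise concavity (Afriat-type) constraints: the standard CNLS quadratic program imposes one inequality per ordered pair of observations. The sketch below is our illustration, not code from the paper; function names are invented, and it assumes a concave fit with variables ordered as [α_1..α_n, vec(β)].

```python
import numpy as np

def cnls_constraint_count(n: int) -> int:
    """Number of Afriat-type concavity constraints in the standard
    CNLS QP: one per ordered pair (i, j), i != j, hence n*(n-1)."""
    return n * (n - 1)

def build_cnls_constraints(X: np.ndarray) -> np.ndarray:
    """Illustrative only: dense coefficient matrix of the concavity
    constraints  alpha_i + beta_i'x_i <= alpha_j + beta_j'x_i  for a
    concave CNLS fit.  Variables are ordered as
    [alpha_1 .. alpha_n, beta_1 .. beta_n (row-major)]."""
    n, d = X.shape
    rows = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            row = np.zeros(n + n * d)
            row[i] += 1.0                              # +alpha_i
            row[j] -= 1.0                              # -alpha_j
            row[n + i * d:n + (i + 1) * d] += X[i]     # +beta_i' x_i
            row[n + j * d:n + (j + 1) * d] -= X[i]     # -beta_j' x_i
            rows.append(row)
    return np.array(rows)      # shape: (n*(n-1), n + n*d)

A = build_cnls_constraints(np.random.rand(10, 2))
```

Doubling the sample size roughly quadruples the number of constraints, which is why off-the-shelf QP and NLP solvers bog down so quickly on CNLS.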


IEEE Transactions on Power Systems | 2014

Topology Control for Load Shed Recovery

Adolfo R. Escobedo; Erick Moreno-Centeno; Kory W. Hedman

This paper introduces load shed recovery actions for transmission networks by presenting the dc optimal load shed recovery with transmission switching model (DCOLSR-TS). The model seeks to reduce the amount of load shed, which may result due to transmission line and/or generator contingencies, by modifying the bulk power system topology. Since solving DCOLSR-TS is computationally difficult, the current work also develops a heuristic (MIP-H), which improves the system topology while specifying the required sequence of switching operations. Experimental results on a list of N-1 and N-2 critical contingencies of the IEEE 118-bus test case demonstrate the advantages of utilizing MIP-H for both online load shed recovery and recurring contingency-response analysis. This is reinforced by the introduction of a parallelized version of the heuristic (Par-MIP-H), which solves the list of critical contingencies close to 5x faster than MIP-H with 8 cores and up to 14x faster with increased computational resources. The current work also tests MIP-H on a real-life, large-scale network in order to measure the computational performance of this tool on a real-world implementation.


Operations Research | 2013

The Implicit Hitting Set Approach to Solve Combinatorial Optimization Problems with an Application to Multigenome Alignment

Erick Moreno-Centeno; Richard M. Karp

We develop a novel framework, the implicit hitting set approach, for solving a class of combinatorial optimization problems. The explicit hitting set problem is as follows: given a set U and a family S of subsets of U, find a minimum-cardinality set that intersects (hits) every set in S. In the implicit hitting set problem (IHSP), the family of subsets S is not explicitly listed (its size is, generally, exponential in terms of the size of U); instead, it is given via a polynomial-time oracle that verifies if a given set H is a hitting set or returns a set in S that is not hit by H. Many NP-hard problems can be straightforwardly formulated as implicit hitting set problems. We show that the implicit hitting set approach is valuable in developing exact and heuristic algorithms for solving this class of combinatorial optimization problems. Specifically, we provide a generic algorithmic strategy, which combines efficient heuristics and exact methods, to solve any IHSP. Given an instance of an IHSP, the proposed algorithmic strategy gives a sequence of feasible solutions and lower bounds on the optimal solution value and ultimately yields an optimal solution. We specialize this algorithmic strategy to solve the multigenome alignment problem and present computational results that illustrate the effectiveness of the implicit hitting set approach.
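The generic strategy described above can be sketched as an oracle loop. In the toy below, a brute-force solver stands in for the paper's combination of efficient heuristics and exact methods, and the oracle is backed by an explicit family purely for illustration; in a real IHSP the oracle is a problem-specific polynomial-time verifier.

```python
from itertools import combinations

def min_hitting_set(subsets, universe):
    """Exact minimum hitting set by brute force (fine only for tiny
    instances; the paper uses efficient heuristics and exact methods)."""
    for k in range(len(universe) + 1):
        for cand in combinations(sorted(universe), k):
            s = set(cand)
            if all(s & sub for sub in subsets):
                return s
    return set(universe)

def implicit_hitting_set(universe, oracle):
    """Generic IHS loop: maintain a working family of known subsets,
    solve the explicit hitting-set problem over it (a lower bound), and
    ask the oracle either to certify the solution or to return a subset
    it misses.  Termination certifies optimality."""
    known = []
    while True:
        h = min_hitting_set(known, universe)
        missed = oracle(h)      # None => h hits every (implicit) subset
        if missed is None:
            return h
        known.append(missed)

# Illustration only: an oracle backed by an explicit family.
family = [{1, 2}, {2, 3}, {3, 4}]
def oracle(h):
    for sub in family:
        if not (h & sub):
            return sub
    return None

sol = implicit_hitting_set({1, 2, 3, 4}, oracle)
```

Each oracle call either certifies the current (lower-bound) solution or grows the explicit family, so the loop interleaves feasibility checks with progressively tighter bounds, exactly the sequence of feasible solutions and lower bounds the abstract describes.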


Operations Research | 2011

Rating Customers According to Their Promptness to Adopt New Products

Dorit S. Hochbaum; Erick Moreno-Centeno; Phillip Yelland; Rodolfo A. Catena

Databases are a significant source of information in organizations and play a major role in managerial decision-making. This study considers how to process commercial data on customer purchasing timing to provide insights on the rate of new product adoption by the company's consumers. Specifically, we show how to use the separation-deviation model (SD-model) to rate customers according to their proclivity for adopting products for a given line of high-tech products. We provide a novel interpretation of the SD-model as a unidimensional scaling technique and show that, in this context, it outperforms several dimension-reduction and scaling techniques. We analyze the results with respect to various dimensions of the customer base and report on the generated insights.


IEEE Transactions on Automation Science and Engineering | 2017

Matching Misaligned Two-Resolution Metrology Data

Yaping Wang; Erick Moreno-Centeno; Yu Ding

Multiresolution metrology devices coexist in today’s manufacturing environment, producing coordinate measurements complementing each other. Typically, the high-resolution (HR) device produces a scarce but accurate data set, whereas the low-resolution (LR) one produces a dense but less accurate data set. Research has shown that combining the two data sets of different resolutions makes better predictions of the geometric features of a manufactured part. A challenge, however, is how to effectively match each HR data point to an LR counterpart that measures approximately the same physical location. A solution to this matching problem appears to be a prerequisite to a good final prediction. We solve this problem by formulating it as a quadratic integer program, aiming at minimizing the maximum interpoint distance difference among all potential correspondences. Due to the combinatorial nature of the optimization model, solving it to optimality is computationally prohibitive even for a small problem size. We therefore propose a two-stage matching framework capable of solving real-life-sized problems within a reasonable amount of time. This two-stage framework consists of downsampling the full-size problem, solving the downsampled problem to optimality, extending the solution of the downsampled problem to the full-size problem, and refining the solution using iterative local search. Numerical experiments show that the proposed approach outperforms two popular point set registration alternatives, the iterative closest point and coherent point drift methods, using different performance metrics. The numerical results also show that our approach scales much better as the instance size increases, and is robust to the changes in initial misalignment between the two data sets.
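The min-max objective can be made concrete on a toy instance: among all assignments of HR points to LR points, pick the one minimizing the largest interpoint-distance discrepancy. The brute-force sketch below is our illustration only, not the paper's quadratic-integer-programming model or its two-stage framework, and it is feasible only for a handful of points.

```python
import numpy as np
from itertools import permutations

def minmax_distance_matching(hr, lr):
    """Toy brute force over all injective assignments of HR points to
    LR points, minimizing  max_{i,j} |d(hr_i, hr_j) - d(lr_pi(i), lr_pi(j))|.
    The paper's two-stage framework (downsample, solve to optimality,
    extend, iterative local search) replaces this enumeration on
    real-life-sized instances."""
    n = len(hr)
    d_hr = np.linalg.norm(hr[:, None] - hr[None, :], axis=-1)
    d_lr = np.linalg.norm(lr[:, None] - lr[None, :], axis=-1)
    best, best_pi = float("inf"), None
    for pi in permutations(range(len(lr)), n):
        idx = np.array(pi)
        # max interpoint-distance difference under this assignment
        diff = np.abs(d_hr - d_lr[np.ix_(idx, idx)]).max()
        if diff < best:
            best, best_pi = diff, pi
    return best_pi, best

# Three "HR" points that are exact copies of LR points 3, 0, 4, so the
# optimal assignment should recover that correspondence with zero gap.
lr = np.array([[0, 0], [1, 0], [0, 1], [2, 2], [3, 1]], dtype=float)
hr = lr[[3, 0, 4]]
pi, gap = minmax_distance_matching(hr, lr)
```

Because the objective depends only on interpoint distances, it is invariant to rigid motion between the two coordinate frames, which is what makes it suitable for misaligned data sets.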


SIAM Journal on Matrix Analysis and Applications | 2017

Roundoff-Error-Free Basis Updates of LU Factorizations for the Efficient Validation of Optimality Certificates

Adolfo R. Escobedo; Erick Moreno-Centeno

The roundoff-error-free (REF) LU and Cholesky factorizations, combined with the REF substitution algorithms, allow rational systems of linear equations to be solved exactly and efficiently by worki...


IIE Transactions | 2016

Axiomatic aggregation of incomplete rankings

Erick Moreno-Centeno; Adolfo R. Escobedo

In many different applications of group decision-making, individual ranking agents or judges are able to rank only a small subset of all available candidates. However, as we argue in this article, the aggregation of these incomplete ordinal rankings into a group consensus has not been adequately addressed. We propose an axiomatic method to aggregate a set of incomplete rankings into a consensus ranking; the method is a generalization of an existing approach to aggregate complete rankings. More specifically, we introduce a set of natural axioms that must be satisfied by a distance between two incomplete rankings; prove the uniqueness and existence of a distance satisfying such axioms; formulate the aggregation of incomplete rankings as an optimization problem; propose and test a specific algorithm to solve a variation of this problem where the consensus ranking does not contain ties; and show that the consensus ranking obtained by our axiomatic approach is more intuitive than the consensus ranking obtained by other approaches.
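To give a feel for what "distance between two incomplete rankings" means, the sketch below counts pairwise disagreements restricted to candidates both judges actually ranked, in the spirit of Kendall tau. This is our simplified illustration only; the paper derives its distance axiomatically, and the axiomatic distance need not coincide with this count.

```python
from itertools import combinations

def pairwise_disagreements(r1, r2):
    """Count ordered-pair disagreements between two incomplete rankings,
    restricted to candidates both rankings cover.  Rankings are dicts
    mapping candidate -> rank (lower = better); unranked candidates are
    simply absent.  A Kendall-tau-style count for illustration; it is
    not necessarily the paper's axiomatic distance."""
    common = set(r1) & set(r2)
    dis = 0
    for a, b in combinations(sorted(common), 2):
        s1 = r1[a] - r1[b]
        s2 = r2[a] - r2[b]
        if s1 * s2 < 0:      # the two judges order this pair oppositely
            dis += 1
    return dis

r1 = {"A": 1, "B": 2, "C": 3}   # judge 1 ranks A > B > C
r2 = {"B": 1, "A": 2, "D": 3}   # judge 2 never saw C, ranks B > A > D
d = pairwise_disagreements(r1, r2)
```

Here only A and B are commonly ranked, and the judges order that single pair oppositely, so the count is 1; candidates C and D contribute nothing, which is precisely the subtlety that makes aggregating incomplete rankings harder than aggregating complete ones.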


INFORMS Journal on Computing | 2015

Roundoff-Error-Free Algorithms for Solving Linear Systems via Cholesky and LU Factorizations

Adolfo R. Escobedo; Erick Moreno-Centeno

LU and Cholesky factorizations are computational tools for efficiently solving linear systems that play a central role in solving linear programs and several other classes of mathematical programs. In many documented cases, however, the roundoff errors accrued during the construction and implementation of these factorizations lead to the misclassification of feasible problems as infeasible and vice versa. Hence, reducing these roundoff errors or eliminating them altogether is imperative to guarantee the correctness of the solutions provided by optimization solvers. To achieve this goal without having to use rational arithmetic, we introduce two roundoff-error-free factorizations that require storing the same number of individual elements and performing a similar number of operations as the traditional LU and Cholesky factorizations. Additionally, we present supplementary roundoff-error-free forward and backward substitution algorithms, thereby providing a complete tool set for solving systems of linear eq...
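The REF factorizations themselves are developed in the paper; a classical relative of the idea is fraction-free (Bareiss) Gaussian elimination, which keeps every intermediate entry integral through provably exact divisions, avoiding both roundoff error and full rational arithmetic. The sketch below computes a determinant this way and is meant only to convey the exact-arithmetic flavor, not the paper's algorithms.

```python
def bareiss_determinant(matrix):
    """Fraction-free (Bareiss) elimination on a square integer matrix.
    Every intermediate entry stays an exact integer because each
    division is provably exact; the final pivot equals the determinant.
    For brevity this sketch assumes nonzero leading pivots (no
    pivoting); the paper's REF LU updates are considerably more
    elaborate."""
    a = [list(map(int, row)) for row in matrix]
    n = len(a)
    prev = 1
    for k in range(n - 1):
        assert a[k][k] != 0, "sketch assumes nonzero pivots"
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # exact integer division -- the hallmark of Bareiss
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
            a[i][k] = 0
        prev = a[k][k]
    return a[n - 1][n - 1]
```

Because no entry is ever rounded, a solver built on such a scheme cannot misclassify a feasible basis as infeasible due to accumulated floating-point error, which is the failure mode the abstract describes.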


IEEE Transactions on Automation Science and Engineering | 2013

Hybridization of Bound-and-Decompose and Mixed Integer Feasibility Checking to Measure Redundancy in Structured Linear Systems

Manish Bansal; Yu Ding; Erick Moreno-Centeno

Computing the degree of redundancy for structured linear systems is proven to be NP-hard. A linear system whose model matrix is of size n×p is considered structured if some p row vectors in the model matrix are linearly dependent. Bound-and-decompose and 0-1 mixed integer programming (MIP) are two approaches to compute the degree of redundancy, which were previously proposed and compared in the literature. In this paper, first we present an enhanced version of the bound-and-decompose algorithm, which is substantially (up to 30 times) faster than the original version. We then present a novel hybrid algorithm to measure redundancy in structured linear systems. This algorithm uses a 0-1 mixed integer feasibility checking algorithm embedded within a bound-and-decompose framework. Our computational study indicates that this new hybrid approach significantly outperforms the existing algorithms as well as our enhanced version of bound-and-decompose in several instances. We also perform a computational study that shows matrix density has a significant effect on the runtime of the algorithms.


Medical Physics | 2018

Guided undersampling classification for automated radiation therapy quality assurance of prostate cancer treatment

W. Eric Brown; Kisuk Sung; Dionne M. Aleman; Erick Moreno-Centeno; Thomas G. Purdie; Chris McIntosh

PURPOSE: To test the use of well-studied and widely used classification methods alongside newly developed data-filtering techniques specifically designed for imbalanced-data classification, in order to demonstrate proof of principle for an automated radiation therapy (RT) quality assurance process on prostate cancer treatment.

METHODS: A series of acceptable (majority class, n = 61) and erroneous (minority class, n = 12) RT plans, as well as a disjoint set of acceptable plans used to develop features (n = 273), were used to develop a dataset for testing. Five widely used imbalanced-data classification algorithms were tested with a modularized guided undersampling procedure that includes ensemble-outlier filtering and normalized-cut sampling.

RESULTS: Hybrid methods including either ensemble-outlier filtering or both filtering and normalized-cut sampling yielded the strongest performance in identifying unacceptable treatment plans. Specifically, five methods demonstrated superior performance in both area under the receiver operating characteristic curve and false positive rate when the true positive rate is equal to one. Furthermore, ensemble-outlier filtering significantly improved results in all but one hybrid method (p < 0.01). Finally, ensemble-outlier filtering methods identified four minority instances that were considered outliers in over 96% of cross-validation iterations. Such instances may be considered distinct planning errors and merit additional inspection, providing potential areas of improvement for the planning process.

CONCLUSIONS: Traditional imbalanced-data classification methods combined with ensemble-outlier filtering and normalized-cut sampling provide a powerful framework for identifying erroneous RT treatment plans. The proposed methodology yielded strong classification performance and identified problematic instances with high accuracy.

Collaboration


Dive into Erick Moreno-Centeno's collaboration.

Top Co-Authors

Yu Ding

University of Texas at Austin


Kory W. Hedman

Arizona State University
