Jose Israel Rodriguez
University of California, Berkeley
Publications
Featured research published by Jose Israel Rodriguez.
international symposium on symbolic and algebraic computation | 2014
Elizabeth Gross; Jose Israel Rodriguez
Given a statistical model, the maximum likelihood degree is the number of complex solutions to the likelihood equations for generic data. We consider discrete algebraic statistical models and study the solutions to the likelihood equations when the data contain zeros and are no longer generic. Focusing on sampling and model zeros, we show that, in these cases, the solutions to the likelihood equations are contained in a previously studied variety, the likelihood correspondence. The number of these solutions gives a lower bound on the ML degree, and the problem of finding critical points of the likelihood function can be partitioned into smaller and computationally easier problems involving sampling and model zeros. We use this technique to compute a lower bound on the ML degree for 2 × 2 × 2 × 2 tensors of border rank ≤ 2 and for 3 × n tables of rank ≤ 2 with n = 11, 12, 13, 14, the first four values of n for which the ML degree was previously unknown.
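For background (this is the standard setup for discrete models, not text from the paper): the likelihood equations are the critical equations of the log-likelihood restricted to the model X, with data vector u = (u_0, ..., u_n) and coordinates p = (p_0, ..., p_n).

```latex
% Standard log-likelihood for a discrete model X in the probability simplex;
% a background sketch, not taken from the paper.
\[
  \ell_u(p) \;=\; \sum_{i=0}^{n} u_i \log p_i ,
  \qquad
  p \in X \cap \Delta_n ,
  \quad
  \Delta_n = \Bigl\{\, p : p_i > 0,\ \textstyle\sum_{i=0}^{n} p_i = 1 \,\Bigr\}.
\]
% The likelihood equations are the critical equations of \ell_u on X.  For
% generic u they have finitely many complex solutions, and that count is the
% ML degree.  If some u_i = 0 (a sampling zero), the term u_i \log p_i drops
% out, the data are no longer generic, and critical points may lie on the
% hyperplane p_i = 0; these are the degenerate solutions studied above.
```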
Foundations of Computational Mathematics | 2018
Jonathan D. Hauenstein; Jose Israel Rodriguez; Frank Sottile
The Galois/monodromy group of a family of geometric problems or equations is a subtle invariant that encodes the structure of the solutions. We give numerical methods to compute the Galois group and study it when it is not the full symmetric group. One algorithm computes generators, while the other studies its structure as a permutation group. We illustrate these algorithms with examples using a Macaulay2 package we are developing that relies upon Bertini to perform monodromy computations.
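For context (standard definitions, my paraphrase rather than the paper's wording), a family of problems can be modeled as a branched cover, and the Galois/monodromy group is the image of the monodromy action on a generic fiber.

```latex
% A family of problems as a branched cover: \pi is dominant and its generic
% fiber consists of the d solutions of one problem instance.
\[
  \pi : X \longrightarrow Z, \qquad \#\,\pi^{-1}(z) = d
  \ \text{ for generic } z \in Z .
\]
% Lifting loops in the set of regular values permutes the d solutions; the
% image of this action is the Galois/monodromy group, and tracking solutions
% numerically along such loops is how monodromy computations produce group
% elements.
\[
  \rho : \pi_1\bigl(Z \smallsetminus B,\ z\bigr) \longrightarrow
  \operatorname{Sym}\bigl(\pi^{-1}(z)\bigr) \cong S_d ,
  \qquad
  \operatorname{Gal}(X/Z) \;:=\; \operatorname{im}\rho .
\]
```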
international symposium on symbolic and algebraic computation | 2015
Jose Israel Rodriguez; Xiaoxian Tang
Maximum likelihood estimation (MLE) is a fundamental computational problem in statistics. The problem is to maximize the likelihood function with respect to given data on a statistical model. An algebraic approach to this problem is to solve a very structured parameterized polynomial system called the likelihood equations. For general choices of data, the number of complex solutions to the likelihood equations is finite and is called the ML-degree of the model. The only solutions to the likelihood equations that are statistically meaningful are the real/positive solutions. However, the number of real/positive solutions is not characterized by the ML-degree. We use discriminants to classify data according to the number of real/positive solutions of the likelihood equations. We call these discriminants data-discriminants (DD). We develop a probabilistic algorithm for computing DDs. Experimental results show that, for the benchmarks we have tried, the probabilistic algorithm is more efficient than the standard elimination algorithm. Based on the computational results, we discuss the real root classification problem for the 3 × 3 symmetric matrix model.
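To make the notion of a data-discriminant concrete, here is a small sketch on a made-up one-parameter model; the model, the variable names, and the use of SymPy are my own illustration, not the paper's setup or software.

```python
# Toy illustration of a data-discriminant (hypothetical example):
# one-parameter model p(theta) = (theta, theta^2, 1 - theta - theta^2)
# on three outcomes, with data (counts) u = (u0, u1, u2).
import sympy as sp

theta = sp.symbols('theta')
u0, u1, u2 = sp.symbols('u0 u1 u2', positive=True)

p = [theta, theta**2, 1 - theta - theta**2]   # hypothetical parameterization
u = [u0, u1, u2]                              # data, treated as parameters

# Score equation: sum_i u_i * p_i'(theta) / p_i(theta) = 0.
loglik = sum(ui * sp.log(pi) for ui, pi in zip(u, p))
score = sp.cancel(sp.diff(loglik, theta))     # single rational function in theta
likelihood_eq = sp.numer(score)               # likelihood equation, denominators cleared

# likelihood_eq is quadratic in theta with coefficients linear in u (ML-degree
# two for this toy model).  Its theta-discriminant plays the role of a
# data-discriminant: the number of real critical points can only change where
# this discriminant (or the leading coefficient) vanishes.
print(sp.expand(likelihood_eq))
print(sp.factor(sp.discriminant(likelihood_eq, theta)))
```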
symbolic numeric computation | 2014
Jose Israel Rodriguez
Maximum likelihood estimation (MLE) is a fundamental computational problem in statistics. In this paper, MLE for statistical models with discrete data is studied from an algebraic statistics viewpoint. A reformulation of the MLE problem in terms of dual varieties and conormal varieties will be given. With this description, we define the dual likelihood equations and the dual MLE problem. We show that solving the dual MLE problem yields solutions to the MLE problem, and that we can solve the dual MLE problem even if we do not have the defining equations of the model itself.
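As background for the dual formulation (standard projective duality, stated in my notation rather than the paper's), recall the conormal variety and the dual variety:

```latex
% Conormal variety of X \subset \mathbb{P}^n: pairs (x, H) where the
% hyperplane H contains the tangent space at a smooth point x; the dual
% variety X^{\vee} is its projection to the dual space.
\[
  N_X \;=\; \overline{\bigl\{\,(x,H)\in \mathbb{P}^n\times(\mathbb{P}^n)^{\ast}
  \;:\; x \in X_{\mathrm{sm}},\ T_xX \subseteq H \,\bigr\}},
  \qquad
  X^{\vee} \;=\; \text{image of } N_X \text{ in } (\mathbb{P}^n)^{\ast}.
\]
% In characteristic zero, biduality gives N_X = N_{X^{\vee}} after swapping
% the factors; roughly, this is the geometric mechanism that allows critical
% points on X to be recovered from solutions of likelihood-type equations
% formulated on X^{\vee}, even without defining equations for X itself.
```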
arXiv: Algebraic Geometry | 2017
Jose Israel Rodriguez; Botong Wang
The maximum likelihood degree (ML degree) measures the algebraic complexity of a fundamental optimization problem in statistics: maximum likelihood estimation. In this problem, one maximizes the likelihood function over a statistical model. The ML degree of a model is an upper bound on the number of local extrema of the likelihood function and can be expressed as a weighted sum of Euler characteristics. The independence model (i.e., rank-one matrices over the probability simplex) is well known to have ML degree one, meaning there is a unique local maximum of the likelihood function. However, for mixtures of independence models (i.e., rank-two matrices over the probability simplex), it was an open question how the ML degree behaves. In this paper, we use Euler characteristics to prove an outstanding conjecture by Hauenstein, the first author, and Sturmfels; we give recursions and closed-form expressions for the ML degree of mixtures of independence models.
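One concrete instance of the Euler-characteristic viewpoint is Huh's theorem for smooth models; the weighted sums mentioned above can be seen, roughly, as the refinement needed for singular models (my summary, not the paper's statement):

```latex
% If the part of X away from the arrangement \mathcal{H} below is smooth of
% dimension d, then (Huh)
\[
  \mathrm{MLdeg}(X) \;=\; (-1)^{d}\,
  \chi\bigl( X \smallsetminus \mathcal{H} \bigr),
  \qquad
  \mathcal{H} \;=\; \bigl\{\, p_0\,p_1\cdots p_n\,(p_0+\cdots+p_n) = 0 \,\bigr\}.
\]
```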
Journal of Symbolic Computation | 2018
Carlos Améndola; Nathan Bliss; Isaac Burke; Courtney R. Gibbons; Martin Helmer; Serkan Hosten; Evan D. Nash; Jose Israel Rodriguez; Daniel Smolkin
We study the maximum likelihood (ML) degree of toric varieties, known as discrete exponential models in statistics. By introducing scaling coefficients to the monomial parameterization of the toric variety, one can change the ML degree. We show that the ML degree is equal to the degree of the toric variety for generic scalings, while it drops if and only if the scaling vector is in the locus of the principal A-determinant. We also illustrate how to compute the ML estimate of a toric variety numerically via homotopy continuation from a scaled toric variety with low ML degree. Throughout, we include examples motivated by algebraic geometry and statistics. We compute the ML degree of rational normal scrolls and a large class of Veronese-type varieties. In addition, we investigate the ML degree of scaled Segre varieties, hierarchical log-linear models, and graphical models.
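To make the scaled setup concrete (standard log-linear notation, my paraphrase rather than the paper's exact statement): fix an integer matrix A = (a_1, ..., a_n) in Z^{d×n} whose row span contains the all-ones vector, and a scaling vector c in (C*)^n.

```latex
% Scaled toric (discrete exponential) model: probabilities proportional to
% scaled monomials in the parameters \theta.
\[
  p_j \;\propto\; c_j\,\theta^{a_j}, \qquad j = 1,\dots,n,
  \qquad \theta \in (\mathbb{C}^{\ast})^{d}.
\]
% For data u with u_+ = u_1 + \cdots + u_n, the critical points of the
% likelihood are the points of the model satisfying the moment-matching
% (Birch-type) equations
\[
  A\,p \;=\; \tfrac{1}{u_+}\,A\,u, \qquad \textstyle\sum_{j=1}^{n} p_j = 1 .
\]
% Counting the complex solutions of this system for generic u gives the ML
% degree; varying the scaling vector c is what can make this count drop, as
% described above via the principal A-determinant.
```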
Journal of Symbolic Computation | 2015
Jose Israel Rodriguez
We provide formulas and develop algorithms for computing the excess numbers of an ideal. The solution for monomial ideals is given by the mixed volumes of polytopes. These results enable us to design numerical algebraic geometry homotopies to compute excess numbers of any ideal.
Journal of Symbolic Computation | 2017
Emil Horobeț; Jose Israel Rodriguez
For general data, the number of complex solutions to the likelihood equations is constant; this number is called the maximum likelihood (ML) degree of the model. In this article, we describe the special locus of data for which the likelihood equations have a solution in the model's singular locus.
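In symbols (my restatement of the abstract, with ℓ_u the likelihood function for data u and X_sing the singular locus of the model X), the special locus is:

```latex
\[
  \mathcal{S} \;=\; \bigl\{\, u \;:\; \text{the likelihood equations of }
  \ell_u \text{ on } X \text{ have a solution } \hat p \in X_{\mathrm{sing}} \,\bigr\}.
\]
```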
international congress on mathematical software | 2018
Jose Israel Rodriguez
MacPherson defined Chern-Schwartz-MacPherson (CSM) classes by introducing the (local) Euler obstruction function, an integer-valued function that is constant on each stratum of a Whitney stratification of an algebraic variety. Understanding the Euler obstruction function gives insight into a singular algebraic variety. The author and B. Wang recently showed how to compute these functions using maximum likelihood degrees. This paper discusses a symbolic and a numerical implementation of algorithms to compute the Euler obstruction at a point. Macaulay2 and Bertini are used in the implementations.
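For orientation (standard facts about MacPherson's construction, stated informally and not taken from the paper): the Euler obstruction functions Eu_V, as V ranges over closed subvarieties, generate the group of constructible functions, which is how the Euler obstruction enters the definition of CSM classes.

```latex
% Any constructible function, in particular the indicator function of X,
% is an integer combination of Euler obstruction functions:
\[
  \mathbf{1}_X \;=\; \sum_{i} n_i \,\mathrm{Eu}_{V_i}, \qquad n_i \in \mathbb{Z}.
\]
% MacPherson's transformation c_* sends \mathrm{Eu}_V to the Chern--Mather
% class of V, and the CSM class of X is obtained by applying it to
% \mathbf{1}_X; evaluating \mathrm{Eu}_X at a point of each stratum is the
% basic local input, which is what the implementations above compute.
\[
  c_{\mathrm{SM}}(X) \;=\; c_{\ast}\bigl(\mathbf{1}_X\bigr).
\]
```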
Journal of Symbolic Computation | 2017
Jose Israel Rodriguez; Xiaoxian Tang
An algebraic approach to the maximum likelihood estimation problem is to solve a very structured parameterized polynomial system, called the likelihood equations, that has finitely many complex (real or non-real) solutions. The only statistically meaningful solutions are the real solutions with positive coordinates. In order to classify the parameters (data) according to the number of real/positive solutions, we study how to efficiently compute the discriminants, called data-discriminants (DD), of the likelihood equations. We develop a probabilistic algorithm with three different strategies for computing DDs. Our implemented probabilistic algorithm, based on Maple and FGb, is more efficient than our previous version presented at ISSAC 2015, and is also more efficient than the standard elimination algorithm on larger benchmarks. By applying RAGlib to a DD we compute, we give the real root classification for the 3 × 3 symmetric matrix model.