Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Masakazu Muramatsu is active.

Publications


Featured research published by Masakazu Muramatsu.


Siam Journal on Optimization | 2006

Sums of Squares and Semidefinite Program Relaxations for Polynomial Optimization Problems with Structured Sparsity

Hayato Waki; Sunyoung Kim; Masakazu Kojima; Masakazu Muramatsu

Unconstrained and inequality constrained sparse polynomial optimization problems (POPs) are considered. A correlative sparsity pattern graph is defined to find a certain sparse structure in the objective and constraint polynomials of a POP. Based on this graph, sets of the supports for sums of squares (SOS) polynomials that lead to efficient SOS and semidefinite program (SDP) relaxations are obtained. Numerical results from various test problems are included to show the improved performance of the SOS and SDP relaxations.
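The SOS-to-SDP link used above can be made concrete with a toy single-variable check (a hypothetical example for illustration, not taken from the paper): a polynomial p is a sum of squares exactly when p(x) = z'Gz for some positive semidefinite Gram matrix G over a monomial vector z, and that PSD condition is what an SDP solver searches over.

```python
import numpy as np

# Toy SOS check (illustrative assumption, not the paper's code):
# p(x) = x^4 + 2x^2 + 1 over the monomial basis z = [1, x, x^2].
# p(x) = z^T G z for the Gram matrix G below, since
# z^T G z = G00 + (G11 + 2*G02)*x^2 + G22*x^4 = 1 + 2x^2 + x^4.
G = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# G is positive semidefinite, hence p is a sum of squares.
eigvals, eigvecs = np.linalg.eigh(G)
assert np.all(eigvals >= -1e-12)

# Explicit decomposition p = sum_i (l_i^T z)^2 from G = L L^T:
# the columns l_i of L give the polynomials that get squared.
L = eigvecs * np.sqrt(np.clip(eigvals, 0.0, None))
```

Here the single nonzero column of L recovers x^2 + 1 (up to sign), i.e. p = (x^2 + 1)^2. A sparse relaxation restricts which monomials enter z, which is what shrinks the resulting SDP.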


ACM Transactions on Mathematical Software | 2008

Algorithm 883: SparsePOP---A Sparse Semidefinite Programming Relaxation of Polynomial Optimization Problems

Hayato Waki; Sunyoung Kim; Masakazu Kojima; Masakazu Muramatsu; Hiroshi Sugimoto

SparsePOP is a Matlab implementation of the sparse semidefinite programming (SDP) relaxation method for approximating a global optimal solution of a polynomial optimization problem (POP) proposed by Waki et al. [2006]. The sparse SDP relaxation exploits a sparse structure of polynomials in POPs when applying “a hierarchy of LMI relaxations of increasing dimensions” Lasserre [2006]. The efficiency of SparsePOP to approximate optimal solutions of POPs is thus increased, and larger-scale POPs can be handled.
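A minimal sketch of the correlative sparsity idea, in Python rather than SparsePOP's Matlab (the function and the chained example are illustrative assumptions, not SparsePOP's implementation): two variables are joined in the correlative sparsity pattern graph when they appear together in a monomial of the objective or in the same constraint polynomial.

```python
from itertools import combinations

def csp_graph(n_vars, supports):
    """Correlative sparsity pattern graph as an adjacency map.

    `supports` lists, for each monomial of the objective and each
    constraint polynomial, the set of variable indices it involves.
    Variables sharing a support become neighbors in the graph.
    """
    adj = {i: set() for i in range(n_vars)}
    for s in supports:
        for i, j in combinations(sorted(s), 2):
            adj[i].add(j)
            adj[j].add(i)
    return adj

# Chained polynomial x0*x1 + x1*x2 + x2*x3: the graph is a path,
# so the relaxation can work with small variable cliques instead
# of one SOS block over all four variables.
adj = csp_graph(4, [{0, 1}, {1, 2}, {2, 3}])
```

Maximal cliques of a chordal extension of this graph determine the supports of the SOS polynomials, which is where the reduction in SDP size comes from.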


Siam Journal on Optimization | 1995

Global Convergence of a Long-Step Affine Scaling Algorithm for Degenerate Linear Programming Problems

Takashi Tsuchiya; Masakazu Muramatsu

In this paper we present new global convergence results on a long-step affine scaling algorithm obtained by means of the local Karmarkar potential functions. This development was triggered by Dikin’s interesting result on the convergence of the dual estimates associated with a long-step affine scaling algorithm for homogeneous LP problems with unique optimal solutions. Without requiring any assumption on degeneracy, we show that moving a fixed proportion λ, up to two-thirds of the way to the boundary at each iteration, ensures convergence of the iterates to an interior point of the optimal face as well as of the dual estimates to the analytic center of the dual optimal face, where the asymptotic reduction rate of the value of the objective function is 1 - λ. We also give an example showing that this result is tight for obtaining convergence of the dual estimates to the analytic center of the dual optimal face.


Journal of Optimization Theory and Applications | 2002

On a commutative class of search directions for linear programming over symmetric cones

Masakazu Muramatsu

The commutative class of search directions for semidefinite programming was first proposed by Monteiro and Zhang (Ref. 1). In this paper, we investigate the corresponding class of search directions for linear programming over symmetric cones, which is a class of convex optimization problems including linear programming, second-order cone programming, and semidefinite programming as special cases. Complexity results are established for short-step, semilong-step, and long-step algorithms. Then, we propose a subclass of the commutative class for which we can prove polynomial complexities of the interior-point method using semilong steps and long steps. This subclass still contains the Nesterov–Todd direction and the Helmberg–Rendl–Vanderbei–Wolkowicz/Kojima–Shindoh–Hara/Monteiro direction. An explicit formula to calculate any member of the class is also given.


Computational Optimization and Applications | 2009

A note on sparse SOS and SDP relaxations for polynomial optimization problems over symmetric cones

Masakazu Kojima; Masakazu Muramatsu

This short note extends the sparse SOS (sum of squares) and SDP (semidefinite programming) relaxation proposed by Waki, Kim, Kojima and Muramatsu for normal POPs (polynomial optimization problems) to POPs over symmetric cones, and establishes its theoretical convergence based on the recent convergence result by Lasserre on the sparse SOS and SDP relaxation for normal POPs. A numerical example is also given to exhibit its high potential.


Journal of Optimization Theory and Applications | 2013

Facial Reduction Algorithms for Conic Optimization Problems

Hayato Waki; Masakazu Muramatsu

In conic optimization problems, it is well known that a positive duality gap may occur and that solving such a problem is numerically difficult or unstable. For such a case, we propose a facial reduction algorithm that finds a primal–dual pair of conic optimization problems having a zero duality gap and an optimal value equal to that of the original primal or dual problem. The conic expansion approach is also known as a method to find such a primal–dual pair, and in this paper we clarify the relationship between our facial reduction algorithm and the conic expansion approach. Our analysis shows that, although they can be regarded as dual to each other, our facial reduction algorithm can produce a finer sequence of faces of the cone containing the feasible region. A simple proof of the convergence of our facial reduction algorithm for conic optimization is presented. We also observe that our facial reduction algorithm has practical impact: numerical experiments on graph partition problems show that it in fact enhances numerical stability.


Mathematical Programming | 2007

An Extension of Sums of Squares Relaxations to Polynomial Optimization Problems Over Symmetric Cones

Masakazu Kojima; Masakazu Muramatsu


Applied Intelligence | 2005

An Efficient Support Vector Machine Learning Method with Second-Order Cone Programming for Large-Scale Problems

Rameswar Debnath; Masakazu Muramatsu; Haruhisa Takahashi


Computational Optimization and Applications | 2012

Strange behaviors of interior-point methods for solving semidefinite programming problems in polynomial optimization

Hayato Waki; Maho Nakata; Masakazu Muramatsu


Mathematical Programming | 1998

Affine scaling algorithm fails for semidefinite programming

Masakazu Muramatsu
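The long-step affine scaling iteration analyzed in the 1995 paper above can be sketched on a toy LP (a pure-NumPy illustration; the problem data and tolerances are made up, and this is not the paper's code). Each step moves a fixed proportion of the distance to the boundary along the affine-scaling direction:

```python
import numpy as np

def affine_scaling(A, b, c, x, step=2/3, iters=50):
    """Primal affine scaling for min c^T x s.t. Ax = b, x > 0.

    `step` is the fixed proportion of the distance to the boundary
    moved each iteration; 2/3 is the threshold in the paper's result.
    """
    for _ in range(iters):
        D2 = np.diag(x ** 2)                            # scaling by current iterate
        y = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)   # dual estimate
        d = -D2 @ (c - A.T @ y)                         # affine-scaling direction
        neg = d < -1e-12
        if not neg.any():
            break                                       # (near-)optimal: stop
        t = np.min(-x[neg] / d[neg])                    # distance to the boundary
        x = x + step * t * d                            # move `step` of the way
    return x, y

# min x0 + 2*x1 subject to x0 + x1 + x2 = 1, x > 0 (optimum at (0, 0, 1))
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 0.0])
x, y = affine_scaling(A, b, c, np.array([1 / 3, 1 / 3, 1 / 3]))
```

With step = 2/3 the iterates stay strictly feasible and the objective contracts toward the optimal face; the paper's result is that 2/3 is exactly the proportion up to which convergence of both the iterates and the dual estimates is guaranteed without nondegeneracy assumptions.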

Collaboration


Dive into Masakazu Muramatsu's collaboration.

Top Co-Authors

Takashi Tsuchiya | National Graduate Institute for Policy Studies
Hayato Waki | Ewha Womans University
Masakazu Kojima | Tokyo Institute of Technology
Kunihito Hoki | University of Electro-Communications
Haruhisa Takahashi | University of Electro-Communications