
Publication


Featured research published by Jan Vlček.


Numerical Linear Algebra With Applications | 1998

Indefinitely preconditioned inexact Newton method for large sparse equality constrained non-linear programming problems

Ladislav Lukšan; Jan Vlček

An inexact Newton algorithm for large sparse equality constrained non-linear programming problems is proposed. This algorithm is based on an indefinitely preconditioned smoothed conjugate gradient method applied to the linear KKT system and uses a simple augmented Lagrangian merit function for Armijo-type stepsize selection. Most attention is devoted to the termination of the CG method, guaranteeing sufficient descent in every iteration and decreasing the number of required CG iterations, and especially to the choice of a suitable preconditioner. We investigate four preconditioners with a 2 × 2 block structure and prove their favorable properties theoretically. The efficiency of the inexact Newton algorithm, together with a comparison of various preconditioners and strategies, is demonstrated on a large collection of test problems.
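The role of the preconditioner inside the CG solver can be illustrated with a minimal sketch: a Jacobi (diagonal) preconditioned conjugate gradient method applied to a small symmetric positive definite system, in Python. This is not the paper's indefinite 2 × 2 block preconditioner for a KKT system; the matrix A, the right-hand side b, and the Jacobi choice are illustrative assumptions.

```python
# Minimal preconditioned conjugate gradient (PCG) sketch.
# NOTE: plain Jacobi preconditioner on an SPD system, NOT the
# indefinite 2x2 block preconditioner analysed in the paper.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b (A SPD) with Jacobi-preconditioned CG."""
    n = len(b)
    Minv = [1.0 / A[i][i] for i in range(n)]    # inverse of diag(A)
    x = [0.0] * n
    r = b[:]                                    # residual b - A*0
    z = [mi * ri for mi, ri in zip(Minv, r)]    # preconditioned residual
    p = z[:]
    rz = dot(r, z)
    for k in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            return x, k + 1
        z = [mi * ri for mi, ri in zip(Minv, r)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x, max_iter

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x, iters = pcg(A, b)
```

A good preconditioner clusters the eigenvalues of the preconditioned operator, which is what drives down the number of CG iterations the paper is concerned with.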


Mathematical Programming | 1998

A bundle-Newton method for nonsmooth unconstrained minimization

Ladislav Lukšan; Jan Vlček

An algorithm based on a combination of the polyhedral and quadratic approximation is given for finding stationary points for unconstrained minimization problems with locally Lipschitz problem functions that are not necessarily convex or differentiable. Global convergence of the algorithm is established. Under additional assumptions, it is shown that the algorithm generates Newton iterations and that the convergence is superlinear. Some encouraging numerical experience is reported.
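The "polyhedral approximation" used by bundle methods is a cutting-plane model built from subgradients at trial points. A minimal sketch in Python for the convex example f(x) = |x| (the one-dimensional setting and the trial points are illustrative assumptions, not taken from the paper):

```python
# Polyhedral (cutting-plane) model used by bundle methods:
# from trial points y_i with subgradients g_i, the model
#   m(x) = max_i [ f(y_i) + g_i * (x - y_i) ]
# underestimates a convex f and agrees with f at the trial points.

def f(x):
    return abs(x)

def subgrad(x):
    # a subgradient of |x|: sign(x), with 0 chosen at x = 0
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def polyhedral_model(bundle, x):
    """bundle: list of (y_i, f(y_i), g_i) triples."""
    return max(fy + g * (x - y) for (y, fy, g) in bundle)

points = [-2.0, -0.5, 1.0, 3.0]
bundle = [(y, f(y), subgrad(y)) for y in points]
```

The bundle-Newton method of the paper augments such first-order cuts with approximate second-order (Hessian) information to accelerate convergence.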


Journal of Physics D | 2010

Growth and characterization of nanodiamond layers prepared using the plasma-enhanced linear antennas microwave CVD system

František Fendrych; Andrew Taylor; Ladislav Peksa; Irena Kratochvílová; Jan Vlček; Vladimira Rezacova; Václav Petrák; Zdenek Kluiber; Ladislav Fekete; M. Liehr; Milos Nesladek

Industrial applications of plasma-enhanced chemical vapour deposition (CVD) diamond grown on large-area substrates and 3D shapes, at low substrate temperatures and on standard engineering substrate materials, require novel plasma concepts. Based on the pioneering work of the group at AIST in Japan, the high-density coaxial delivery type of plasma has been explored (Tsugawa et al 2006 New Diamond Front. Carbon Technol. 16 337–46). However, an important challenge is to obtain commercially interesting growth rates at very low substrate temperatures. In this work we introduce the concept of novel linear antenna sources, designed at Leybold Optics Dresden, using a high-frequency pulsed MW discharge with a high plasma density. This type of pulsed discharge leads to the preparation of nanocrystalline diamond (NCD) thin films, as compared with the ultra-NCD thin films prepared in Tsugawa et al (2006). We present optical emission spectroscopy data for the CH4–CO2–H2 gas chemistry and discuss the basic properties of the grown NCD films.


ACM Transactions on Mathematical Software | 2001

Algorithm 811: NDA: algorithms for nondifferentiable optimization

Ladislav Lukšan; Jan Vlček

We present four basic Fortran subroutines for nondifferentiable optimization with simple bounds and general linear constraints. Subroutine PMIN, intended for minimax optimization, is based on a sequential quadratic programming variable metric algorithm. Subroutines PBUN and PNEW, intended for general nonsmooth problems, are based on bundle-type methods. Subroutine PVAR is based on special nonsmooth variable metric methods. Besides the description of methods and codes, we report computational experiments that demonstrate the efficiency of this approach.


Journal of Global Optimization | 1999

Comparing Nonsmooth Nonconvex Bundle Methods in Solving Hemivariational Inequalities

Marko M. Mäkelä; Markku Miettinen; Ladislav Lukšan; Jan Vlček

Hemivariational inequalities can be considered as a generalization of variational inequalities. Their origin is in the nonsmooth mechanics of solids, especially in nonmonotone contact problems. The solution of a hemivariational inequality proves to be a substationary point of some functional, and thus can be found by nonsmooth and nonconvex optimization methods. We consider two types of bundle methods for solving hemivariational inequalities numerically: the proximal bundle and bundle-Newton methods. The proximal bundle method is based on a first-order polyhedral approximation of the locally Lipschitz continuous objective function. To obtain a better convergence rate, the bundle-Newton method also incorporates second-order information about the objective function in the form of an approximate Hessian. Since the optimization problem arising from hemivariational inequalities has a dominating quadratic part, the second-order method should be a good choice. The main question concerning the methods is how significant the advantage of the potentially better convergence rate of the bundle-Newton method is when weighed against its increased computational demand.


Optimization Methods & Software | 1998

Computational experience with globally convergent descent methods for large sparse systems of nonlinear equations

Ladislav Lukšan; Jan Vlček

This paper is devoted to globally convergent Armijo-type descent methods for solving large sparse systems of nonlinear equations. These methods include the discrete Newton method and a broad class of Newton-like methods based on various approximations of the Jacobian matrix. We propose a general theory of global convergence together with a robust algorithm including a special restarting strategy. This algorithm is based on the preconditioned smoothed CGS method for solving nonsymmetric systems of linear equations. After reviewing 12 particular Newton-like methods, we present results of extensive computational experiments. These results demonstrate the high efficiency of the proposed algorithm.
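The Armijo-type damped Newton idea can be sketched in a few lines for a small dense system: take the Newton direction for F(x) = 0, then backtrack on the merit function φ(x) = ½‖F(x)‖² until a sufficient decrease holds. The 2 × 2 example system, the Armijo constant 10⁻⁴, and the Cramer's-rule solve are illustrative assumptions; the paper treats large sparse systems with a preconditioned smoothed CGS inner solver.

```python
# Damped Newton for F(x) = 0 with Armijo backtracking on the
# merit function phi(x) = 0.5 * ||F(x)||^2 (toy 2x2 example).

def F(v):
    x, y = v
    return [x * x + y * y - 4.0, x - y]       # circle + diagonal

def J(v):
    x, y = v
    return [[2.0 * x, 2.0 * y], [1.0, -1.0]]  # Jacobian of F

def solve2(Jm, rhs):
    """Solve the 2x2 system Jm * d = rhs by Cramer's rule."""
    a, b = Jm[0]; c, d = Jm[1]
    det = a * d - b * c
    return [(rhs[0] * d - b * rhs[1]) / det,
            (a * rhs[1] - rhs[0] * c) / det]

def phi(v):
    return 0.5 * sum(fi * fi for fi in F(v))

def newton_armijo(v, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        Fv = F(v)
        if sum(fi * fi for fi in Fv) ** 0.5 < tol:
            break
        d = solve2(J(v), [-fi for fi in Fv])  # Newton direction
        t, p = 1.0, phi(v)
        # simplified Armijo test: accept t once the merit drops enough
        while t > 1e-10 and \
                phi([vi + t * di for vi, di in zip(v, d)]) > (1 - 1e-4 * t) * p:
            t *= 0.5
        v = [vi + t * di for vi, di in zip(v, d)]
    return v

root = newton_armijo([3.0, 1.0])   # converges to (sqrt(2), sqrt(2))
```

Far from the solution the backtracking keeps the merit function decreasing (global convergence); near the solution the full step t = 1 is accepted and the fast local Newton rate takes over.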


Optimization Methods & Software | 2005

Interior point methods for large-scale nonlinear programming

Ladislav Lukšan; Ctirad Matonoha; Jan Vlček

In this paper we describe an algorithm for solving nonlinear nonconvex programming problems, which is based on the interior point approach. The main theoretical results concern direction determination and step-length selection. We split inequality constraints into active and inactive parts to overcome problems with instability. Inactive constraints are eliminated directly, whereas active constraints are used for defining a symmetric indefinite linear system. Inexact solution of this system is obtained iteratively using indefinitely preconditioned conjugate gradient method. Theorems confirming efficiency of the indefinite preconditioner are introduced. Furthermore, a new merit function is defined and a filter principle is used for step-length selection. The algorithm was implemented in the interactive system for universal functional optimization UFO. Results of numerical experiments are reported.
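A toy log-barrier iteration conveys the interior-point idea: replace an inequality constraint by a logarithmic barrier term and drive the barrier parameter μ to zero. The one-dimensional problem, the μ schedule, and the plain damped Newton inner solver are illustrative assumptions; the paper's algorithm handles general sparse nonconvex problems with an indefinitely preconditioned CG inner solver and a filter-based step-length selection.

```python
# Log-barrier interior-point sketch (not the paper's algorithm):
# minimize (x - 3)^2 subject to x <= 1, via minimizing
#   B_mu(x) = (x - 3)^2 - mu * log(1 - x)
# for a decreasing sequence of barrier parameters mu.

def barrier_min(x, mu, iters=50):
    """1-D Newton on B_mu, with step halving to stay strictly feasible."""
    for _ in range(iters):
        g = 2.0 * (x - 3.0) + mu / (1.0 - x)   # B_mu'(x)
        h = 2.0 + mu / (1.0 - x) ** 2          # B_mu''(x) > 0
        d = -g / h
        while x + d >= 1.0:                    # keep the iterate interior
            d *= 0.5
        x += d
    return x

x = 0.0
for mu in [1.0, 0.1, 0.01, 0.001, 1e-5]:       # decreasing barrier schedule
    x = barrier_min(x, mu)                     # warm-start each subproblem
# x approaches the constrained minimizer x* = 1 from the interior
```

Each barrier subproblem is warm-started from the previous solution; as μ shrinks, the unconstrained minimizers trace the central path toward the constrained optimum at x = 1.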


ACM Transactions on Mathematical Software | 2009

Algorithm 896: LSA: Algorithms for large-scale optimization

Ladislav Lukšan; Ctirad Matonoha; Jan Vlček

We present 14 basic Fortran subroutines for large-scale unconstrained and box constrained optimization and large-scale systems of nonlinear equations. Subroutines PLIS and PLIP, intended for dense general optimization problems, are based on limited-memory variable metric methods. Subroutine PNET, also intended for dense general optimization problems, is based on an inexact truncated Newton method. Subroutines PNED and PNEC, intended for sparse general optimization problems, are based on modifications of the discrete Newton method. Subroutines PSED and PSEC, intended for partially separable optimization problems, are based on partitioned variable metric updates. Subroutine PSEN, intended for nonsmooth partially separable optimization problems, is based on partitioned variable metric updates and on an aggregation of subgradients. Subroutines PGAD and PGAC, intended for sparse nonlinear least-squares problems, are based on modifications and corrections of the Gauss-Newton method. Subroutine PMAX, intended for minimization of a maximum value (minimax), is based on the primal line-search interior-point method. Subroutine PSUM, intended for minimization of a sum of absolute values, is based on the primal trust-region interior-point method. Subroutines PEQN and PEQL, intended for sparse systems of nonlinear equations, are based on the discrete Newton method and the inverse column-update quasi-Newton method, respectively. Besides the description of methods and codes, we report computational experiments that demonstrate the efficiency of the proposed algorithms.


Optimization Methods & Software | 2007

Trust-region interior-point method for large sparse l1 optimization

Ladislav Lukšan; Ctirad Matonoha; Jan Vlček

In this article, we propose an interior-point method for large sparse l1 optimization. After a short introduction, the complete algorithm is introduced and some implementation details are given. We prove that this algorithm is globally convergent under standard mild assumptions. Thus, relatively difficult l1 optimization problems can be solved successfully. The results of computational experiments given in this article confirm efficiency and robustness of the proposed method.


Applied Mathematics and Computation | 2012

A conjugate directions approach to improve the limited-memory BFGS method

Jan Vlček; Ladislav Lukšan

Simple modifications of the limited-memory BFGS method (L-BFGS) for large scale unconstrained optimization are considered, which consist in corrections (derived from the idea of conjugate directions) of the used difference vectors, utilizing information from the preceding iteration. For quadratic objective functions, the improvement of convergence is the best one in some sense and all stored difference vectors are conjugate for unit stepsizes. Global convergence of the algorithm is established for convex sufficiently smooth functions. Numerical experiments indicate that the new method often improves the L-BFGS method significantly.
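For context, the baseline being improved can be sketched: a standard L-BFGS two-loop recursion with Armijo backtracking, in pure Python. This omits the paper's conjugate-direction corrections of the stored difference vectors; the quadratic test function, memory size m, and line-search constants are illustrative assumptions.

```python
# Standard L-BFGS two-loop recursion (the baseline method; the
# paper's conjugate-direction corrections are NOT applied here).

def lbfgs(f, grad, x, m=5, iters=200, tol=1e-9):
    s_list, y_list = [], []            # stored differences s_k, y_k
    g = grad(x)
    for _ in range(iters):
        if max(abs(gi) for gi in g) < tol:
            break
        # two-loop recursion: q := (approximate inverse Hessian) * g
        q = list(g)
        alphas = []
        for s, y in zip(reversed(s_list), reversed(y_list)):
            rho = 1.0 / sum(si * yi for si, yi in zip(s, y))
            a = rho * sum(si * qi for si, qi in zip(s, q))
            alphas.append((a, rho, s, y))
            q = [qi - a * yi for qi, yi in zip(q, y)]
        if y_list:                     # initial scaling gamma_k * I
            s, y = s_list[-1], y_list[-1]
            gamma = (sum(si * yi for si, yi in zip(s, y))
                     / sum(yi * yi for yi in y))
            q = [gamma * qi for qi in q]
        for a, rho, s, y in reversed(alphas):
            b = rho * sum(yi * qi for yi, qi in zip(y, q))
            q = [qi + (a - b) * si for qi, si in zip(q, s)]
        d = [-qi for qi in q]          # quasi-Newton direction
        # Armijo backtracking line search
        gd = sum(gi * di for gi, di in zip(g, d))
        t, fx = 1.0, f(x)
        while (t > 1e-12 and
               f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * gd):
            t *= 0.5
        x_new = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x_new)
        s_list.append([a - b for a, b in zip(x_new, x)])
        y_list.append([a - b for a, b in zip(g_new, g)])
        if len(s_list) > m:            # limited memory: drop oldest pair
            s_list.pop(0); y_list.pop(0)
        x, g = x_new, g_new
    return x

# ill-conditioned convex quadratic f(x) = x1^2 + 10 * x2^2
f_q = lambda v: v[0] ** 2 + 10.0 * v[1] ** 2
grad_q = lambda v: [2.0 * v[0], 20.0 * v[1]]
xmin = lbfgs(f_q, grad_q, [5.0, -3.0])
```

The paper's modification corrects the stored pairs (s_k, y_k) so that, on quadratics with unit stepsizes, all stored difference vectors become mutually conjugate, which this plain version does not guarantee.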

Collaboration


Jan Vlček's top co-authors, all at the Academy of Sciences of the Czech Republic:

Ladislav Lukšan
Ctirad Matonoha
František Fendrych
Andrew Taylor
Ladislav Fekete
Irena Kratochvílová
Eva Marešová
Jiří Bulíř