
Publication


Featured research published by Alireza Nazemi.


Applied Mathematics and Computation | 2006

Neural network models and its application for solving linear and quadratic programming problems

Sohrab Effati; Alireza Nazemi

In this paper we consider two recurrent neural network models for solving linear and quadratic programming problems. The first model is derived from an unconstrained minimization reformulation of the program. The second model is obtained directly from the optimality conditions of the optimization problem. Using the energy function and the duality gap, we compare the convergence of these models. We also explore the existence and convergence of the trajectory and the stability properties of the neural network models. Finally, the effectiveness of the methods is shown in several numerical examples.
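As a rough illustration of the first model's idea (not the authors' exact dynamics), a gradient-flow network for an unconstrained quadratic can be simulated with forward Euler; the instance below is hypothetical:

```python
import numpy as np

# Hypothetical instance: minimize f(x) = 0.5 x^T Q x + c^T x (unconstrained).
Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])
c = np.array([-1.0, -1.0])

# Gradient-flow dynamics dx/dt = -grad f(x) = -(Q x + c),
# integrated with forward Euler as a stand-in for the network trajectory.
x = np.zeros(2)
h = 0.05                          # step size (plays the role of dt)
for _ in range(2000):
    x = x - h * (Q @ x + c)

x_star = np.linalg.solve(Q, -c)   # closed-form optimum for comparison
print(x, x_star)                  # both approach [0.2, 0.4]
```

The trajectory converges because the flow strictly decreases f along solutions, which is the role the energy function plays in the paper's analysis.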


Applied Mathematics and Computation | 2007

Application of projection neural network in solving convex programming problems

Sohrab Effati; Abbas Ghomashi; Alireza Nazemi

In this paper we show that the solution of a convex programming problem is equivalent to the solution of a projection formulation. We then introduce neural network models for solving the projection formulation and analyze their stability and convergence conditions. Simulations show that the introduced neural network is effective in solving convex programming problems.
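A minimal sketch of a projection-type network, on a hypothetical box-constrained instance (the projection operator and step sizes here are illustrative, not the paper's specification):

```python
import numpy as np

# Hypothetical instance:
#   minimize f(x) = (x1 - 2)^2 + (x2 + 1)^2  subject to  x in [0, 1]^2,
# whose optimum is the projection of (2, -1) onto the box: (1, 0).
def grad_f(x):
    return 2.0 * (x - np.array([2.0, -1.0]))

def proj(x):                       # projection onto the feasible box
    return np.clip(x, 0.0, 1.0)

# Projection dynamics dx/dt = proj(x - a * grad f(x)) - x, forward Euler.
x = np.array([0.5, 0.5])
a, h = 0.1, 0.2
for _ in range(500):
    x = x + h * (proj(x - a * grad_f(x)) - x)

print(x)   # approaches [1, 0]
```

An equilibrium of these dynamics satisfies x = proj(x - a grad f(x)), which is exactly the projection formulation of the optimality conditions.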


Journal of Computational and Applied Mathematics | 2011

A dynamical model for solving degenerate quadratic minimax problems with constraints

Alireza Nazemi

This paper presents a new neural network model for solving degenerate quadratic minimax (DQM) problems. On the basis of the saddle point theorem, optimization theory, convex analysis, Lyapunov stability theory and the LaSalle invariance principle, the equilibrium point of the proposed network is proved to be equivalent to the optimal solution of the DQM problem. It is also shown that the proposed network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. Several illustrative examples show the feasibility and efficiency of the proposed method.
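The saddle-point viewpoint can be illustrated with generic gradient descent-ascent dynamics on a toy quadratic minimax problem (a simple stand-in, not the paper's DQM model or its constrained setting):

```python
# Hypothetical saddle-point instance: L(x, y) = 0.5*x**2 + x*y - 0.5*y**2,
# whose unique saddle point is (0, 0).
# Descent-ascent flow: dx/dt = -dL/dx, dy/dt = +dL/dy, forward Euler.
x, y = 1.0, 1.0
h = 0.01
for _ in range(3000):
    dx = -(x + y)      # -dL/dx
    dy = (x - y)       # +dL/dy
    x, y = x + h * dx, y + h * dy

print(x, y)   # both spiral in toward 0
```

The equilibrium of the flow is the saddle point of L, mirroring the paper's identification of the network's equilibrium with the minimax solution.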


Applied Mathematics and Computation | 2005

A new method for solving a system of the nonlinear equations

Sohrab Effati; Alireza Nazemi

In this paper we use measure theory in the discrete case to solve a wide range of nonlinear equation systems. First, by defining an error function, we transform the problem into a discrete optimal control problem. The new problem is then modified into one consisting of the minimization of a linear functional over a set of Radon measures; the optimal measure is approximated by a finite combination of atomic measures, and the problem is converted approximately into a finite-dimensional nonlinear program. Finally, we obtain an approximate solution of the original problem, together with the path from the initial point to that approximate solution.


Journal of Computational and Applied Mathematics | 2012

A capable neural network model for solving the maximum flow problem

Alireza Nazemi; Farahnaz Omidi

This paper presents an optimization technique for solving the maximum flow problem, which arises in a wide variety of applications. On the basis of the Karush-Kuhn-Tucker (KKT) optimality conditions, a neural network model is constructed. The equilibrium point of the proposed neural network is then proved to be equivalent to the optimal solution of the original problem. It is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the maximum flow problem. Several illustrative examples show the feasibility and efficiency of the proposed method.
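For a concrete sense of the problem being solved, the optimal value of a small (hypothetical) max-flow instance can be checked with the classical Edmonds-Karp augmenting-path algorithm — a combinatorial baseline, not the paper's KKT-based network:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        # find the bottleneck along the path, then push flow
        v, bottleneck = t, float('inf')
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Hypothetical 4-node instance: source 0, sink 3.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))   # -> 5
```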


Engineering Applications of Artificial Intelligence | 2014

A neural network model for solving convex quadratic programming problems with some applications

Alireza Nazemi

This paper presents a capable neural network for solving strictly convex quadratic programming (SCQP) problems with general linear constraints. The proposed neural network model is stable in the sense of Lyapunov and can converge to an exact optimal solution of the original problem. A block diagram of the proposed model is also given. Several applicable examples further show the correctness of the results and the good performance of the model.
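For constrained quadratic programs, a common neurodynamic pattern is a primal-dual gradient flow on the Lagrangian; the sketch below uses that generic pattern on a hypothetical equality-constrained instance (it is a stand-in, not the paper's specific model):

```python
import numpy as np

# Hypothetical SCQP instance:
#   minimize x1^2 + x2^2  subject to  x1 + x2 = 1,
# optimum x* = (0.5, 0.5) with multiplier lambda* = -1.
Q = 2.0 * np.eye(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Primal-dual gradient flow on the Lagrangian:
#   dx/dt = -(Q x + A^T lam),   dlam/dt = A x - b
x = np.zeros(2)
lam = np.zeros(1)
h = 0.05
for _ in range(3000):
    x, lam = x - h * (Q @ x + A.T @ lam), lam + h * (A @ x - b)

print(x, lam)   # approaches [0.5, 0.5] and [-1.0]
```

The equilibrium of this flow is precisely a KKT point of the QP, which is why strict convexity yields convergence to the unique optimum.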


Engineering Applications of Artificial Intelligence | 2013

Solving general convex nonlinear optimization problems by an efficient neurodynamic model

Alireza Nazemi

In this paper, a neural network model is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory and the LaSalle invariance principle to solve general convex nonlinear programming (GCNLP) problems. Based on the saddle point theorem, the equilibrium point of the proposed neural network is proved to be equivalent to the optimal solution of the GCNLP problem. By employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. Simulation results show that the proposed neural network is feasible and efficient.


IMA Journal of Mathematical Control and Information | 2016

Solving fractional optimal control problems with fixed or free final states by Haar wavelet collocation method

Soleiman Hosseinpour; Alireza Nazemi

A numerical method using Haar wavelets for solving fractional optimal control problems (FOCPs) is studied. The fractional derivative in these problems is in the Caputo sense. The operational matrix of fractional Riemann–Liouville integration and the direct collocation method are considered. The proposed technique is applied to transform the state and control variables into non-linear programming (NLP) parameters at collocation points. An NLP solver can then be used to solve FOCPs. Illustrative examples are included to demonstrate the validity and applicability of the proposed method.
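One building block of such Haar collocation schemes is the Haar matrix: the basis sampled at collocation points, in which functions are expanded before the operational matrix of fractional integration is applied. A minimal sketch of that ingredient (the construction details here are the standard unnormalized Haar family, assumed rather than taken from the paper):

```python
import numpy as np

def haar_matrix(m):
    """m x m Haar matrix (m a power of two): row i holds the i-th
    (unnormalized) Haar function sampled at t_l = (l + 0.5)/m."""
    t = (np.arange(m) + 0.5) / m
    H = np.zeros((m, m))
    H[0] = 1.0                      # scaling function h_1(t) = 1
    i, j = 1, 0
    while i < m:
        for k in range(2 ** j):     # wavelets at scale j, shift k
            lo, mid, hi = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
            H[i, (t >= lo) & (t < mid)] = 1.0
            H[i, (t >= mid) & (t < hi)] = -1.0
            i += 1
        j += 1
    return H, t

m = 8
H, t = haar_matrix(m)
c = np.linalg.solve(H.T, t ** 2)      # coefficients of f(t) = t^2
print(np.allclose(H.T @ c, t ** 2))   # expansion is exact at collocation points
```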


Neurocomputing | 2015

A neural network method for solving support vector classification problems

Alireza Nazemi; Mehran Dehghan

This paper presents a recurrent neural network for support vector machine (SVM) learning in pattern classification, which arises in a wide variety of applications. The SVM learning problem in classification is first converted into an equivalent quadratic programming (QP) formulation, and then a recurrent neural network for SVM learning is proposed. The proposed neural network is guaranteed to obtain the optimal solution of support vector classification. It is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the QP problem. Several illustrative examples show the feasibility and efficiency of the proposed method.
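The QP in question is the SVM dual. As a rough sketch (not the paper's network), the dual of a bias-free linear SVM — so the only constraint is the box 0 <= alpha <= C — can be solved by projected gradient ascent on a hypothetical toy dataset:

```python
import numpy as np

# Toy 1-D data (hypothetical): the class is the sign of x.
X = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([-1.0, -1.0, 1.0, 1.0])
C = 10.0

# Bias-free SVM dual QP:
#   maximize  sum(alpha) - 0.5 * alpha^T Q alpha,   Q_ij = y_i y_j x_i x_j,
#   subject to 0 <= alpha <= C.
Q = np.outer(y * X, y * X)

# Projected gradient ascent as a discrete stand-in for the network dynamics.
alpha = np.zeros(4)
eta = 0.05
for _ in range(5000):
    alpha = np.clip(alpha + eta * (1.0 - Q @ alpha), 0.0, C)

w = np.sum(alpha * y * X)    # primal weight recovered from the dual
print(w, np.sign(w * X))     # w near 1; predictions match y
```

The points at x = +-1 end up with positive multipliers (the support vectors), while the farther points are driven to alpha = 0.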


Applied Mathematics and Computation | 2007

An applicable method for solving the shortest path problems

M. Zamirian; Mohammad Hadi Farahi; Alireza Nazemi

A theorem of Hardy, Littlewood, and Pólya is used for the first time to derive a variational form of the well-known shortest path problem; as a consequence of that theorem, the shortest path can be found via quadratic programming. In this paper, we use measure theory to solve this problem. The shortest path problem can be written as an optimal control problem; the resulting distributed control problem is then expressed in measure-theoretic form, in fact as an infinite-dimensional linear programming problem. The optimal measure representing the shortest path is approximated by the solution of a finite-dimensional linear programming problem.
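For comparison with the measure-theoretic LP approach, the optimal value of a small (hypothetical) shortest-path instance can be checked with Dijkstra's classical algorithm — a combinatorial baseline, not the paper's method:

```python
import heapq

def dijkstra(adj, s):
    """Classical shortest-path baseline: minimum cost from s to every
    reachable node, for nonnegative edge weights."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical 4-node instance.
adj = {'a': [('b', 1.0), ('c', 4.0)],
       'b': [('c', 2.0), ('d', 6.0)],
       'c': [('d', 3.0)]}
print(dijkstra(adj, 'a')['d'])   # -> 6.0  (a -> b -> c -> d)
```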
