Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Joydeep Dutta is active.

Publication


Featured research published by Joydeep Dutta.


Mathematical Programming | 2012

Is bilevel programming a special case of a mathematical program with complementarity constraints?

Stephan Dempe; Joydeep Dutta

Bilevel programming problems are often reformulated using the Karush–Kuhn–Tucker conditions of the lower-level problem, resulting in a mathematical program with complementarity constraints (MPCC). Clearly, the two problems are closely related. But the answer to the question posed is "No," even when the lower-level problem is a parametric convex optimization problem. This is not obvious and concerns local optimal solutions. We show that global optimal solutions of the MPCC correspond to global optimal solutions of the bilevel problem provided the lower-level problem satisfies Slater's constraint qualification. We also show by examples that this correspondence can fail if Slater's constraint qualification fails to hold at the lower level. When local solutions are considered, the relationship between the bilevel problem and its corresponding MPCC is more complicated. We also demonstrate the issues relating to local minima through examples.
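The reformulation the abstract refers to can be written out schematically. Assuming a lower-level problem with inequality constraints g(x, y) ≤ 0 (a generic setup, not the paper's exact notation), a standard sketch is:

```latex
% Bilevel problem: the lower level is a parametric optimization problem.
\min_{x,\,y} \; F(x,y)
\quad \text{s.t.} \quad
y \in \operatorname*{argmin}_{y'} \{\, f(x,y') : g(x,y') \le 0 \,\}.

% Replacing the lower-level argmin by its KKT conditions (justified when
% the lower-level problem is convex in y and Slater's condition holds)
% yields the MPCC:
\min_{x,\,y,\,\lambda} \; F(x,y)
\quad \text{s.t.} \quad
\nabla_y f(x,y) + \lambda^{\top} \nabla_y g(x,y) = 0, \qquad
0 \le \lambda \perp -g(x,y) \ge 0.
```

The paper's point is that a local solution of the MPCC need not correspond to a local solution of the bilevel problem: the multiplier λ adds degrees of freedom that are absent from the original problem.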


Optimization | 2007

New necessary optimality conditions in optimistic bilevel programming

Stephan Dempe; Joydeep Dutta; Boris S. Mordukhovich

The article is devoted to the study of the so-called optimistic version of bilevel programming in finite-dimensional spaces. Problems of this type are intrinsically nonsmooth (even for smooth initial data) and can be treated by using appropriate tools of modern variational analysis and generalized differentiation. Considering a basic optimistic model in bilevel programming, we reduce it to a one-level framework of nondifferentiable programs formulated via the (nonsmooth) optimal value function of the parametric lower-level problem in the original model. Using advanced formulas for computing basic subgradients of value/marginal functions in variational analysis, we derive new necessary optimality conditions for bilevel programs reflecting significant phenomena that have never been observed earlier. In particular, our optimality conditions for bilevel programs do not depend on the partial derivatives with respect to parameters of the smooth objective function in the parametric lower-level problem. We present efficient implementations of our approach and results obtained for bilevel programs with differentiable, convex, linear, and Lipschitzian functions describing the initial data of the lower-level and upper-level problems. This work is dedicated to the memory of Prof. Dr Alexander Moiseevich Rubinov.
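The one-level value-function reformulation mentioned here is, schematically (the constraint sets K(x) and Ω below stand in for whatever constraints the original model carries):

```latex
% Optimal value function of the parametric lower-level problem:
\varphi(x) \;=\; \min_{y} \{\, f(x,y) : y \in K(x) \,\}.

% One-level (nondifferentiable) reformulation of the optimistic bilevel
% problem: y is lower-level optimal exactly when f(x,y) \le \varphi(x).
\min_{x,\,y} \; F(x,y)
\quad \text{s.t.} \quad
f(x,y) \le \varphi(x), \qquad y \in K(x), \qquad x \in \Omega.
```

Even when f, F, and the constraint data are smooth, φ is typically nonsmooth, which is why the derivation relies on subgradient formulas for value/marginal functions rather than classical derivatives.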


Numerical Functional Analysis and Optimization | 2001

ON APPROXIMATE MINIMA IN VECTOR OPTIMIZATION

Joydeep Dutta; V. Vetrivel

Necessary and sufficient conditions are obtained for the existence of approximate minima in vector optimization problems. The notion of an approximate saddle point is introduced, and the relation between approximate saddle points and approximate minima is established.
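For orientation, the scalar prototype of an approximate minimum, together with a common vector-valued analogue, can be stated roughly as follows (the paper's exact definitions may differ):

```latex
% Scalar case: \bar{x} \in S is an \epsilon-minimum of f over S if
f(\bar{x}) \;\le\; f(x) + \epsilon \qquad \forall\, x \in S.

% Vector case (ordering cone C, fixed direction e \in \operatorname{int} C):
% \bar{x} is \epsilon e-efficient if no feasible value dominates
% f(\bar{x}) - \epsilon e, i.e.
\bigl( f(\bar{x}) - \epsilon e - C \setminus \{0\} \bigr) \cap f(S) \;=\; \emptyset.
```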


Mathematical Methods of Operations Research | 2006

Lagrangian conditions for vector optimization in Banach spaces

Joydeep Dutta; Christiane Tammer

We consider vector optimization problems on Banach spaces without convexity assumptions. Under the assumption that the objective function is locally Lipschitz we derive Lagrangian necessary conditions on the basis of Mordukhovich subdifferential and the approximate subdifferential by Ioffe using a non-convex scalarization scheme. Finally, we apply the results for deriving necessary conditions for weakly efficient solutions of non-convex location problems.
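The non-convex scalarization scheme alluded to is commonly realized by the Gerstewitz (Tammer–Weidner) functional; a rough sketch, for an ordering cone C and a direction e in its interior:

```latex
% Gerstewitz / Tammer--Weidner functional:
\varphi_{C,e}(y) \;=\; \inf \{\, t \in \mathbb{R} : y \in t\,e - C \,\}.

% \varphi_{C,e} is continuous and monotone with respect to C, which allows
% weakly efficient points \bar{x} to be characterized as scalar minimizers:
\varphi_{C,e}\bigl( f(x) - f(\bar{x}) \bigr) \;\ge\; 0 \;=\;
\varphi_{C,e}(0) \qquad \forall\, x \in S.
```

Because this scalarization needs no convexity of f, it pairs naturally with the Mordukhovich and Ioffe subdifferentials used in the paper.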


Congress on Evolutionary Computation | 2007

Finding trade-off solutions close to KKT points using evolutionary multi-objective optimization

Kalyanmoy Deb; Rahul Tewari; Mayur Dixit; Joydeep Dutta

Despite the widespread applicability of evolutionary optimization procedures over the past few decades, EA researchers still face criticism about the theoretical optimality of the obtained solutions. In this paper, we address this issue for problems for which gradients of objectives and constraints can be computed exactly, numerically, or through subdifferentials. We suggest a systematic procedure for analyzing a representative set of Pareto-optimal solutions for their closeness to satisfying the Karush-Kuhn-Tucker (KKT) conditions, which every Pareto-optimal solution must satisfy. The procedure involves either a least-squares solution or an optimum solution to a linear system of equations involving Lagrange multipliers. The procedure is applied to a number of differentiable and non-differentiable test problems and to a highly nonlinear engineering design problem. The results clearly show that EAs are capable of finding solutions close to theoretically optimal solutions in various problems. As a by-product, the error metric suggested in this paper can also be used as a termination condition for an EA application. Hopefully, this study will bring EAs and their research closer to classical optimization studies.
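The multiplier-fitting idea can be sketched in a few lines: given the gradient of an objective and the gradients of the active constraints at a candidate point, fit nonnegative Lagrange multipliers by least squares and report the leftover stationarity residual. The function and toy problem below are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np

def kkt_stationarity_error(grad_f, active_grads):
    """Least-squares proxy for the KKT stationarity residual.

    Fits multipliers lam >= 0 approximately minimizing
        || grad_f + sum_j lam_j * active_grads[j] ||
    and returns (residual norm, lam).  A sketch of the idea in the
    abstract; the paper's exact error metric may differ.
    """
    G = np.column_stack(active_grads)        # one column per active constraint
    lam, *_ = np.linalg.lstsq(G, -grad_f, rcond=None)
    lam = np.maximum(lam, 0.0)               # crude stand-in for a solve
                                             # constrained to lam >= 0
    return float(np.linalg.norm(grad_f + G @ lam)), lam

# Toy problem: min (x1-1)^2 + (x2-1)^2  s.t.  x1 + x2 <= 1.
# At the optimum x* = (0.5, 0.5): grad_f = (-1, -1), the constraint
# gradient is (1, 1), and lam = 1 drives the residual to zero.
err, lam = kkt_stationarity_error(np.array([-1.0, -1.0]),
                                  [np.array([1.0, 1.0])])
print(err)   # 0.0 up to round-off
```

Away from the optimum the residual is strictly positive, which is what makes it usable as an error metric or termination signal.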


Optimization | 2004

Convexifactors, generalized convexity and vector optimization

Joydeep Dutta; Suresh Chandra

In this article we study a recently introduced notion of non-smooth analysis, namely convexifactors. We study some properties of the convexifactors and introduce two new chain rules. A new notion of non-smooth pseudoconvex function is introduced and its properties are studied in terms of convexifactors. We also present some optimality conditions for vector minimization in terms of convexifactors.


Journal of Global Optimization | 2013

Approximate KKT points and a proximity measure for termination

Joydeep Dutta; Kalyanmoy Deb; Rupesh Tulshyan; Ramnik Arora

Karush–Kuhn–Tucker (KKT) optimality conditions are often checked to investigate whether a solution obtained by an optimization algorithm is a likely candidate for the optimum. In this study, we report that although the KKT conditions must all be satisfied at the optimal point, the extent of violation of the KKT conditions at points arbitrarily close to the KKT point is not smooth, thereby making the KKT conditions difficult to use directly to evaluate the performance of an optimization algorithm. This happens due to the complementary slackness condition associated with the KKT optimality conditions. To overcome this difficulty, we define modified
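The non-smoothness can be seen on a one-variable toy problem. The example below is hypothetical, constructed only to illustrate the jump that exact complementary slackness induces in a naive violation measure near the optimum:

```python
def naive_kkt_violation(x):
    """Naive KKT violation for the toy problem
        min x^2   s.t.   g(x) = 1 - x <= 0,
    whose optimum is x* = 1 with multiplier lam = 2.

    Complementarity lam * g(x) = 0 is enforced exactly: whenever the
    constraint is inactive (g(x) < 0) it forces lam = 0, leaving the
    full stationarity error |2x - lam| = |2x|.  Illustrative only.
    """
    g = 1.0 - x
    if g > 0.0:                 # infeasible: report constraint violation
        return g
    if g == 0.0:                # active constraint: lam = 2x kills the error
        return 0.0
    return abs(2.0 * x)         # inactive: complementarity forces lam = 0

# The measure is 0 at x = 1 but jumps to about 2 just to the right of it,
# so it cannot serve directly as a smooth proximity measure.
```

Relaxing the complementarity requirement, as approximate-KKT-point definitions do, removes this jump and yields a measure that decays continuously as the candidate approaches the optimum.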


Journal of Optimization Theory and Applications | 2002

Convexifactors, generalized convexity, and optimality conditions

Joydeep Dutta; Suresh Chandra


Operations Research Letters | 2008

Generalized Nash equilibrium problem, variational inequality and quasiconvexity

Didier Aussel; Joydeep Dutta



Optimization | 2004

Monotonic analysis over cones: II

Joydeep Dutta; Juan Enrique Martínez-Legaz; Alexander M. Rubinov

Collaboration


Dive into Joydeep Dutta's collaboration.

Top Co-Authors


Kalyanmoy Deb

Michigan State University


Stephan Dempe

Freiberg University of Mining and Technology


V. Vetrivel

Indian Institute of Technology


Marius Durea

Alexandru Ioan Cuza University


Rupesh Tulshyan

Indian Institute of Technology Kanpur


S. Nanda

Indian Institute of Technology Kharagpur


C. Charitha

University of Göttingen
