Publication


Featured research published by Seiichi Iwamoto.


Journal of Mathematical Analysis and Applications | 1977

Inverse theorem in dynamic programming III

Seiichi Iwamoto

Recently Iwamoto [1, 2] has established the Inverse Theorem in Dynamic Programming by a dynamic programming method. He has shown that the maximum-value function of the main problem is the inverse function of the minimum-value function of the inverse problem, provided that the objective function and the constraint function satisfy the dynamic programming structure, namely recursiveness with monotonicity. In this paper we investigate the relation between the maximum-point (the point which attains the maximum) function of the main problem and the minimum-point (the point which attains the minimum) function of the inverse problem, as well as the inverse relation between the maximum-value and minimum-value functions. Our approach to, and statement of, Inverse Theorem II in Dynamic Programming are slightly different from those in [1, 2]. Our inverse theorem claims that the solution (maximum-value and maximum-point) functions of the main problem characterize the solution (minimum-value and minimum-point) functions of the inverse problem in an inverse sense, and vice versa. We also give several corollaries in Section 2. Section 3 is devoted to several examples which verify the inverse theorem and its corollaries. The last section comments on related topics [4, 5].
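The inverse relation described in this abstract can be illustrated on a one-variable toy problem. With f(x) = x and g(x) = x² on x ≥ 0 (illustrative monotone choices, not taken from the paper), the maximum-value function of the main problem and the minimum-value function of the inverse problem are inverse functions of each other; a brute-force grid search makes this visible:

```python
# Main problem:    M(c) = max { f(x) : g(x) <= c, x >= 0 }
# Inverse problem: m(v) = min { g(x) : f(x) >= v, x >= 0 }
# Illustrative monotone choices (not from the paper): f(x) = x, g(x) = x**2,
# so M(c) = sqrt(c), m(v) = v**2, and m is the inverse function of M.

XS = [i / 1000 for i in range(5001)]  # brute-force grid on [0, 5]

def M(c):
    return max(x for x in XS if x ** 2 <= c)

def m(v):
    return min(x ** 2 for x in XS if x >= v)

print(M(4.0))      # 2.0
print(m(M(4.0)))   # 4.0 -- m undoes M
```

Here M(c) = √c and m(v) = v², so m(M(c)) = c: the inverse relation that the theorem establishes in general for recursive, monotone objective and constraint functions.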


Archive | 2002

Controlled Markov Chains with Utility Functions

Seiichi Iwamoto; Takayuki Ueno; Toshiharu Fujita

In this paper we consider finite-stage stochastic optimization problems with a utility criterion, that is, the stochastic evaluation of an associative reward through a utility function. We optimize the expected value of the utility criterion not in the class of Markov policies but in the class of general policies. We show that, by expanding the state space, an invariant imbedding approach yields a recursive relation between two adjacent optimal value functions. We show that the utility problem with a general policy is equivalent to a terminal problem with a Markov policy on the augmented state space. Finally, it is shown that the utility problem has an optimal policy in the class of general policies on the original state space.
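The state-expansion step can be sketched on a tiny model: augmenting the state with the accumulated reward turns the non-additive expected-utility criterion into a terminal-reward Markov problem on the pair (state, accumulated reward). All model data and the exponential utility below are illustrative assumptions, not taken from the paper.

```python
import math
from functools import lru_cache

STATES, ACTIONS, N = (0, 1), (0, 1), 3     # illustrative two-state model
# p[s][a] = transition probabilities to states (0, 1); r[s][a] = stage reward
p = {0: {0: (0.8, 0.2), 1: (0.3, 0.7)},
     1: {0: (0.5, 0.5), 1: (0.1, 0.9)}}
r = {0: {0: 1.0, 1: 2.0},
     1: {0: 0.5, 1: 3.0}}

def utility(w):            # terminal utility of the accumulated reward;
    return -math.exp(-w)   # non-linear, so the criterion is not additive

@lru_cache(maxsize=None)
def value(n, s, w):
    """Optimal expected utility from stage n in augmented state (s, w).

    On the augmented state the problem is a terminal-reward Markov
    decision process, so a Markov policy on (s, w) suffices."""
    if n == N:
        return utility(w)
    return max(sum(q * value(n + 1, t, w + r[s][a])
                   for t, q in enumerate(p[s][a]))
               for a in ACTIONS)

v = value(0, 0, 0.0)
print(v)   # optimal expected utility from state 0, nothing accumulated yet
```

The recursion runs over (s, w) pairs exactly as a terminal problem on the augmented state space; on the original state space alone no such Markov recursion exists for this criterion.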


Applied Mathematics and Computation | 2001

An optimistic decision-making in fuzzy environment

Toshiharu Fujita; Seiichi Iwamoto

Bellman and Zadeh originated three systems of multistage decision processes in a fuzzy environment: deterministic, stochastic and fuzzy systems. In this article, we consider an optimization problem with an optimistic criterion on a fuzzy system. By making use of a minimization-maximization expectation in the fuzzy environment, we derive a recursive equation for the fuzzy decision process through an invariant imbedding approach. Illustrating a three-state, two-decision, two-stage model, we give an optimal solution through dynamic programming. The optimal solution is also verified by the method of the multistage fuzzy decision tree-table.
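The recursive equation for the classical Bellman–Zadeh maximin grade on a deterministic system can be sketched as follows; the paper's optimistic criterion on a fuzzy system is a variant of this scheme. The model mirrors the three-state, two-decision, two-stage size of the paper's example, but every number below is an illustrative assumption.

```python
# Maximin recursion in a fuzzy environment (Bellman–Zadeh style):
#   v_n(s) = max_a min( mu(s, a), v_{n+1}(f(s, a)) ),   v_N(s) = goal(s).
STATES, DECISIONS, N = (0, 1, 2), (0, 1), 2

f = {s: {a: (s + a + 1) % 3 for a in DECISIONS} for s in STATES}  # dynamics
mu = {0: {0: 0.9, 1: 0.6},        # fuzzy-constraint membership mu(s, a)
      1: {0: 0.7, 1: 0.8},
      2: {0: 0.4, 1: 1.0}}
goal = {0: 0.3, 1: 0.6, 2: 1.0}   # fuzzy-goal membership on the final state

def value(n, s):
    if n == N:
        return goal[s]
    return max(min(mu[s][a], value(n + 1, f[s][a])) for a in DECISIONS)

print(value(0, 0))   # 0.7: the best attainable membership grade from state 0
```

The membership grade of a path is the minimum over its stages, and dynamic programming maximizes that grade stage by stage; a decision tree-table for this model enumerates the same 2² paths by hand.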


Journal of Mathematical Analysis and Applications | 1970

Stopped decision processes on complete separable metric spaces

Nagata Furukawa; Seiichi Iwamoto

In this paper we treat combined problems of optimal control and optimal stopping in discrete-time stochastic systems. These combined problems are motivated by practical use. In dynamic programming problems, or more generally in multistage stochastic decision problems, even in the infinite-horizon case we are often in the situation that it is forced, or profitable, to stop the choice of control actions some day. Such combined problems come under the optimal control problems if we define anew a decision process having the union of a “fictitious” absorbing state and the original state space as a new state space (cf. [7]). But a control policy in terms of the newly defined decision process may fail to stop with probability one in terms of the original decision process. Therefore, if interest is restricted to control policies that stop with probability one, the approach by the newly defined decision process may be inappropriate. Roughly speaking, a stopped decision process is a decision process which stops with probability one, and a stopped policy is a policy associated with a stopping time which is finite with probability one (for precise definitions, see Section 2). In this paper we study existence theorems for an optimal stopped policy under several optimality criteria in multistage stochastic decision problems; the methods of [3], [4], [5], and [9] are available there. In Section 2 we give the notation and definitions used throughout this paper, and in Section 3 we prepare the fundamental lemmas used in Section 4. Section 4 is devoted to the existence of an optimal stopping time associated with a policy. The study of such an optimal stopping time is not our main object, but a preparatory consideration for the existence of the optimal stopped policy in the subsequent sections. In Section 5, we give existence theorems for a (p, c)-optimal stopped policy and for a (p, E, S)-optimal stopped policy.


Journal of Mathematical Analysis and Applications | 1983

Reverse function, reverse program, and reverse theorem in mathematical programming

Seiichi Iwamoto

First, our main result is the reverse theorem in mathematical programming problems whose objective and constraint functions satisfy “separability” and “strict monotonicity” in the sense of dynamic programming. Second, the reverse functions of such functions are defined anew, and their properties and explicit forms are specified. The definition of the reverse function is a sequentially parametric extension of the usual definition of the inverse function. Third, it is shown that the reverse operation commutes with the dual and inverse operations provided that dual and inverse theorems hold, respectively. Finally, three multiconstraint main programs together with their reverse ones (including a linear main program together with its reverse, dual and reverse-dual ones), five single-constraint main programs together with their inverse, reverse and reverse-inverse ones, and three unconditional main programs with their reverse ones are analytically solved by applications of the reverse theorem.


Applied Mathematics and Computation | 2001

A class of dual fuzzy dynamic programs

Seiichi Iwamoto

In this paper we propose a large class of fuzzy dynamic programs. Using the notion of a dual binary relation, we define the dual fuzzy dynamic program in the class. We establish two duality theorems between primal and dual fuzzy dynamic programs: one for the two-parametric recursive equations, the other for the nonparametric ones. We specify the maximum-minimum and minimum-minimum processes in a fuzzy environment and the multiplicative-multiplicative process in a quasi-stochastic environment. It is shown that the duality theorems hold between the primal and dual programs.


Journal of Mathematical Analysis and Applications | 1979

Some operations on dynamic programmings with one-dimensional state space

Seiichi Iwamoto

From the practical viewpoint, dynamic programming (DP), namely Bellman’s Principle of Optimality [1], has been applied in engineering, economics and operations research. On the other hand, its theoretical aspect has been analyzed by Mitten [7], Nemhauser [8] and others. Nevertheless, it seems that many researchers have paid attention to “a” DP itself in each individual case. In this paper we are concerned with a class of dynamic programmings with one-dimensional state space. The n-th feasible action space A_n(s_n) at state s_n is assumed to be independent of s_n, namely, A_n(s_n) = A_n for all s_n. We focus our attention on the relationships between these DPs. One-dimensionality of the state space enables us to develop an algebraic theory of DP. Algebraic or automaton-like operations such as inverse, reversal, composition, concatenation, maximum and minimum are introduced on these DPs. Section 2 defines the fundamental operations on the class of all strictly increasing functions from [0, ∞) onto [0, ∞). The properties of these operations suggest the results of Section 3. Our main results are the Inverse Theorem, Reverse Theorem and Decomposition Theorem: the first is a version of the author’s Inverse Theorem [2-6]; the second is new and has broad applicability; the third is a refinement of Nemhauser’s decomposition or Mitten’s composition. Other results are interesting algebraic relations between DPs generated by the above operations (Section 3). Illustrating a simple DP, the last section applies the Inverse Theorem and Reverse Theorem.
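The setting of Section 2 — strictly increasing functions from [0, ∞) onto [0, ∞) — already shows the algebraic flavor: composition and inversion interact via (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹. The bisection inverter below is a generic numerical sketch under that monotonicity assumption, not the paper's construction.

```python
# Operations on strictly increasing functions from [0, inf) onto [0, inf):
# composition and (numerical) inversion.  A generic sketch under the
# monotonicity assumption, not the paper's construction.

def compose(g, f):
    return lambda x: g(f(x))

def inverse(f, lo=0.0, hi=1e6, tol=1e-10):
    """Invert a strictly increasing f by bisection on [lo, hi]."""
    def f_inv(y):
        a, b = lo, hi
        while b - a > tol:
            m = (a + b) / 2
            if f(m) < y:
                a = m
            else:
                b = m
        return (a + b) / 2
    return f_inv

f = lambda x: 2 * x          # strictly increasing on [0, inf)
g = lambda x: x ** 2 + x     # strictly increasing on [0, inf)

h_inv = inverse(compose(g, f))            # (g o f)^{-1}
alt   = compose(inverse(f), inverse(g))   # f^{-1} o g^{-1}
print(h_inv(6.0), alt(6.0))   # both close to 1.0, since g(f(1)) = 6
```

In DP terms, composition plays the role of chaining stages, and the identity above is the one-line prototype of the relation between a multistage main problem and its inverse.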


Archive | 2001

Recursive method in stochastic optimization under compound criteria

Seiichi Iwamoto

In this paper we propose a recursive method for stochastic optimization problems with compound criteria. By introducing four types of policy (Markov, general, primitive and expanded Markov), we establish an equivalence among three policy classes (general, expanded Markov and primitive). It is shown that there exists an optimal policy in the general class. Further, we apply this result to range, ratio and variance problems. We derive both a forward recursive formula for past-value sets and a backward recursive formula for value functions. The class of compound criteria is broad enough for economic decision processes.
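One of the compound criteria, the range of stage rewards, can be sketched on a deterministic system: augmenting the state with the running minimum and maximum reward makes the non-additive range criterion recursive, in the spirit of the expanded Markov policies above. All model data below are illustrative assumptions.

```python
# Deterministic system; criterion = range of the stage rewards along the path.
# The augmented state (s, lo, hi) carries the running min and max reward,
# which makes the non-additive range criterion recursive.  Illustrative data.
from functools import lru_cache

STATES, ACTIONS, N = (0, 1), (0, 1), 3
f = {0: {0: 1, 1: 0}, 1: {0: 0, 1: 1}}          # dynamics
r = {0: {0: 2.0, 1: 5.0}, 1: {0: 1.0, 1: 4.0}}  # stage rewards

@lru_cache(maxsize=None)
def best_range(n, s, lo, hi):
    """Minimal achievable reward range from stage n in augmented state."""
    if n == N:
        return hi - lo
    return min(best_range(n + 1, f[s][a],
                          min(lo, r[s][a]), max(hi, r[s][a]))
               for a in ACTIONS)

print(best_range(0, 0, float("inf"), float("-inf")))   # 0.0
```

Here the optimum is 0.0: choosing action 1 in state 0 keeps the system in state 0 with reward 5.0 at every stage. A forward pass enumerating the reachable (lo, hi) pairs would play the role of the past-value sets.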


Journal of Mathematical Analysis and Applications | 1985

Sequential minimaximization under dynamic programming structure

Seiichi Iwamoto

We study the sequential operation of the monotone nonlinear operators min_{x_i} and max_{y_j} on a real-valued function f(x_1, ..., x_i, ..., x_M, y_1, ..., y_j, ..., y_N), where M, N = 0, 1, .... In the case M = 0 or N = 0, the problem reduces to dynamic programming.


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 1998

Decision-making in fuzzy environment: a survey from stochastic decision process

Seiichi Iwamoto

Interpreting the stochastic systems in a fuzzy environment given by Bellman and Zadeh (1970) within the framework of stochastic decision processes, we present two approaches to the multi-stage stochastic decision process: one is an invariant embedding method, and the other is an a posteriori conditional decision process.

Collaboration


Dive into Seiichi Iwamoto's collaboration.

Top Co-Authors

Toshiharu Fujita

Kyushu Institute of Technology


Yutaka Kimura

Akita Prefectural University


Akifumi Kira (吉良 知文)

Akita Prefectural University
