Publication


Featured research published by Keiki Takadama.


Artificial Life and Robotics | 1998

Learning model for adaptive behaviors as an organized group of swarm robots

Keiki Takadama; Koichiro Hajiri; Tatsuya Nomura; Katsunori Shimohara; Michio Okada; Shinichi Nakasuka

This paper describes a novel organizational learning model for multiple adaptive robots. In this model, robots acquire their own appropriate functions through local interactions among their neighbors, and get out of deadlock situations without explicit control mechanisms or communication methods. Robots also complete given tasks by forming an organizational structure, and improve their organizational performance. We focus on the emergent processes of collective behaviors in multiple robots, and discuss how to control these behaviors with only local evaluation functions, rather than with a centralized control system. Intensive simulations of truss construction by multiple robots gave the following experimental results: (1) robots in our model acquire their own appropriate functions and get out of deadlock situations without explicit control mechanisms or communication methods; (2) robots form an organizational structure which completes given tasks in fewer steps than are needed with a centralized control mechanism.
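As a rough illustration of the idea of acquiring functions through purely local evaluation, the toy sketch below (my own construction, not the paper's model) lets each robot re-pick one of a few assumed task roles based only on what its neighbors are currently doing, with occasional random re-selection as a crude stand-in for the deadlock-escape behavior described above.

```python
import random

# Toy sketch, not the paper's model: each robot re-picks one of a few assumed
# task roles using only a local evaluation over its neighbours' current roles,
# with occasional random re-selection as a stand-in for deadlock escape.

FUNCTIONS = ["carry", "join", "support"]   # hypothetical roles for truss construction

def local_eval(candidate, neighbour_fns):
    """Favour roles that are under-represented among the neighbours."""
    return -neighbour_fns.count(candidate)

def step(assignments, neighbours, epsilon=0.1):
    new = {}
    for robot, nbrs in neighbours.items():
        neighbour_fns = [assignments[n] for n in nbrs]
        if random.random() < epsilon:      # random re-selection breaks deadlocks
            new[robot] = random.choice(FUNCTIONS)
        else:                              # greedy choice under the local evaluation
            new[robot] = max(FUNCTIONS, key=lambda f: local_eval(f, neighbour_fns))
    return new

neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
assignments = {r: random.choice(FUNCTIONS) for r in neighbours}
for _ in range(20):
    assignments = step(assignments, neighbours)
print(assignments)   # neighbouring robots tend to end up with different roles
```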


IEEE International Conference on Evolutionary Computation | 1998

Multiagent reinforcement learning with organizational-learning oriented classifier system

Keiki Takadama; S. Nakasuka; T. Terano

The organizational-learning oriented classifier system (OCS) is a new architecture we have proposed as an evolutionary computational model, and we have shown its effectiveness on large-scale printed circuit board (PCB) redesign problems in computer-aided design (CAD). This paper proposes a novel reinforcement learning method for multiagent systems based on OCS, aimed at more practical engineering use. To validate the effectiveness of our method, we have conducted experiments on real-scale PCB design problems for electric appliances. The experimental results suggest that: (1) our method finds feasible solutions of the same quality as those produced by human experts; and (2) the solutions are globally better than those of conventional reinforcement learning methods with regard to both total wiring length and number of iterations.


IWLCS '00 Revised Papers from the Third International Workshop on Advances in Learning Classifier Systems | 2000

Learning Classifier Systems Meet Multiagent Environments

Keiki Takadama; Takao Terano; Katsunori Shimohara

An Organizational-learning oriented Classifier System (OCS) is an extension of Learning Classifier Systems (LCSs) to multiagent environments, introducing the concepts of organizational learning (OL) from organization and management science. Unlike conventional research on LCSs, which mainly focuses on single-agent environments, OCS has an architecture for addressing multiagent environments. Through intensive experiments on a complex scalable domain, the following implications have been revealed: (1) OCS finds good solutions at small computational costs in comparison with conventional LCSs, namely the Michigan and Pittsburgh approaches; (2) the learning mechanisms at the organizational level contribute to improving performance in multiagent environments; (3) an estimation of environmental situations and utilization of records of past situations/actions must be implemented at the organizational level to cope with non-Markov properties in multiagent environments.
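For readers unfamiliar with the Michigan-style LCSs the abstract compares against, the following minimal sketch shows how a single agent's condition/action/strength rules select an action for a bit-string state via roulette selection; the rule encoding and example state are illustrative assumptions of mine, not details from the paper.

```python
import random

# Minimal Michigan-style classifier step, shown only as background for the
# comparison above; the bit-string state, rule encoding, and roulette selection
# here are my own illustration, not details taken from the paper.

def matches(condition, state):
    """A condition matches when every position is '#' or equals the state bit."""
    return all(c in ("#", s) for c, s in zip(condition, state))

def act(rules, state):
    """Pick among matching (condition, action, strength) rules by roulette selection."""
    match_set = [r for r in rules if matches(r[0], state)]
    if not match_set:
        return None
    pick = random.uniform(0, sum(r[2] for r in match_set))
    for condition, action, strength in match_set:
        pick -= strength
        if pick <= 0:
            return action
    return match_set[-1][1]

rules = [("1#0", "push", 0.6), ("##0", "wait", 0.3), ("10#", "push", 0.9)]
print(act(rules, "100"))
```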


Pacific Rim International Conference on Artificial Intelligence | 1998

Analyzing the Roles of Problem Solving and Learning in Organizational-Learning Oriented Classifier System

Keiki Takadama; Shinichi Nakasuka; Takao Terano

This paper analyzes the roles of problem solving and learning in the Organizational-learning oriented Classifier System (OCS) from the viewpoint of organizational learning in organization and management science, and validates the effectiveness of these roles through experiments on a large-scale printed circuit board (PCB) re-design problem in computer-aided design (CAD). OCS is a novel multiagent-based architecture composed of the following four mechanisms: (1) reinforcement learning, (2) rule generation, (3) rule exchange, and (4) organizational knowledge utilization. We discuss how these four mechanisms work, respectively, as individual performance/concept learning and organizational performance/concept learning in the sense of organization and management science. Intensive experiments on re-design problems for real-scale PCBs suggest that the four learning mechanisms at the individual and organizational levels contribute to finding not only feasible part placements in fewer iterations but also shorter total wiring lengths than those obtained by human experts.
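The skeleton below lays out the four mechanisms listed above as methods of one agent class, annotated with the individual/organizational performance/concept reading given in the abstract; the method bodies are placeholders of my own, not the actual OCS implementation.

```python
import random

# Structural sketch only: the four OCS mechanisms named above as methods of one
# agent, annotated with the individual/organizational performance/concept
# reading from the abstract; the bodies are placeholders of my own, not OCS.

class OCSAgent:
    def __init__(self, n_rules=5):
        self.rules = {f"rule{i}": random.random() for i in range(n_rules)}  # rule id -> strength

    # (1) reinforcement learning -- individual performance learning
    def reinforcement_learning(self, fired_rule, reward):
        self.rules[fired_rule] += reward

    # (2) rule generation -- individual concept learning (create a rule when none applies)
    def rule_generation(self):
        self.rules[f"rule{len(self.rules)}"] = 0.5

    # (3) rule exchange -- organizational performance learning (share strong rules)
    def rule_exchange(self, other, k=2):
        strongest = sorted(self.rules.items(), key=lambda kv: kv[1], reverse=True)[:k]
        other.rules.update(strongest)

    # (4) organizational knowledge utilization -- organizational concept learning
    def use_organizational_knowledge(self, shared_rules):
        self.rules.update(shared_rules)

a, b = OCSAgent(), OCSAgent()
a.reinforcement_learning("rule0", reward=0.3)
a.rule_exchange(b)
b.use_organizational_knowledge({"seed_rule": 0.8})
```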


IEEE International Conference on Intelligent Processing Systems | 1997

A computational group dialogue model with organizational learning

Keiki Takadama; Koichiro Hajiri; Tatsuya Nomura; M. Okada; K. Shinohara; Shinichi Nakasuka

This paper proposes a computational group dialogue model with organizational learning, in which agents adapt to their group through communication. As dementia becomes a serious social problem, it is necessary to apply a model equipped with such a group-adaptation mechanism to groups of dementia patients, so that patients have the chance to adapt to their group through communication. In the simulations, agents communicate with the other agents in their group and learn their own dialogue strategies for adapting to the group, establishing their own opinions through communication.
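As a very loose sketch of agents adapting to a group through communication (entirely my own toy construction, not the paper's dialogue model), each agent below holds a numeric opinion and a per-agent step size standing in for a dialogue strategy; repeated exchanges pull the group toward a shared range of opinions.

```python
import random

# Very loose toy sketch, entirely my own: each agent has a numeric "opinion" and
# a per-agent step size standing in for its dialogue strategy; every exchange
# nudges the listener toward the speaker, so agents gradually adapt to the group.

def dialogue_round(opinions, step_sizes):
    speaker, listener = random.sample(range(len(opinions)), 2)
    opinions[listener] += step_sizes[listener] * (opinions[speaker] - opinions[listener])

opinions = [random.uniform(-1.0, 1.0) for _ in range(5)]
step_sizes = [random.uniform(0.05, 0.3) for _ in range(5)]
for _ in range(200):
    dialogue_round(opinions, step_sizes)
print([round(o, 2) for o in opinions])   # opinions drift toward a shared range
```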


Advanced Robotics | 1997

Organizational learning model for adaptive collective behaviors in multiple robots

Keiki Takadama; Koichiro Hajiri; Tatsuya Nomura; Katsunori Shimohara; Shinichi Nakasuka

This paper proposes a novel organizational learning model in which multiple robots acquire their own functions for adaptive collective behaviors through local interactions among their neighbors, and form an organizational structure to complete given tasks without global explicit control mechanisms or communication methods. We focus on the emergent processes in which robots dynamically form an organizational structure by acquiring appropriate functions to complete given tasks effectively, and on how organizational knowledge supports robots in reforming their organizational structure. Intensive simulations of truss construction by multiple robots suggest the following: (1) robots in our model acquire their own appropriate functions without global explicit control mechanisms or communication methods, and form an organizational structure that completes given tasks in fewer steps than a centralized control system; and (2) organizational ...


Systems and Computers in Japan | 1999

An approach to printed circuit board design with organizational learning agents

Keiki Takadama; Shinichi Nakasuka; Takao Terano

This paper presents an organizational learning classifier system supporting multiple agents that act while varying their decision criteria (local evaluation functions). In addition, a new method of multiagent reinforcement learning is developed for this system and applied to part placement in printed circuit board design problems in the CAD domain. In this field, many global optimization techniques have been proposed to reduce design cycles and production costs; such techniques, however, could not offer more efficient part placement than human experts, so the most important final decisions often had to be left to humans. With the system proposed in this study, by contrast, part placement is not treated from a global viewpoint; instead, decisions about the placement of individual parts are made by the agents in charge through local interactions. Results of applying the proposed system to a full-scale real-life problem suggest the following: (1) our system offered practicable solutions that were competitive with those of human experts; and (2) our system outperformed existing reinforcement learning methods in terms of both convergence speed and total wiring length. © 1999 Scripta Technica, Syst Comp Jpn, 30(11): 33–42, 1999
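To make the contrast with global optimization concrete, here is a hedged toy sketch, under assumptions of my own (a small grid, a hypothetical three-part netlist, and greedy local search in place of the paper's multiagent reinforcement learning), in which each part's agent judges candidate positions only by the wiring distance to the parts it is connected to.

```python
import random

# Hedged toy sketch: one agent per part places its part on a small grid, judging
# candidate cells only by wiring distance to connected parts. The grid size, the
# three-part netlist, and the greedy search (in place of the paper's multiagent
# reinforcement learning) are assumptions of mine.

GRID = 8
NETS = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}   # hypothetical connectivity

def local_cost(pos, part, others):
    """Manhattan wiring length from pos to the already-placed connected parts."""
    return sum(abs(pos[0] - others[o][0]) + abs(pos[1] - others[o][1])
               for o in NETS[part] if o in others)

placement = {p: (random.randrange(GRID), random.randrange(GRID)) for p in NETS}
for _ in range(50):
    part = random.choice(list(NETS))               # agents take turns acting
    others = {k: v for k, v in placement.items() if k != part}
    candidates = [(random.randrange(GRID), random.randrange(GRID)) for _ in range(5)]
    candidates.append(placement[part])
    placement[part] = min(candidates, key=lambda pos: local_cost(pos, part, others))
print(placement)   # connected parts drift toward each other without a global objective
```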


International Conference on Industrial Electronics, Control and Instrumentation | 2000

On the evolution of interaction rules in a canonical auction market with simple bidding agents

Norberto Eiji Nawa; Keiki Takadama; Katsunori Shimohara; Osamu Katai

Auction markets have continually attracted attention in the field of economics due to their interesting properties as trading institutions. The recent boom of electronic markets over the Internet has also sparked related research in the field of artificial intelligence (AI). The main aspects investigated concerning electronic markets are the construction of automated negotiating agents and the design of mechanisms and protocol rules to coordinate their interaction. In this paper, we investigate the construction, by a genetic algorithm, of rules to coordinate the bidders' interaction in a canonical auction market. Auction rules have been deeply investigated in scenarios with human actors, where commonsense protocols naturally prevail, restricting the possibilities of using idiosyncratic interaction procedures. By means of computational experiments, we show that in a hypothetical situation where the bidders follow very simple strategies, non-conventional auction rules can perform better than conventional protocols.
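The toy sketch below gives the flavor of evolving auction rules against simple bidders; the two-parameter rule encoding (reserve fraction and first- versus second-price payment), the fixed-markdown bidders, and the revenue-based fitness are all assumptions of mine rather than the paper's setup.

```python
import random

# Toy sketch of evolving auction rules against simple bidders. The rule encoding
# (reserve fraction plus first- vs second-price payment), the fixed-markdown
# bidders, and the revenue fitness are my own assumptions, not the paper's setup.

def simulate(rule, n_bidders=5):
    reserve_frac, second_price = rule
    values = [random.uniform(0.0, 1.0) for _ in range(n_bidders)]
    bids = sorted((0.8 * v for v in values), reverse=True)   # naive fixed-markdown bids
    if bids[0] < reserve_frac:
        return 0.0                                           # no sale below the reserve
    return max(bids[1], reserve_frac) if second_price else bids[0]

def fitness(rule, rounds=200):
    return sum(simulate(rule) for _ in range(rounds)) / rounds

population = [(random.random(), random.random() < 0.5) for _ in range(20)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [(min(1.0, max(0.0, r + random.gauss(0, 0.05))), sp ^ (random.random() < 0.1))
                for r, sp in random.choices(parents, k=10)]
    population = parents + children
print(max(population, key=fitness))   # the surviving rule after 30 generations
```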


Applied Intelligence | 1998

Printed Circuit Board Design via Organizational-Learning Agents

Keiki Takadama; Shinichi Nakasuka; Takao Terano


Archive | 1996

Emergent Strategy for Adaptive Behaviors with Self-Organizational Individuality

Keiki Takadama; Shinichi Nakasuka
