Atsushi Ueno
Nara Institute of Science and Technology
Publications
Featured research published by Atsushi Ueno.
computational intelligence in robotics and automation | 2003
Vachirasuk Setalaphruk; Atsushi Ueno; Izuru Kume; Yasuyuki Kono; Masatsugu Kidode
This paper presents a new robot navigation system that can operate on a sketch floor map provided by a user. The sketch map is similar to the floor plans shown at building entrances; it does not contain accurate metric information or details such as obstacles. The system enables a user to give navigational instructions to a robot by interactively providing a floor map and pointing out goal positions on the map. Since metric information is unavailable, navigation is done using an augmented topological map that describes the structure of the corridors extracted from the given floor map. Multiple hypotheses about the robot's location are maintained and updated during navigation in order to cope with sensor aliasing and landmark-matching failures caused by factors such as unknown obstacles inside the corridors.
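The navigation scheme lends itself to a compact illustration. Below is a minimal sketch, not the paper's actual implementation, of multi-hypothesis localization on a topological corridor map; the node structure, the `signature` field, and the tolerant matching rule are assumptions introduced for illustration.

```python
# Minimal sketch (not the paper's implementation) of multi-hypothesis
# localization on a topological corridor map. Node names, the `signature`
# field, and the tolerant matching rule are illustrative assumptions.

class Node:
    def __init__(self, name, signature, neighbors):
        self.name = name            # e.g. "junction A" on the sketch map
        self.signature = signature  # coarse local description, e.g. ("wall", "open", "open", "wall")
        self.neighbors = neighbors  # names of adjacent nodes in the topological map

def update_hypotheses(graph, hypotheses, observed_signature, moved):
    """Keep every node that could explain the current observation.

    graph: dict mapping node name -> Node; hypotheses: set of node names;
    moved: whether the robot travelled to the next junction since the last update.
    """
    candidates = set()
    for name in hypotheses:
        # If the robot moved, each hypothesis propagates to its neighbors;
        # otherwise it stays where it is.
        successors = graph[name].neighbors if moved else [name]
        for n in successors:
            # Tolerate one mismatch so unknown obstacles or sensing noise
            # do not eliminate the true hypothesis.
            agree = sum(a == b for a, b in zip(graph[n].signature, observed_signature))
            if agree >= len(observed_signature) - 1:
                candidates.add(n)
    # If everything was pruned, fall back to all nodes rather than getting lost.
    return candidates or set(graph)
```

The fallback to the full node set when every hypothesis is pruned reflects the paper's motivation: unknown obstacles and landmark-matching failures should degrade the estimate rather than leave the robot lost.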
intelligent robots and systems | 1996
Atsushi Ueno; Koichi Hori; Shinichi Nakasuka
This paper describes a system with which a cognitive agent simultaneously learns an abstraction of its state space and a policy for behavior selection. We call the system the situation transition network system (STNS). The system extracts situations and maintains them dynamically in the continuous state space on the basis of rewards from the environment; in this way, it learns how to abstract in a dynamic environment. At the same time, the system records the results of transitions between situations and constructs a network of situations, which is used for partial planning. At each point in the learning process, the system selects a behavior according to the partial plan. Because planning is performed on a network of abstracted situations, an agent with STNS does not have to deliberate over details when planning. Furthermore, the agent can make a plan even at an early stage of learning because the planning is partial. Owing to this simultaneous learning during task execution, the agent can adapt to the current task. The results of computer simulations are given.
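The partial-planning idea can be sketched as a shortest-path search over the recorded situation transitions; since the abstract does not specify the algorithm, the breadth-first formulation, data structures, and random fallback below are assumptions.

```python
# Rough sketch of partial planning on a learned situation-transition network.
# transitions maps (situation, behavior) -> observed next situation;
# reward maps situation -> scalar reward; both are assumed data structures.

from collections import deque
import random

def plan_next_behavior(current, transitions, reward, behaviors):
    """Return the first behavior of a shortest known path to a rewarding
    situation, or a random behavior when no such path is known yet."""
    frontier = deque([(current, None)])   # (situation, first behavior on the path)
    visited = {current}
    while frontier:
        situation, first = frontier.popleft()
        if reward.get(situation, 0) > 0 and first is not None:
            return first                   # follow the partial plan
        for b in behaviors:
            nxt = transitions.get((situation, b))
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, first if first is not None else b))
    return random.choice(behaviors)        # explore: the network has no plan yet
```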
systems man and cybernetics | 1999
Atsushi Ueno; Hideaki Takeda; Toyoaki Nishida
Real robots should be able to adapt flexibly to various environments. The main problem is how to abstract useful information from the huge amount of information in the environment; this is known as the frame problem. This paper proposes a new architecture which can learn how to perform abstraction while executing the task. We call the architecture the situation transition network system (STNS). With this architecture, a robot can acquire a necessary and sufficient symbol system for the current task and environment. Furthermore, this symbol system is flexible enough to adapt to changes in the environment. STNS performs cognitive learning and behavior learning in parallel while executing the task. In cognitive learning, it extracts situations and maintains them dynamically in the continuous state space on the basis of rewards from the environment; a situation can be regarded as an empirically obtained symbol. In behavior learning, it constructs an MDP (Markov decision problem) model of the environment over the abstracted situation representation. This model is used for behavior planning. The validity of STNS is shown in computer simulations.
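As a rough illustration of planning on an MDP model built over abstracted situations, the sketch below runs value iteration on transition probabilities estimated from experience; the table layout, discount factor, and iteration count are assumptions rather than details from the paper.

```python
# Hedged sketch of planning on an MDP model over abstracted situations.
# P[(s, b)] is a dict {next_situation: probability} estimated from experience;
# R[s] is the reward associated with situation s. Discount and iteration count
# are arbitrary choices for illustration.

def value_iteration(situations, behaviors, P, R, gamma=0.9, iters=100):
    V = {s: 0.0 for s in situations}
    for _ in range(iters):
        # synchronous Bellman backup over the abstracted situations
        V = {s: R.get(s, 0.0) + gamma * max(
                 sum(p * V[s2] for s2, p in P.get((s, b), {}).items())
                 for b in behaviors)
             for s in situations}
    # greedy policy with respect to the converged values
    return {s: max(behaviors,
                   key=lambda b: sum(p * V[s2] for s2, p in P.get((s, b), {}).items()))
            for s in situations}
```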
intelligent robots and systems | 1999
Atsushi Ueno; Hideaki Takeda; Toyoaki Nishida
Reinforcement learning is very useful for robots with little a priori knowledge to acquire appropriate behavior. This paper describes a learning system which can learn a state representation and a behavior policy simultaneously while executing the task. We call the system the situation transition network system. In cognitive learning, it extracts situations and maintains them dynamically in the continuous state space on the basis of rewards from the environment. In behavior learning, it builds a Markov decision model of the environment and performs partial planning on the model; this is a kind of reinforcement learning. The results of computer simulations are given.
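The "extract situations on the basis of rewards" step could look roughly like the following: a situation is a prototype plus its observed instances, and it is split when the rewards collected inside it disagree. The thresholds, prototypes, and splitting criterion are illustrative assumptions, not the paper's procedure.

```python
# Illustrative-only sketch of extracting situations in the continuous state
# space on the basis of rewards: a situation is a prototype plus its observed
# (state, reward) instances, and it is split when its rewards disagree.
# The thresholds and the splitting rule are assumptions.

import numpy as np

class Situation:
    def __init__(self, prototype):
        self.prototype = np.asarray(prototype, dtype=float)
        self.instances = []   # (state vector, reward) pairs observed here

def assign(situations, state):
    """Return the situation whose prototype is nearest to the observed state."""
    return min(situations, key=lambda s: np.linalg.norm(s.prototype - state))

def maybe_split(situations, situation, disagreement=0.5, min_instances=10):
    """Split a situation whose instances received inconsistent rewards."""
    rewards = np.array([r for _, r in situation.instances])
    if len(rewards) >= min_instances and rewards.std() > disagreement:
        high = [x for x, r in situation.instances if r > rewards.mean()]
        low = [x for x, r in situation.instances if r <= rewards.mean()]
        if high and low:
            situations.remove(situation)
            situations.append(Situation(np.mean(high, axis=0)))
            situations.append(Situation(np.mean(low, axis=0)))
```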
computational intelligence in robotics and automation | 2003
Akihiro Kobayashi; Yasuyuki Kono; Atsushi Ueno; Izuru Kume; Masatsugu Kidode
This paper presents a middleware architecture for personal robots used in various environments. The architecture allows a robot to consistently integrate environment-oriented applications with its original, familiar characteristics for its user. The familiar characteristics and the environment-oriented applications tend to be developed independently; however, the two kinds of functions should share sensors and actuators to generate consistent actions. To this end, we have analyzed the relationship between robot actions and their mental effects on the user, and have designed middleware, called the mediator, which arbitrates between the two kinds of functions by dynamically selecting 1) sequential execution, 2) time-sharing execution, or 3) concurrent execution. We conducted an experiment to simulate the mediation and to evaluate its familiarity and efficiency.
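A minimal sketch of the mediation rule, with hypothetical action descriptors, might look like this; the real middleware's decision criteria are described only at the architectural level, so the conflict test and urgency flag below are assumptions.

```python
# Minimal sketch of the mediation rule with hypothetical action descriptors.
# Each action declares the actuators it needs and whether it is urgent; the
# real mediator's criteria are not published in this form.

def mediate(familiar_action, task_action):
    """Choose how to combine a familiarity-oriented action and an
    environment-oriented application action that may share actuators."""
    shared = familiar_action["actuators"] & task_action["actuators"]
    if not shared:
        return "concurrent"     # 3) no resource conflict: run both at once
    if task_action.get("urgent"):
        return "sequential"     # 1) finish the urgent task action first
    return "time-sharing"       # 2) interleave so the robot stays familiar and useful

# Example with hypothetical actions: a head gesture vs. a navigation task.
mode = mediate({"actuators": {"head"}, "urgent": False},
               {"actuators": {"head", "base"}, "urgent": False})
```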
New Generation Computing | 2001
Atsushi Ueno; Hideaki Takeda
Real robots should be able to adapt autonomously to various environments in order to keep executing their tasks without breaking down. They achieve this by learning how to abstract only useful information from the huge amount of information in the environment while executing their tasks. This paper proposes a new architecture which performs categorical learning and behavioral learning in parallel with task execution. We call the architecture the Situation Transition Network System (STNS). In categorical learning, it builds a flexible state representation and modifies it according to the results of behaviors. Behavioral learning is reinforcement learning on that state representation. Simulation results have shown that this architecture can learn efficiently and adapt autonomously to unexpected changes in the environment.
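The abstract does not specify the update rule used for behavioral learning, so the sketch below shows a plain tabular Q-learning update over abstracted situation indices as one standard instance of reinforcement learning on a learned state representation; the learning rate, discount, and exploration rate are assumptions.

```python
# One standard instance of reinforcement learning on a learned state
# representation: tabular Q-learning over situation indices. The update rule
# and its parameters are assumptions, not necessarily STNS's own method.

import random

def q_update(Q, s, b, r, s_next, behaviors, alpha=0.1, gamma=0.9):
    """Update Q[(situation, behavior)] from one observed transition."""
    best_next = max(Q.get((s_next, b2), 0.0) for b2 in behaviors)
    Q[(s, b)] = Q.get((s, b), 0.0) + alpha * (r + gamma * best_next - Q.get((s, b), 0.0))

def choose_behavior(Q, s, behaviors, epsilon=0.1):
    """Epsilon-greedy selection on the abstracted situation."""
    if random.random() < epsilon:
        return random.choice(behaviors)
    return max(behaviors, key=lambda b: Q.get((s, b), 0.0))
```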
robot soccer world cup | 2000
Kazunori Terada; Kouji Mochizuki; Atsushi Ueno; Hideaki Takeda; Toyoaki Nishida; Takayuki Nakamura; Akihiro Ebina; Hiromitsu Fujiwara
In recent years, many researchers in AI and robotics have paid attention to RoboCup, because robotic soccer requires various techniques from AI and robotics, such as navigation, behavior generation, localization, and environment recognition. Localization is one of the important issues in RoboCup. In this paper, we propose a method of robot localization that integrates vision with a model of the environment. The environment model, which reproduces the robotic soccer field in the computer, can produce an image of the robot's view at any location. Using the environment model, the system can search for the location whose view image is most similar to the view image captured by the real robot. Our robot first estimates its location from the goal's height and aspect ratio in the camera image, and then searches for the most suitable position with a hill-climbing algorithm starting from the estimated location. We implemented this method and tested its validity. The error range is reduced from 1 m-50 cm for the robot's initial estimation to 40 cm-20 cm with this method. The method is superior to other methods based on dead reckoning or a range sensor with a map because its precision does not depend on the field size and it does not need walls as landmarks.
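The two-stage localization (a coarse estimate from the goal's appearance, refined by hill climbing against the environment model) can be sketched as follows; `render_view` and `similarity` stand in for the paper's environment model and image comparison and are assumed, as are the step size and pose parameterization.

```python
# Sketch of the two-stage localization: start from a pose estimated from the
# goal's apparent height and aspect ratio, then hill-climb to the pose whose
# rendered view best matches the camera image. render_view and similarity
# stand in for the environment model and image comparison and are assumed.

def hill_climb_localize(start_pose, camera_image, render_view, similarity, step=0.05):
    """Greedy local search over (x, y, theta) maximizing view similarity."""
    pose = start_pose
    best = similarity(render_view(pose), camera_image)
    improved = True
    while improved:
        improved = False
        for dx, dy, dth in [(step, 0, 0), (-step, 0, 0), (0, step, 0),
                            (0, -step, 0), (0, 0, step), (0, 0, -step)]:
            candidate = (pose[0] + dx, pose[1] + dy, pose[2] + dth)
            score = similarity(render_view(candidate), camera_image)
            if score > best:
                pose, best, improved = candidate, score, True
    return pose
```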
systems man and cybernetics | 2001
Atsushi Ueno; H. Soeda; I. Takeda; M. Kidode
In this paper, we are concerned with the problem of how a physical robot can acquire an internal representation appropriate to its task and environment. Learning from experience is effective for this problem, but learning a representation from scratch in a real environment is very time-consuming. On the other hand, a representation learned only in a simulated environment risks failing to serve its purpose in a real environment because of uncertainty in the sensors, the actuators, and the environment. In order to have the best of both worlds, it is effective to transplant the learned state representation of a virtual agent to a physical robot. For this purpose, we improved our previously developed incremental learning architecture for use in the real environment and developed a new architecture called STNS-R. In this architecture, inappropriate negative instances caused by uncertainties are detected on the basis of the distribution of instances and removed in order to correct the distorted shapes of the states. The effectiveness of STNS-R is shown in the experimental results.
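One plausible reading of "removing inappropriate negative instances on the basis of the distribution of instances" is a nearest-neighbor test like the sketch below: a negative instance that sits much closer to positive instances than to other negatives is treated as noise and dropped. The neighborhood size and the comparison rule are assumptions, not STNS-R's actual criterion.

```python
# Illustrative sketch of removing inappropriate negative instances: a negative
# instance much closer to positive instances than to other negatives is treated
# as noise caused by real-world uncertainty and dropped. The neighborhood size
# and the comparison rule are assumptions.

import numpy as np

def prune_negatives(positives, negatives, k=5):
    """positives, negatives: lists of state vectors. Returns filtered negatives."""
    pos = np.asarray(positives, dtype=float)
    neg = np.asarray(negatives, dtype=float)
    kept = []
    for i, x in enumerate(neg):
        d_pos = np.sort(np.linalg.norm(pos - x, axis=1))[:k].mean()
        others = np.delete(neg, i, axis=0)
        if len(others) == 0:
            kept.append(x)                 # nothing to compare against: keep it
            continue
        d_neg = np.sort(np.linalg.norm(others - x, axis=1))[:k].mean()
        if d_pos >= d_neg:                 # it sits among other negatives: keep it
            kept.append(x)
    return kept
```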
intelligent robots and systems | 2000
Hideaki Takeda; Atsushi Ueno; Motoki Saji; Tsuyoshi Nakano; Kei Miyamato
We discuss the roles of robots as autonomous knowledge media and show a prototype office work assistant robot based on this approach. We are surrounded by a huge amount of artifacts and information that is difficult for humans to deal with. Robots can help people by gathering and arranging information intelligently on their behalf. Our prototype system, called Kappa III, is an office work assistant robot that can tell people the location of daily goods in the office. It first looks around and captures images of desks in order to remember what goods there are and where they are. It can then identify them by cutting them out from the background and categorizing them by color, shape, and figure. People can ask it to find goods in the office by specifying either names or features such as color. We also realized three-dimensional discovery of objects by comparing captured scenes with expected ones.
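A minimal sketch of the kind of color-based categorization used to index desk objects is shown below, assuming OpenCV-style BGR image arrays; the actual Kappa III pipeline (segmentation, shape and figure features, three-dimensional discovery) is richer than this.

```python
# Minimal sketch, assuming OpenCV-style BGR uint8 image segments, of the kind
# of color-based categorization used to index desk objects; names and the
# quantization scheme are illustrative.

import numpy as np

def dominant_color(segment, bins=8):
    """Return a coarse (b, g, r) bin describing an object segment (HxWx3 uint8)."""
    pixels = segment.reshape(-1, 3) // (256 // bins)   # quantize each channel
    codes, counts = np.unique(pixels, axis=0, return_counts=True)
    return tuple(codes[counts.argmax()])               # most frequent color bin

def find_by_color(inventory, requested_bin):
    """inventory: {object_name: color_bin}. Answer a query such as 'the red one'."""
    return [name for name, c in inventory.items() if c == requested_bin]
```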
Systems and Computers in Japan | 2004
Yoshihide Yamashiro; Atsushi Ueno; Hideaki Takeda