
Publication


Featured research published by Yoshinori Kuno.


IEEE Robotics & Automation Magazine | 2003

Look where you're going [robotic wheelchair]

Yoshinori Kuno; Nobutaka Shimada; Yoshiaki Shirai

We propose a robotic wheelchair that observes both the user and the environment. It can infer the user's intentions from his/her behavior together with environmental information. It also observes the user when he/she is off the wheelchair, recognizing commands indicated by hand gestures. Experimental results show our approach to be promising. Although the current system uses face direction, for people who find it difficult to move their faces it can be modified to use movements of the mouth, eyes, or any other body part they can move. Since such movements are generally noisy, integrating observations of the user and the environment is effective in inferring the user's real intentions and should be a useful technique for better human interfaces.
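The integration idea can be sketched in a few lines: a face-direction reading is accepted as a command only when it agrees with what the environment sensors report as traversable, which suppresses unintentional glances. The following Python sketch is illustrative only; the tolerance threshold, the sensor interface, and the function names are assumptions, not the paper's implementation.

```python
# Minimal sketch of intention integration: a face-direction command is
# accepted only when it is consistent with the free space reported by the
# environment sensors. All thresholds and interfaces here are assumptions.

def integrate_intention(face_yaw_deg, free_directions_deg, tolerance_deg=15.0):
    """Map an observed face direction to a steering command, suppressing
    readings that point into obstacles (likely unintentional glances)."""
    # Find the traversable direction closest to where the user is looking.
    best = min(free_directions_deg, key=lambda d: abs(d - face_yaw_deg))
    if abs(best - face_yaw_deg) <= tolerance_deg:
        return best   # gaze agrees with a free corridor: steer there
    return None       # ambiguous or blocked: keep the current heading

# Example: user looks 20 degrees right; range sensors report free space at
# -30, 0, and 25 degrees, so the chair steers toward 25 degrees.
print(integrate_intention(20.0, [-30.0, 0.0, 25.0]))  # 25.0
```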


IEEE International Conference on Automatic Face and Gesture Recognition | 1998

Hand gesture estimation and model refinement using monocular camera-ambiguity limitation by inequality constraints

Nobutaka Shimada; Yoshiaki Shirai; Yoshinori Kuno; Jun Miura

The paper proposes a method to precisely estimate the pose (joint angles) of a moving human hand and to refine the 3D shape (widths and lengths) of the given hand model from a monocular image sequence containing no depth data. First, given an initial roughly shaped 3D model, possible pose candidates are generated in a search space efficiently reduced using silhouette features and motion prediction. Then, by selecting the candidates with high posterior probabilities, rough poses are obtained and the feature correspondence is resolved even under quick motion and self-occlusion. Next, to refine both the 3D shape model and the rough pose under the depth ambiguity of monocular images, the paper proposes an ambiguity-limitation method based on loose constraint knowledge of the object represented as inequalities. The method calculates the probability distribution satisfying both the observations and the constraints. When multiple solutions are possible, they are preserved until a unique solution is determined. Experimental results show that the depth ambiguity is incrementally reduced as informative observations are obtained.
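The ambiguity-limitation step can be pictured as pruning a set of weighted pose hypotheses by inequality constraints and renormalizing the surviving posterior, with multiple survivors kept until a unique solution emerges. The sketch below is a hypothetical rendering of that idea; the particular constraint (a segment's width must be less than its length) is an invented stand-in for the paper's shape knowledge.

```python
import numpy as np

# Hypothetical illustration: represent the ambiguous estimate as weighted
# hypotheses and prune those violating loose inequality constraints on the
# hand model; survivors are kept until a unique solution is determined.

def prune_by_inequalities(hypotheses, weights, constraints):
    """Keep only hypotheses satisfying every inequality constraint,
    renormalizing the posterior weights over the survivors."""
    keep = [all(c(h) for c in constraints) for h in hypotheses]
    survivors = [h for h, k in zip(hypotheses, keep) if k]
    w = np.array([w_ for w_, k in zip(weights, keep) if k], dtype=float)
    return survivors, w / w.sum()

# Each hypothesis: (length, width) of one finger segment, in millimetres.
hypotheses = [(45.0, 12.0), (30.0, 40.0), (50.0, 15.0)]
weights = [0.5, 0.3, 0.2]
constraints = [lambda h: h[1] < h[0]]   # width strictly less than length
survivors, w = prune_by_inequalities(hypotheses, weights, constraints)
print(survivors, w)  # two hypotheses survive and are both preserved
```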


International Conference on Computer Vision | 1993

Robust structure from motion using motion parallax

Roberto Cipolla; Yasukazu Okamoto; Yoshinori Kuno

An efficient and geometrically intuitive algorithm for reliably interpreting the image velocities of moving objects in 3-D is presented. It is well known that under a weak perspective the image motion of points on a plane can be characterized by an affine transformation. It is shown that the relative image motion of a nearby non-coplanar point and its projection on the plane is equivalent to motion parallax, and because it is independent of view rotations it is a reliable geometric cue to 3-D shape and viewer/object motion. The authors summarize why structure from motion algorithms are often very sensitive to errors in the measured image velocities and then show how to efficiently and reliably extract an incomplete qualitative solution. They also show how to augment this into a complete solution if additional constraints or views are available. A real-time example is presented in which the 3-D visual interpretation of hand gestures or a hand-held object is used as part of a man-machine interface. This is an alternative to the Polhemus coil instrumented Dataglove commonly used in sensing manual gestures.
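The parallax cue can be made concrete with a small numerical sketch (not the authors' code): under weak perspective the image velocities of coplanar points follow an affine field v = A p + t, so fitting that field to tracked plane points and subtracting its prediction at a nearby non-coplanar point leaves the parallax vector, which is independent of view rotation. The synthetic numbers below are for illustration only.

```python
import numpy as np

# Sketch of the motion-parallax cue: fit the affine flow of a plane from
# three or more coplanar points, then take the residual image motion of a
# nearby non-coplanar point as its parallax vector.

def fit_affine_flow(pts, vels):
    """Least-squares fit of v = A @ p + t from >= 3 coplanar points."""
    X = np.hstack([pts, np.ones((len(pts), 1))])    # rows [x, y, 1]
    sol, *_ = np.linalg.lstsq(X, vels, rcond=None)  # stacked [A | t]
    return sol                                      # shape (3, 2)

def parallax(sol, p, v):
    """Residual motion of a non-coplanar point w.r.t. the plane's flow."""
    return v - np.array([p[0], p[1], 1.0]) @ sol

plane_pts = np.array([[0., 0.], [1., 0.], [0., 1.]])
plane_vels = np.array([[1., 0.], [1.2, 0.1], [1., 0.3]])  # synthetic flow
sol = fit_affine_flow(plane_pts, plane_vels)
# Parallax of a point at (0.5, 0.5) moving at (1.6, 0.4): (0.5, 0.2).
print(parallax(sol, np.array([0.5, 0.5]), np.array([1.6, 0.4])))
```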


Human Factors in Computing Systems | 2008

Precision timing in human-robot interaction: coordination of head movement and utterance

Akiko Yamazaki; Keiichi Yamazaki; Yoshinori Kuno; Matthew Burdelski; Michie Kawashima; Hideaki Kuzuoka

As research over the last several decades has shown that non-verbal actions such as face and head movement play a crucial role in human interaction, such resources are also likely to play an important role in human-robot interaction. In developing a robotic system that employs embodied resources such as face and head movement, we cannot simply program the robot to move at random; rather, we need to consider how these actions may be timed to specific points in the talk. This paper discusses our work in developing a museum guide robot that moves its head at interactionally significant points during its explanation of an exhibit. We first examined the coordination of verbal and non-verbal actions in human guide-visitor interaction. Based on this analysis, we developed a robot that moves its head at interactionally significant points in its talk. We then conducted several experiments to examine human participants' non-verbal responses to the robot's head and gaze turns. Our results show that participants are more likely to display non-verbal actions, and to do so with precision timing, when the robot turns its head and gaze at interactionally significant points than when it does so at other points. Based on these findings, we propose several suggestions for the design of a guide robot.
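One way to picture the timing mechanism is as a scheduler that aligns head turns with annotated points in the robot's utterance. The sketch below is purely illustrative: the `<turn>` annotation scheme and the constant speech rate standing in for real text-to-speech timing are both assumptions.

```python
# Illustrative scheduler (not the authors' system): the explanation text is
# annotated with <turn> marks at interactionally significant points, and a
# head turn is scheduled to coincide with the word that follows each mark.

WORDS_PER_SECOND = 2.5  # assumed speech rate for this sketch

def schedule_head_turns(annotated_text):
    """Return the utterance times (in seconds) at which to turn the head."""
    turns, t = [], 0.0
    for token in annotated_text.split():
        if token == "<turn>":
            turns.append(t)   # turn exactly when the next word starts
            continue
        t += 1.0 / WORDS_PER_SECOND
    return turns

utterance = "This painting <turn> was made by Gauguin in Tahiti <turn> in 1892"
print(schedule_head_turns(utterance))  # [0.8, 3.2]
```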


International Conference on Pattern Recognition | 1996

Object tracking in cluttered background based on optical flow and edges

Yasushi Mae; Yoshiaki Shirai; Jun Miura; Yoshinori Kuno

This paper describes a method for determining the contours of moving objects in a cluttered scene by integrating optical flow and edges. If the motion of an object is similar to that of the background, the contour cannot be determined from optical flow alone. If the background is cluttered, the contour cannot be determined from edges alone, because many edges may be extracted in the background while no edges may be extracted on some parts of the contour. In the proposed method, the contour is determined using optical flow and edges over a long sequence: the whole contour of a moving object is eventually obtained by accumulating edges near motion boundaries over the image sequence. The method can also determine the occlusion relation of two overlapping objects by checking whether edges exist on the predicted contours of the objects. Experimental results on synthetic and real images show the usefulness of the method.
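A rough modern rendering of the integration idea, using OpenCV primitives that postdate the paper: accumulate Canny edges over the sequence, but only where the optical-flow field changes sharply, as a proxy for motion boundaries. The thresholds and Farneback parameters below are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

# Sketch of integrating optical flow and edges: edges are accumulated over
# a frame sequence, but only where they lie near motion boundaries (large
# spatial change in flow magnitude).

def accumulate_motion_edges(frames, flow_thresh=1.0):
    """Accumulate Canny edges coinciding with strong optical-flow magnitude
    changes, approximating edges on motion boundaries."""
    acc = np.zeros(frames[0].shape[:2], dtype=np.float32)
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        # Motion boundary proxy: morphological gradient of flow magnitude.
        boundary = cv2.morphologyEx(mag, cv2.MORPH_GRADIENT,
                                    np.ones((3, 3), np.uint8)) > flow_thresh
        edges = cv2.Canny(gray, 50, 150) > 0
        acc += (edges & boundary).astype(np.float32)
        prev = gray
    return acc  # high values: edges repeatedly found near motion boundaries
```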


Heart | 1983

Three dimensional reconstruction of the left ventricle from multiple cross sectional echocardiograms. Value for measuring left ventricular volume.

Hitoshi Sawada; Junichi Fujii; Kazuzo Kato; Morio Onoe; Yoshinori Kuno

The accuracy of a system for reconstructing a three dimensional image of the left ventricle from randomly recorded multiple short axis images was tested by comparing the calculated left ventricular volume with the directly measured volume in 11 excised porcine hearts. The system comprised a real time phased array sector scanner, a transducer locating system, and a computer system for digitising outlines of the left ventricle, displaying the reconstructed image, and calculating the left ventricular volume. The reconstructed image was similar to the real image, and the calculated left ventricular volume correlated highly with the directly measured volume. The method was accurate in vitro and is expected to be applicable to clinical measurement of left ventricular volume.
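Once the outlines are registered in 3D, the volume computation reduces to a Simpson-style slice summation. A minimal sketch follows, assuming parallel, equally spaced slices; the actual system registers freely positioned scan planes via the transducer locating system, which this sketch omits.

```python
import numpy as np

# Simpson-style volume from stacked short-axis outlines: sum the area of
# each traced outline times the slice thickness. Assumes parallel slices.

def polygon_area(xy):
    """Shoelace formula for the area of one traced endocardial outline."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def lv_volume(outlines, slice_thickness_cm):
    """Sum cross-sectional areas times slice thickness (ml if cm are used)."""
    return sum(polygon_area(o) for o in outlines) * slice_thickness_cm

# Example: three circular slices of radius 2 cm, 1 cm apart -> about 37.7 ml.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
disc = np.stack([2 * np.cos(theta), 2 * np.sin(theta)], axis=1)
print(lv_volume([disc, disc, disc], 1.0))
```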


Intelligent Robots and Systems | 1999

Robotic wheelchair based on observations of both user and environment

Satoru Nakanishi; Yoshinori Kuno; Nobutaka Shimada; Yoshiaki Shirai

With the increase in the number of senior citizens, there is a growing demand for human-friendly wheelchairs as mobility aids. To meet this need, we previously proposed a robotic wheelchair that can be controlled by turning the face in the direction the user would like to go. Although it can be used easily, unintentional movements of the face may interfere with the wheelchair's motion. The paper presents a new version of the wheelchair, improved by observing both the user and the environment. It effectively integrates autonomous capabilities with the face-direction interface: the sensor information obtained for autonomous navigation is used to solve the problem with control by face direction. Also, when the wheelchair can infer the user's intentions from observing the face, it chooses an appropriate autonomous navigation function to reduce the user's burden of operation.


Intelligent Robots and Systems | 1998

Intelligent wheelchair using visual information on human faces

Yoshihisa Adachi; Yoshinori Kuno; Nobutaka Shimada; Yoshiaki Shirai

With the increase in the number of senior citizens, there is a growing demand for human-friendly wheelchairs as mobility aids. The paper proposes the concept of an intelligent wheelchair to meet this need. It can infer human intentions by observing the user's nonverbal behavior and can move in accordance with the user's wishes with minimal operation. The paper also describes our experimental robotic wheelchair system. Human intentions appear most clearly on the face, so the experimental system observes the human face and computes its direction. As a first step toward the intelligent wheelchair, we have conducted experiments on controlling the system's motion by face direction. Experimental results show our approach to be promising.
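As a toy illustration of face-direction control (not the system's actual vision pipeline), one can derive a coarse yaw from the nose position relative to the eye midpoint and map it to a steering command. The landmark coordinates and dead-zone value below are made-up assumptions.

```python
# Toy sketch of face-direction control: a coarse yaw proxy from facial
# landmarks is mapped to a discrete steering command with a dead zone to
# absorb small, unintentional head movements.

def yaw_from_landmarks(left_eye_x, right_eye_x, nose_x):
    """Coarse yaw proxy in [-1, 1]: 0 when the nose sits midway between
    the eyes, positive when the face is turned toward the right eye."""
    mid = 0.5 * (left_eye_x + right_eye_x)
    half_span = 0.5 * (right_eye_x - left_eye_x)
    return (nose_x - mid) / half_span

def steering_command(yaw, dead_zone=0.2):
    if yaw > dead_zone:
        return "turn_right"
    if yaw < -dead_zone:
        return "turn_left"
    return "go_straight"

print(steering_command(yaw_from_landmarks(100, 140, 125)))  # turn_right
```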


Human Factors in Computing Systems | 2009

Revealing Gauguin: engaging visitors in robot guide's explanation in an art museum

Keiichi Yamazaki; Akiko Yamazaki; Mai Okada; Yoshinori Kuno; Yoshinori Kobayashi; Yosuke Hoshi; Karola Pitsch; Paul Luff; Dirk vom Lehn; Christian Heath

Designing technologies that support the explanation of museum exhibits is a challenging domain. In this paper we develop an innovative approach: providing a robot guide with resources to engage visitors in an interaction about an art exhibit. We draw upon ethnographic fieldwork in an art museum, focusing on how tour guides interrelate talk and visual conduct, specifically how they ask questions of different kinds to engage and involve visitors in lengthy explanations of an exhibit. From this analysis we developed a robot guide that can coordinate its utterances and body movement and monitor visitors' responses to them. Detailed analysis of the interaction between the robot and visitors in an art museum suggests that such simple devices, derived from the study of human interaction, can be useful in engaging visitors in explanations of complex artifacts.


Intelligent Robots and Systems | 2009

Robotic wheelchair based on observations of people using integrated sensors

Yoshinori Kobayashi; Yuki Kinpara; Tomoo Shibusawa; Yoshinori Kuno

Recently, several robotic/intelligent wheelchairs with user-friendly interfaces or autonomous functions have been proposed. Although it is often desirable for users to operate wheelchairs on their own, they are frequently accompanied by a caregiver or companion, so it is also important to reduce the caregiver's load. In this paper we propose a robotic wheelchair that can move alongside a caregiver, side by side. In contrast to a front-behind arrangement, a side-by-side position makes it more difficult for the wheelchair to adjust when the caregiver makes a turn. To cope with this problem we present a visual-laser tracking technique in which a laser range sensor and an omni-directional camera are integrated to observe the caregiver. A Rao-Blackwellized particle filter framework is employed to track the caregiver's position and the orientation of both body and head, based on the distance data and panoramic images captured by the laser range sensor and the omni-directional camera. After presenting this technique, we introduce an application of the wheelchair for museum visits.
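A bare-bones particle filter over the caregiver's planar position conveys the flavor of the tracker. The paper's Rao-Blackwellized filter additionally maintains distributions over body and head orientation using the panoramic images; the motion and measurement models below are invented for the sketch.

```python
import numpy as np

# Bare-bones particle filter over (x, y), a stand-in for the paper's
# Rao-Blackwellized filter. Motion and measurement models are assumptions.

rng = np.random.default_rng(0)

def predict(particles, step_std=0.05):
    """Random-walk motion model on (x, y)."""
    return particles + rng.normal(0.0, step_std, particles.shape)

def update(particles, weights, laser_xy, meas_std=0.1):
    """Reweight by a Gaussian likelihood of the laser's person detection."""
    d2 = np.sum((particles - laser_xy) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    return weights / weights.sum()

def resample(particles, weights):
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.normal([1.0, 0.0], 0.3, (500, 2))  # prior around (1, 0) m
weights = np.full(500, 1.0 / 500)
for laser_xy in [np.array([1.0, 0.1]), np.array([1.1, 0.2])]:
    particles = predict(particles)
    weights = update(particles, weights, laser_xy)
    particles, weights = resample(particles, weights)
print(particles.mean(axis=0))  # posterior mean tracks the detections
```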

Collaboration


Dive into Yoshinori Kuno's collaborations.

Top Co-Authors

Akiko Yamazaki

Future University Hakodate


Mohammed Moshiul Hoque

Chittagong University of Engineering
