Shigemi Aoyagi
Nippon Telegraph and Telephone
Publication
Featured research published by Shigemi Aoyagi.
conference on computer supported cooperative work | 2008
Naomi Yamashita; Keiji Hirata; Shigemi Aoyagi; Hideaki Kuzuoka; Yasunori Harada
In this study, we examine how changes in seating position across sites affect video-mediated communication. We experimentally investigated the effects of seating position on four-person group conversations in which two participants were co-located at each of two sites, comparing two arrangements: distant parties seated across from each other vs. distant parties seated side-by-side. In the side-by-side arrangement, we found that speaker switches were more evenly distributed between distance-separated participants and co-located participants at points where there was no verbal indication of the next speaker. Participants also shared a higher sense of unity and reached a slightly better group solution. These findings demonstrate the importance of providing people with various seating arrangements across distant sites to facilitate different group activities.
global communications conference | 2008
Keiji Hirata; Yasunori Harada; Toshihiro Takada; Shigemi Aoyagi; Yoshinari Shirai; Naomi Yamashita; Katsuhiko Kaji; Junji Yamato; Kenji Nakazawa
In this paper, we present t-Room, the next-generation video communication system we are developing. Our approach is to build rooms with identical layouts, including walls of display panels on which users and physical or virtual objects are all shown at life size. In this way, the user space enclosed by t-Room's surrounding displays can be shared as a common space with any other site; in other words, the enclosed spaces overlap each other. This configuration effectively provides symmetric reproduction of the audio-visual information surrounding local and remote users and objects. The feeling provided by t-Room differs from that of conventional videoconferencing systems, since no spatial barrier, such as a video screen, separates the users. Furthermore, t-Room benefits in every way from Next Generation Network (NGN) technology: QoS, service productivity, and security. We view t-Room as a future form of telephone service.
Proceedings of the 8th European Workshop on Modelling Autonomous Agents in a Multi-Agent World: Multi-Agent Rationality | 1997
Satoshi Kurihara; Shigemi Aoyagi; Rikio Onai
This paper proposes and evaluates a methodology for multi-agent real-time reactive planning. In addition to the features of conventional real-time reactive planning, which can react in a dynamic environment, our planning can perform deliberate planning when, for example, the robot has enough time to plan its next action. The proposed planning features three kinds of agents: behavior agents that control simple behaviors, planning agents that make plans to achieve their goals, and behavior-selection agents that intermediate between behavior agents and planning agents. They coordinate a plan in an emergent way for the planning system as a whole. We confirmed the effectiveness of our planning by means of a simulation. Furthermore, we implemented an active-vision system, the first stage of building a real-world agent, and used it to verify the real-world effectiveness of our planning.
Robotics and Autonomous Systems | 1998
Satoshi Kurihara; Shigemi Aoyagi; Rikio Onai; Toshiharu Sugawara
This paper proposes and evaluates a new real-time reactive planning approach for a dynamic environment. In addition to having the features of conventional real-time reactive planning, which can react in a dynamic environment, our planning can perform deliberate planning appropriately. The proposed planning uses three kinds of agents: behavior agents that control simple behavior, planning agents that make plans to achieve their own goals, and behavior-selection agents that intermediate between behavior agents and planning agents. They coordinate a plan in an emergent way for the planning system as a whole. We confirmed the effectiveness of our planning by means of a simulation. Furthermore, we implemented an active-vision system and used it to verify the real-world efficiency of our planning.
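The three-agent decomposition described above can be illustrated with a minimal sketch. All class and method names here are our own illustration of the idea, not the paper's implementation: reactive behavior agents propose actions when triggered, a planning agent deliberates, and a behavior-selection agent arbitrates between them.

```python
# Sketch of the three-agent architecture: behavior agents react
# immediately, a planning agent deliberates, and a behavior-selection
# agent intermediates between the two.

class BehaviorAgent:
    """Controls one simple reactive behavior, e.g. obstacle avoidance."""
    def __init__(self, name, trigger, action):
        self.name, self.trigger, self.action = name, trigger, action

    def propose(self, state):
        # Propose an action only when the trigger condition holds.
        return self.action if self.trigger(state) else None

class PlanningAgent:
    """Deliberates toward a goal when time allows."""
    def plan(self, state, goal):
        # Placeholder deliberation: head one step toward the goal.
        return "move_toward_goal" if state != goal else "idle"

class BehaviorSelectionAgent:
    """Urgent reactive proposals win; otherwise follow the deliberate plan."""
    def select(self, state, goal, behaviors, planner):
        for b in behaviors:
            proposal = b.propose(state)
            if proposal is not None:
                return proposal            # react first in emergencies
        return planner.plan(state, goal)   # deliberate when it is safe

behaviors = [BehaviorAgent("avoid", lambda s: s == "obstacle", "swerve")]
selector = BehaviorSelectionAgent()
print(selector.select("obstacle", "goal", behaviors, PlanningAgent()))  # swerve
print(selector.select("open", "goal", behaviors, PlanningAgent()))      # move_toward_goal
```

The key design point is that the selection agent never blocks on the planner: a triggered behavior preempts deliberation, which is what lets the system stay reactive in a dynamic environment.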
pacific rim conference on communications, computers and signal processing | 1999
Koji Sato; Toshihiro Takada; Shigemi Aoyagi; Toshio Hirotsu; Toshiharu Sugawara
This paper presents a novel framework for seamlessly integrating continuous media, such as audio and video, with the World Wide Web (WWW). Continuous Media with the Web (Cmew) enhances the interactivity of continuous media by associating hyperlinks with spatial-temporal parts of the media. The scenario control architecture in Cmew provides flexible and dynamic control over continuous media in multimedia documents. The Cmew media player has been implemented as a Java applet, which enables its use in the current WWW environment.
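The core idea of associating hyperlinks with spatial-temporal parts of a medium can be sketched as a hit test over time-bounded regions. The data layout and helper below are our illustration, not Cmew's actual data model:

```python
# Sketch of a spatial-temporal hyperlink: a URL attached to a
# rectangular region of the frame over a time interval.
from dataclasses import dataclass

@dataclass
class SpatioTemporalLink:
    url: str
    t_start: float   # seconds into the media
    t_end: float
    x: int           # top-left corner of the clickable rectangle
    y: int
    w: int
    h: int

    def contains(self, t, cx, cy):
        return (self.t_start <= t <= self.t_end
                and self.x <= cx < self.x + self.w
                and self.y <= cy < self.y + self.h)

def hit_test(links, t, cx, cy):
    """Return the URL under a click at time t and position (cx, cy), if any."""
    for link in links:
        if link.contains(t, cx, cy):
            return link.url
    return None

links = [SpatioTemporalLink("http://example.com/item", 10.0, 15.0, 100, 50, 80, 40)]
print(hit_test(links, 12.0, 120, 60))  # http://example.com/item
print(hit_test(links, 20.0, 120, 60))  # None (outside the time interval)
```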
international conference on multimedia computing and systems | 1996
Mitsukazu Washisaka; Toshihiro Takada; Shigemi Aoyagi; Rikio Onai
This paper describes a system for two-way linkage between video and its transcribed text that supports observational data analysis. One distinct feature of the system is that a specific area in the video image is linked to the text; the luminance (Y) signal of the video image is used to assist in establishing links from video to text. Another feature is that concepts of a search term can be extracted and compared with the text data on a conceptual level.
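The two-way linkage idea can be sketched as a bidirectional index between timecodes and transcript lines. The data layout below is our illustration of the idea, not the system's internal representation:

```python
# Sketch of two-way video/transcript linkage: each transcript line
# carries a time interval, enabling lookup in both directions.
import bisect

transcript = [  # (start_sec, end_sec, text)
    (0.0,  4.5, "Subject enters the room."),
    (4.5,  9.0, "Subject picks up the object."),
    (9.0, 14.0, "Subject speaks to the observer."),
]
starts = [line[0] for line in transcript]

def text_at(t):
    """Video -> text: transcript line covering playback time t."""
    i = bisect.bisect_right(starts, t) - 1
    if i >= 0 and transcript[i][0] <= t < transcript[i][1]:
        return transcript[i][2]
    return None

def time_of(query):
    """Text -> video: start time of the first line containing query."""
    for start, _end, text in transcript:
        if query in text:
            return start
    return None

print(text_at(6.0))         # Subject picks up the object.
print(time_of("observer"))  # 9.0
```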
adaptive agents and multi-agents systems | 2004
Toshiharu Sugawara; Satoshi Kurihara; Kensuke Fukuda; Toshio Hirotsu; Shigemi Aoyagi; Toshihiro Takada
Recently, we proposed an intelligent ubiquitous computing (ubicomp) environment where sensors and/or their stations/servers have CPUs to cooperatively learn generalized series of sensed events that are involved in human activities. This can be regarded as a multi-agent application. Because ubicomp applications target support for daily-life activities, one of their characteristics is that the same/similar series of events occurs frequently. Multi-agent plans in applications of this type are used to foresee human activities and generate programs to assist them. Therefore, the same planning processes for conflict detection and resolution recur. This paper proposes a learning method in which past plans are exploited for problem solving in an environment where the same/similar problems appear repeatedly. We discuss how the plan is stored and reused using as an example the exploration of conflict-free routes in a room and then describe experimental results.
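The plan-reuse idea can be sketched as caching solved problems under a canonical signature, so a recurring conflict is resolved by lookup instead of replanning. The signature scheme and solver interface below are our simplification, not the paper's method:

```python
# Sketch of reusing past plans: cache solved routing problems keyed by
# a problem signature, so recurring problems skip conflict resolution.

plan_cache = {}

def signature(starts, goals, obstacles):
    # Canonical, hashable description of the planning problem.
    return (tuple(sorted(starts)), tuple(sorted(goals)),
            frozenset(obstacles))

def plan_routes(starts, goals, obstacles, solver):
    sig = signature(starts, goals, obstacles)
    if sig in plan_cache:
        return plan_cache[sig]                   # reuse the stored plan
    plan = solver(starts, goals, obstacles)      # expensive conflict resolution
    plan_cache[sig] = plan
    return plan

calls = []
def dummy_solver(starts, goals, obstacles):
    calls.append(1)  # count how often real planning runs
    return {s: [s, g] for s, g in zip(starts, goals)}

p1 = plan_routes(["a"], ["b"], [], dummy_solver)
p2 = plan_routes(["a"], ["b"], [], dummy_solver)  # served from cache
print(len(calls))  # 1 -- the solver ran only once
```

In a real system the lookup would also accept "similar" problems (e.g. via partial matching of the signature), which is where the learning method described above goes beyond a plain cache.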
global communications conference | 1991
Shigemi Aoyagi; K.-I. Sano; E. Yoneda
Various approaches to realizing economical fiber-to-the-home (FTTH) systems using single-mode fibers are being developed. A strategy is discussed for realizing such systems, especially STM (synchronous transfer mode)-based high-speed digital FTTH systems, which can provide not only N-ISDN services but also high-speed digital services such as center-to-end video services. The first prototype of a single-star FTTH system is also described.
conference on multimedia computing and networking | 2003
Shigemi Aoyagi; Ken'ich Kourai; Koji Sato; Toshihiro Takada; Toshiharu Sugawara; Rikio Onai
In this paper, we propose a new time-reduction method for video skimming in which the focus is on the overall playback time. While fast-forwarding is a natural way to check whether items are of interest, the sound is not synchronized with the images, and the lack of comprehensible audio means the viewer must work from the images alone. Previous work on video summarization has focused solely on video segmentation, i.e., building a structure that represents the parts and flow of meaning in the video. In our system, the user simply specifies the desired running time of the summarized video. We describe the current state of our prototype system and present test results demonstrating its effectiveness.
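Time-budgeted skimming of this kind can be sketched as follows: given per-segment importance scores (the scoring and segmentation are assumed given, and the greedy selection is our simplification rather than the paper's method), keep the most important segments until the user's target playback time is filled.

```python
# Sketch of time-budgeted skimming: pick high-importance segments
# until the target playback time is filled, then restore their order.

def skim(segments, target_seconds):
    """segments: list of (start, duration, score); returns kept segments in playback order."""
    ranked = sorted(segments, key=lambda s: s[2], reverse=True)
    kept, total = [], 0.0
    for seg in ranked:
        if total + seg[1] <= target_seconds:
            kept.append(seg)
            total += seg[1]
    return sorted(kept)  # tuples sort by start time first

segments = [(0, 30, 0.2), (30, 20, 0.9), (50, 40, 0.5), (90, 10, 0.8)]
summary = skim(segments, 60)
print(summary)  # the kept segments fit within the 60-second budget
```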
asia-pacific computer and human interaction | 2004
Toshiharu Sugawara; Satoshi Kurihara; Shigemi Aoyagi; Koji Sato; Toshihiro Takada
This paper discusses a method for identifying clickable objects/regions in still and moving images when they are being captured. A number of methods and languages have recently been proposed for adding point-and-click interactivity to objects in moving pictures as well as still images. When these pictures are displayed in Internet environments or broadcast on digital TV channels, users can follow links specified by URLs (e.g., for buying items online or getting detailed information about a particular item) by clicking on these objects. However, it is not easy to specify clickable areas of objects in a video because their position is liable to change from one frame to the next. To cope with this problem, our method allows content creators to capture moving (and still) images with information related to objects that appear in these images including the coordinates of the clickable areas of these objects in the captured images. This is achieved by capturing the images at various infrared wavelengths simultaneously. This is also applicable to multi-target motion capture.
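The per-frame recovery step can be sketched as thresholding the separately captured channel: pixels belonging to a tagged object are bright in the infrared capture, so thresholding that channel yields the object's clickable bounding box. The threshold value and image representation below are our assumptions:

```python
# Sketch of recovering a clickable region from an infrared capture:
# threshold the IR intensity channel and take the bounding box of the
# pixels that exceed it.

def clickable_bbox(ir_frame, threshold=200):
    """ir_frame: 2D list of IR intensities; returns (x_min, y_min, x_max, y_max) or None."""
    xs, ys = [], []
    for y, row in enumerate(ir_frame):
        for x, v in enumerate(row):
            if v >= threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None  # tagged object not visible in this frame
    return (min(xs), min(ys), max(xs), max(ys))

frame = [
    [0,   0,   0,   0],
    [0, 255, 255,   0],
    [0, 255, 255,   0],
    [0,   0,   0,   0],
]
print(clickable_bbox(frame))  # (1, 1, 2, 2)
```

Because the box is recomputed per captured frame, the clickable area tracks the object as it moves, which is exactly the problem the method addresses.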