
Publication


Featured research published by Mayumi Bono.


international conference on multimodal interfaces | 2013

Context-based conversational hand gesture classification in narrative interaction

Shogo Okada; Mayumi Bono; Katsuya Takanashi; Yasuyuki Sumi; Katsumi Nitta

Communicative hand gestures play important roles in face-to-face conversations. These gestures vary from individual to individual; even when two speakers narrate the same story, they do not always use the same hand gesture (movement, position, and motion trajectory) to describe the same scene. In this paper, we propose a framework for the classification of communicative gestures in small group interactions. We focus on how many times the hands are held during a gesture and how long a speaker continues a hand stroke, instead of observing hand positions and hand motion trajectories. In addition, to model communicative gesture patterns, we use nonverbal features of the participants addressed by the gestures. In this research, we extract features of gesture phases as defined by Kendon (2004), as well as nonverbal patterns that co-occur with gestures, i.e., the utterances, head gestures, and head direction of each participant, using pattern recognition techniques. In the experiments, we collect eight group narrative interaction datasets to evaluate classification performance. The experimental results show that gesture phase features and the nonverbal features of other participants improve performance in discriminating between communicative gestures used in narrative speech and other gestures by 4% to 16%.


Lecture Notes in Computer Science | 2006

Conversational inverse information for context-based retrieval of personal experiences

Yasuhiro Katagiri; Mayumi Bono; Noriko Suzuki

Recent developments in capture and archival technologies for experiences can serve to extend our memory and knowledge and to enrich our collaboration with others. Conversation is an important facet of human experience. We focus on the conversational participation structure as a type of inverse information associated with human socio-interactional events. Based on an analysis of the Interaction Corpus collected in the Ubiquitous Sensor Room environment, we argue that inverse information can be effectively employed in the retrieval and re-experiencing of the subjective quality of captured events.


international conference on universal access in human-computer interaction | 2014

The Practice of Showing ‘Who I am’: A Multimodal Analysis of Encounters between Science Communicator and Visitors at Science Museum

Mayumi Bono; Hiroaki Ogata; Katsuya Takanashi; Ayami Joh

In this paper, we aim to contribute to the design of future technologies for science museums, where there is no explicit, pre-determined knowledge relationship between Science Communicators (SCs) and visitors. We illustrate the practice of interaction between them, focusing especially on social encounters. Starting in October 2012, we conducted a field study at the National Museum of Emerging Science and Innovation (Miraikan) in Japan. Based on multimodal analysis, we examine various activities, focusing on how expert SCs communicate about science: how they begin interactions with visitors, how they maintain them, and how they conclude them.


Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction | 2012

Simple multi-party video conversation system focused on participant eye gaze: "Ptolemaeus" provides participants with smooth turn-taking

Saori Yamamoto; Nazomu Teraya; Yumika Nakamura; Narumi Watanabe; Yande Lin; Mayumi Bono; Yugo Takeuchi

This paper presents a prototype system that provides a natural multi-party conversation environment for participants in different places. Eye gaze is an important feature for maintaining smooth multi-party conversations because it indicates whom the speech is addressing or nominates the next speaker. Nevertheless, most popular video conversation systems, such as Skype or FaceTime, do not support eye gaze interaction. Multi-party video conversation systems with no eye gaze support cause serious confusion: for example, who is the addressee of the speech? Who is the next speaker? We propose a simple multi-party video conversation environment called Ptolemaeus that realizes eye gaze interaction among three or more participants without any special equipment. This system provides natural turn-taking in face-to-face video conversations and can be implemented more easily than previous schemes for eye gaze interaction.


Journal of Ambient Intelligence and Smart Environments | 2018

Towards robots reasoning about group behavior of museum visitors: leader detection and group tracking

Karla Trejo; Cecilio Angulo; Shin'ichi Satoh; Mayumi Bono

The final publication is available at IOS Press through http://dx.doi.org/10.3233/AIS-170467


international symposium on artificial intelligence | 2011

Multimodality in Multispace Interaction (MiMI)

Mayumi Bono; Nobuhiro Furuyama

We held the International Workshop on Multimodality in Multispace Interaction (MiMI) at Sunport Hall Takamatsu, Takamatsu City, Kagawa Prefecture, Japan, on December 1-2, 2011. The workshop was part of the JSAI International Symposia on Artificial Intelligence (JSAI-isAI 2011), sponsored by the Japanese Society for Artificial Intelligence. All the papers collected here were presented at the workshop, either as invited talks or as accepted papers. Incorporating discussions, comments, and questions, workshop presenters revised their papers and submitted them to these proceedings. The submitted papers were peer-reviewed once again, and in the end three of the eight papers were accepted. Our special gratitude goes to the anonymous reviewers for their dedicated efforts in making constructive and useful comments that helped the authors make their papers more convincing and intriguing. Before we proceed to the papers themselves, we would like to introduce the readers to the aims and scope of MiMI 2011 by showing a memo that we wrote in preparing the workshop proposal for JSAI.


international conference on human-computer interaction | 2003

An Analysis of Participation Structure in Conversation Based on Interaction Corpus of Ubiquitous Sensor Data.

Mayumi Bono; Noriko Suzuki; Yasuhiro Katagiri


LAK Workshops | 2014

Supporting Science Communication in a Museum using Ubiquitous Learning Logs.

Hiroaki Ogata; Kousuke Mouri; Mayumi Bono; Ayami Joh; Katsuya Takanashi; Akihiko Osaki; Hiromi Ochiai


Archive | 2016

Challenges for Robots Acting on a Stage

Mayumi Bono; Perla Maiolino; Augustin Lefebvre; Fulvio Mastrogiovanni; Hiroshi Ishiguro


language resources and evaluation | 2014

A Colloquial Corpus of Japanese Sign Language: Linguistic Resources for Observing Sign Language Conversations

Mayumi Bono; Kouhei Kikuchi; Paul Cibulka; Yutaka Osugi

Collaboration


Top co-authors of Mayumi Bono:

Nobuhiro Furuyama (National Institute of Informatics)
Yasuhiro Katagiri (Future University Hakodate)
Yasuyuki Sumi (Future University Hakodate)
Ayami Joh (National Institute of Informatics)
Shogo Okada (Tokyo Institute of Technology)
Katsumi Nitta (Tokyo Institute of Technology)