
Publication


Featured research published by Tetsuro Chino.


Systems and Computers in Japan | 1995

Automatic abstract generation based on document structure analysis and its evaluation as a document retrieval presentation function

Kazuo Sumita; Seiji Miike; Kenji Ono; Tetsuro Chino

An automatic abstract generation system including a document structure analyzer is described. From a document, the system extracts a text structure representing the rhetorical relations among sentences and sentence chunks. It evaluates sentence importance on the basis of the analyzed structure and decides which sentences should be discarded from an abstract. It also attempts to generate an abstract consistent with the original text by replacing connective expressions. The generated abstracts were evaluated from two points of view: the cover rate of key sentences, and their quality as a document presentation medium. Both experiments showed the generated abstracts to be valid.
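The sentence-selection idea can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the binary-tree representation, the relation names, and the satellite-penalty scheme are all assumptions made for the example.

```python
# Sketch: sentences form a binary rhetorical-structure tree in which each
# relation marks one child as the "nucleus" (essential) and the other as a
# "satellite" (droppable). Deeper satellites get larger penalties, and the
# abstract keeps the lowest-penalty sentences in original order.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    relation: Optional[str] = None      # e.g. "elaboration" (illustrative)
    nucleus: Optional["Node"] = None
    satellite: Optional["Node"] = None
    sentence: Optional[str] = None      # set on leaves only

def rank(node, penalty=0, out=None):
    """Collect (penalty, sentence) pairs; each satellite hop adds 1."""
    if out is None:
        out = []
    if node.sentence is not None:
        out.append((penalty, node.sentence))
    else:
        rank(node.nucleus, penalty, out)
        rank(node.satellite, penalty + 1, out)
    return out

def abstract(root, keep):
    """Keep the `keep` most important sentences, preserving order."""
    ranked = rank(root)
    indexed = [(p, i, s) for i, (p, s) in enumerate(ranked)]
    chosen = sorted(sorted(indexed)[:keep], key=lambda t: t[1])
    return [s for _, _, s in chosen]

tree = Node(relation="elaboration",
            nucleus=Node(sentence="A"),
            satellite=Node(relation="example",
                           nucleus=Node(sentence="B"),
                           satellite=Node(sentence="C")))
summary = abstract(tree, keep=2)   # drops the deepest satellite, "C"
```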


Eye Tracking Research & Applications | 2000

“GazeToTalk”: a nonverbal interface with meta-communication facility (Poster Session)

Tetsuro Chino; Kazuhiro Fukui; Kaoru Suzuki

We propose a new human interface (HI) system named “GazeToTalk” that is implemented with vision-based gaze detection, acoustic speech recognition (ASR), and an animated human-like CG agent with facial expressions and gestures. The “GazeToTalk” system demonstrates that eye-tracking technologies can effectively improve HI when combined with other non-verbal messages such as facial expressions and gestures. Conventional voice interface systems have two serious drawbacks: (1) they cannot distinguish input voice from other noise, and (2) they cannot determine who the intended hearer of each utterance is. A “push-to-talk” mechanism can ease these problems, but it spoils the advantages of voice interfaces (e.g., contact-free operation and suitability in hand-busy situations). In real human dialogues, besides exchanging content messages, people use non-verbal messages such as gaze, facial expressions, and gestures to establish or maintain conversations, or to recover from problems that arise in them. The “GazeToTalk” system simulates this kind of “meta-communication” facility by utilizing vision-based gaze detection, ASR, and a human-like CG agent. When the user intends to input voice commands, he gazes at the agent on the display to request to talk, just as in daily human-human dialogues. This gaze is recognized by the gaze detection module, and the agent shows a particular facial expression and gesture as feedback to establish “eye contact.” The system then accepts or rejects speech input from the user depending on the state of the “eye contact.” This mechanism allows the “GazeToTalk” system to accept only intended voice input and to ignore other voices and environmental noise, without forcing any extra operation on the user. We also demonstrate an extended mechanism that handles more flexible “eye contact” variations.
Preliminary experiments suggest that, in the context of meta-communication, non-verbal messages can be utilized to improve HI in terms of naturalness, friendliness, and tactfulness.
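The eye-contact gating described in the abstract can be sketched as a tiny state machine. This is an illustrative sketch, not the actual GazeToTalk implementation; the class and method names are invented.

```python
# Sketch: speech is accepted only while "eye contact" with the agent is
# established, so stray voices and background noise are ignored without
# any push-to-talk operation by the user.
class GazeGate:
    def __init__(self):
        self.eye_contact = False
        self.accepted = []

    def on_gaze(self, at_agent: bool):
        # the gaze detector reports whether the user looks at the agent
        self.eye_contact = at_agent

    def on_speech(self, utterance: str):
        # recognizer output is kept only during eye contact
        if self.eye_contact:
            self.accepted.append(utterance)

gate = GazeGate()
gate.on_speech("background chatter")   # ignored: no eye contact yet
gate.on_gaze(True)                     # user looks at the agent
gate.on_speech("open the mail")        # accepted
gate.on_gaze(False)                    # user looks away
gate.on_speech("unrelated noise")      # ignored again
```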


International Conference on Spoken Language Processing | 1996

A new discourse structure model for spontaneous spoken dialogue

Tetsuro Chino; Hiroyuki Tsuboi

In this paper, we propose a new discourse structure model and, based on it, report the results of an analysis of Japanese telephone dialogues. The model provides a method for describing and analyzing the structure of spontaneous spoken dialogue, and the analysis clarifies some characteristics of spontaneous spoken dialogue over the telephone.


Applied Artificial Intelligence | 1999

Animated interface agent applying ATMS-based multimodal input interpretation

Yasuyuki Kono; Takehide Yano; Tetsuro Chino; Kaoru Suzuki; Hiroshi Kanazawa

Two requirements should be met in order to develop a practical multimodal interface system, i.e., (1) integration of delayed-arrival data and (2) elimination of ambiguity in the recognition results of each modality. This paper presents an efficient and generic methodology for interpreting multimodal input that satisfies these requirements. The proposed methodology can integrate delayed-arrival data satisfactorily and efficiently interpret multimodal input that contains ambiguity. The multimodal interpretation process is regarded as hypothetical reasoning, and the control mechanism of interpretation is formalized by applying the assumption-based truth maintenance system (ATMS). The proposed method is applied to an interface agent system that accepts multimodal input consisting of voice and direct indication gestures on a touch display. The system communicates with the user through a human-like interface agent's three-dimensional motion image with facial expressions.
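The ambiguity-elimination idea can be illustrated with a much simpler stand-in for the ATMS: enumerate joint hypotheses across modalities and discard contradictory combinations. The candidate lists, scores, and compatibility table below are invented for illustration and are not the paper's formalization.

```python
# Sketch: each modality yields several scored candidates; an interpretation
# combines one candidate per modality and survives only if the pair is
# semantically compatible. Surviving interpretations are ranked by the
# product of their candidate scores.
from itertools import product

speech = [("delete", 0.7), ("select", 0.3)]     # ASR hypotheses (invented)
gesture = [("icon_3", 0.6), ("icon_7", 0.4)]    # pointing hypotheses (invented)

# which verb/object pairs make sense together (illustrative table)
compatible = {("delete", "icon_3"), ("delete", "icon_7"),
              ("select", "icon_3")}

def interpret(speech, gesture):
    """Enumerate joint hypotheses, drop contradictions, rank by score."""
    joint = [((verb, obj), v_score * o_score)
             for (verb, v_score), (obj, o_score) in product(speech, gesture)
             if (verb, obj) in compatible]
    return sorted(joint, key=lambda h: -h[1])

best = interpret(speech, gesture)[0]   # highest-scoring consistent reading
```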


Archive | 2016

Temporal–Spatial Collaboration Support for Nursing and Caregiving Services

Naoshi Uchihira; Kentaro Torii; Tetsuro Chino; Kunihiko Hiraishi; Sunseong Choe; Yuji Hirabayashi; Taro Sugihara

An aging population is driving a tremendous need to improve both the efficiency and quality of nursing and caregiving. Toward this end, a collaboration support system would be useful because indirect operations such as recordkeeping and communication are a significant part of healthcare work. This chapter proposes an information supervisory control model for a collaboration support system targeted at nursing and caregiving service systems; furthermore, we have developed a smart voice messaging system based on this model. We then formulate hypotheses to be examined through field tests, virtual field tests, and simulation from the perspective of information supervisory control.


International Conference on Universal Access in Human-Computer Interaction | 2014

A Pilot Study in Using a Smart Voice Messaging System to Create a Reflection-in-Caregiving Workshop

Taro Sugihara; Yuji Hirabayashi; Kentaro Torii; Tetsuro Chino; Naoshi Uchihira

This paper describes a pilot study on reflection in caregiving using an assistive technology based on smart voice messaging, which uses Bluetooth for location identification and supports annotation of tweets. We conducted three kinds of investigation, namely a role-stress questionnaire, semi-structured interviews, and a reflection workshop, to explore the technology's potential for inducing behavior change in caregivers. We conclude that the assistive technology shows potential for promoting reflection and behavior change.


IFIP Working Conference on Human Work Interaction Design | 2013

Work and Speech Interactions among Staff at an Elderly Care Facility

Tetsuro Chino; Kentaro Torii; Naoshi Uchihira; Yuji Hirabayashi

We observed bathing assistance, night shift operations, and handover tasks at a private elderly care home for 8 days. We collected approximately 400 h of recorded speech, 42,000 transcribed utterances, data from an indoor location tracking system, and handwritten notes by human observers. We also analyzed speech interaction in the bathing assistance task. We found that (1) staff members are almost always speaking during tasks, (2) remote communication is rare, (3) about 75% of utterances are spoken to the residents, (4) the intended recipient of utterances is frequently switched, and (5) about 17% of utterances contain personal names. We also attempted to cluster utterances into passages, and about 33% of passages contained only one person’s name. These results should be applicable to semi-automatic long-term care record keeping.
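One simple way to cluster utterances into passages is a silence-gap heuristic, sketched below with invented data and an invented threshold; the paper does not specify its clustering method, so this is only an assumption about how such grouping might be done.

```python
# Sketch: start a new passage whenever the gap between consecutive
# utterance start times exceeds a threshold. Utterances and the 5-second
# gap are illustrative, not taken from the study's data.
def to_passages(utts, gap=5.0):
    """utts: list of (start_time_sec, text), sorted by time."""
    passages, current, prev = [], [], None
    for t, text in utts:
        if prev is not None and t - prev > gap:
            passages.append(current)   # long silence: close the passage
            current = []
        current.append(text)
        prev = t
    if current:
        passages.append(current)
    return passages

utts = [(0.0, "Mr. Sato, the bath is ready"),
        (2.0, "watch your step"),
        (20.0, "next is room 5")]
passages = to_passages(utts)   # two passages: the 18 s gap splits them
```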


Archive | 1998

Multi-modal interface apparatus and method

Tetsuro Chino; Tomoo Ikeda; Yasuyuki Kono; Takehide Yano; Katsumi Tanaka


Archive | 2006

Communication support apparatus and computer program product for supporting communication by performing translation between languages

Satoshi Kamatani; Tetsuro Chino


Archive | 1996

Intelligent multi modal communications apparatus utilizing predetermined rules to choose optimal combinations of input and output formats

Yasuyuki Kono; Tomoo Ikeda; Tetsuro Chino; Katsumi Tanaka

Collaboration


Dive into Tetsuro Chino's collaborations.

Top Co-Authors


Naoshi Uchihira

Japan Advanced Institute of Science and Technology

Yasuyuki Kono

Nara Institute of Science and Technology
