Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hiroki Minagawa is active.

Publication


Featured research published by Hiroki Minagawa.


International Conference on Computers Helping People with Special Needs | 2010

Extraction of displayed objects corresponding to demonstrative words for use in remote transcription

Yoshinori Takeuchi; Hajime Ohta; Noboru Ohnishi; Daisuke Wakatsuki; Hiroki Minagawa

A previously proposed system for extracting target objects displayed during lectures by using demonstrative words and phrases and pointing gestures has now been evaluated. The system identifies pointing gestures by analyzing the trajectory of the stick pointer and extracts the objects to which the speaker points. The extracted objects are displayed on the transcriber's monitor at a remote location, thereby helping the transcriber to translate the demonstrative word or phrase into a short description of the object. Testing using video of an actual lecture showed that the system had a recall rate of 85.7% and precision of 84.8%. Testing using two extracted scenes showed that transcribers replaced significantly more demonstrative words with short descriptions of the target objects when the extracted objects were displayed on the transcriber's screen. A transcriber using this system can thus transcribe speech more easily and produce more meaningful transcriptions for hearing-impaired listeners.
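The recall and precision figures reported above follow from counts of correct, spurious, and missed extractions. A minimal illustrative sketch of the metric definitions; the counts below are hypothetical, chosen only so that the resulting rates match the reported 84.8%/85.7%, and are not taken from the paper:

```python
def precision_recall(true_positives: int, false_positives: int,
                     false_negatives: int) -> tuple[float, float]:
    """Compute precision and recall for an extraction system."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical counts for illustration: 84 correct extractions,
# 15 spurious extractions, 14 missed objects.
p, r = precision_recall(true_positives=84, false_positives=15, false_negatives=14)
print(f"precision={p:.1%} recall={r:.1%}")
```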


Systems, Man and Cybernetics | 2002

Visual communication with dual video transmissions for remote sign language interpretation services

Nobuko Kato; Ichiro Naito; Hiroshi Murakami; Hiroki Minagawa; Yasushi Ishihara

In this paper, we discuss the number of images and their contents required for a good remote sign language interpretation service. Our approach uses dual-video transmission, which gives interpreters a full view of the remote site. For example, the interpreter can see the face of a hearing speaker giving a presentation and the faces of the deaf audience at the same time. Evaluation experiments were conducted for two kinds of scenes, a conversation scene and a lecture scene. The experimental system for the remote sign language interpretation service and the subjective evaluations are described. A questionnaire survey showed dual-video transmission to be highly practical for remote sign language interpretation services.


International Conference on Computers Helping People with Special Needs | 2002

The User Interface Design for the Sign Language Translator in a Remote Sign Language Interpretation System

Hiroki Minagawa; Ichiro Naito; Nobuko Kato; Hiroshi Murakami; Yasushi Ishihara

In recent years, broadband networks have spread quickly, and sufficient communication bandwidth for sign language video is now available. Because of this, remote sign language interpretation carried out in real time over networks from distant locations is being realized in Japan.


International Conference on Computers Helping People with Special Needs | 2012

A system for matching mathematical formulas spoken during a lecture with those displayed on the screen for use in remote transcription

Yoshinori Takeuchi; Hironori Kawaguchi; Noboru Ohnishi; Daisuke Wakatsuki; Hiroki Minagawa

A system is described for extracting and matching mathematical formulas presented orally during a lecture with those simultaneously displayed on the lecture room screen. Each mathematical formula spoken by the lecturer and displayed on the screen is extracted and shown to the transcriber. Investigation showed that, in a lecture in which many mathematical formulas were presented, about 80% of them were both spoken and pointed to on the screen, meaning that the system can help a transcriber correctly transcribe up to 80% of the formulas presented. A speech recognition system is used to extract the formulas from the lecturer's speech, and a system that analyzes the trajectory of the end of the stick pointer is used to extract the formulas from the projected images. This information is combined and used to match the pointed-to formulas with the spoken ones. In testing using actual lectures, this system extracted and matched 71.4% of the mathematical formulas both spoken and displayed and presented them for transcription with a precision of 89.4%.


International Conference on Computers for Handicapped Persons | 2014

Captioning System with Function of Inserting Mathematical Formula Images

Yoshinori Takeuchi; Yuji Sato; Kazuki Horiike; Daisuke Wakatsuki; Hiroki Minagawa; Noboru Ohnishi

We propose a captioning system with a function for inserting mathematical formula images. The system matches mathematical formulas presented orally during a lecture with those simultaneously projected on a screen in the lecture room. We then manually extract the mathematical formula images from the screen for display on the monitor of the system. A captionist can input a mathematical formula by pressing the corresponding function key, which is much easier than typing it. We conducted an experiment in which participants evaluated the usefulness of the proposed captioning system. Experimental results showed that 14 of the 22 participants could input more sentences when using the function for inserting mathematical formula images than when not using it. Furthermore, the results of a questionnaire confirmed that the proposed system is effective.
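The function-key input described above can be pictured as a mapping from keys to pre-extracted formula images that the captionist inserts into the running caption. A minimal sketch under that assumption; the key names, file names, and function below are illustrative, not from the paper:

```python
# Hypothetical mapping from function keys to pre-extracted formula image files.
formula_keys = {"F1": "formula_01.png", "F2": "formula_02.png"}

def insert_formula(caption: str, key: str) -> str:
    """Append an image placeholder to the caption when a mapped key is pressed."""
    image = formula_keys.get(key)
    if image is None:
        return caption  # unmapped key: caption is unchanged
    return caption + f"[img:{image}]"

print(insert_formula("The derivative is ", "F1"))
```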


International Conference on Computers Helping People with Special Needs | 2012

Meeting support system for the person with hearing impairment using tablet devices and speech recognition

Makoto Kobayashi; Hiroki Minagawa; Tomoyuki Nishioka; Shigeki Miyoshi

In this paper, we propose a support system for a hearing-impaired person attending a small meeting in which the other members are hearing people. In such a setting, it is difficult for the hearing-impaired person to follow the discussion. To solve this problem, the system is designed to show in real time what the members are saying. The system consists of tablet devices and a PC acting as a server. The PC runs speech recognition software and distributes the recognized results to the tablets. The main feature of the system is its method of correcting the initial speech recognition results, which cannot be expected to be perfect: meeting members themselves, rather than support staff, write corrections by hand on the tablet devices. Every member can correct every recognized result at any time. Because it does not require extra support staff, the system has the potential to serve as a low-cost hearing aid.
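The correction workflow described above, where any member can overwrite an imperfect recognition result, can be sketched as a shared transcript keyed by utterance id. The class and method names below are illustrative assumptions, not the paper's implementation:

```python
class SharedTranscript:
    """Server-side store of recognized utterances that any participant may correct."""

    def __init__(self) -> None:
        self.utterances: dict[int, str] = {}  # utterance id -> current text

    def add_recognized(self, uid: int, text: str) -> None:
        # Store the initial (possibly imperfect) speech-recognition result.
        self.utterances[uid] = text

    def correct(self, uid: int, text: str) -> None:
        # A handwritten correction from any meeting member replaces the result.
        if uid not in self.utterances:
            raise KeyError(f"unknown utterance {uid}")
        self.utterances[uid] = text

transcript = SharedTranscript()
transcript.add_recognized(1, "recognized text with an errer")
transcript.correct(1, "recognized text with an error")
print(transcript.utterances[1])
```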


International Conference on Computers for Handicapped Persons | 2004

Sign Language Communication Support at the Class for Hearing Impaired Using a Control Camera

Hiroki Minagawa

Sign language is a visual language, so information cannot be shared unless the signer is within view. In classes at Tsukuba College of Technology and other schools for the deaf, many efforts are made to share the sign language presented by a deaf student with the other participants. We therefore attempted a new method of sharing sign language using a camera and a monitor screen. The results showed that a lecturer and a student can be put into view simultaneously and that the mental and physical loads are small. However, some problems remained, such as controlling the camera and determining how best to capture and display the sign language video.


Journal of the Flow Visualization Society of Japan | 2004

Visualization Support of Remote Sign Language Interpreting System on Lecture

Ichiro Naito; Nobuko Kato; Hiroki Minagawa; Tomoyuki Nishioka; Hiroshi Murakami; Sumihiro Kawano; Mayumi Shirasawa; Shigeki Miyoshi; Yasushi Ishihara

Recently it has become possible for persons with hearing impairments in remote locations to communicate via sign language using video phones and videoconferencing systems. Video interpreting makes use of videoconferencing technology to allow remote sign language interpreting services to be provided without an interpreter on site. In this paper, we describe our experimental system for remote sign language interpretation of lectures, and we discuss how visualization support enables an interpreter in a remote interpreting service to interpret lectures more effectively.


IEEJ Transactions on Electronics, Information and Systems | 2000

A Support System for Visually Impaired Persons using Three-Dimensional Virtual Sound

Yoshihiro Kawai; Makoto Kobayashi; Hiroki Minagawa; Masayuki Miyakawa; Fumiaki Tomita


TCT Education of Disabilities | 2002

Computer education and assistive equipment for hearing impaired people

Hiroshi Murakami; Hiroki Minagawa; Tomoyuki Nishioka; Yutaka Shimizu

Collaboration


Dive into Hiroki Minagawa's collaborations.

Top Co-Authors

Nobuko Kato
National University Corporation Tsukuba University of Technology

Yasushi Ishihara
National University Corporation Tsukuba University of Technology

Shigeki Miyoshi
National University Corporation Tsukuba University of Technology

Sumihiro Kawano
National University Corporation Tsukuba University of Technology