Publication


Featured research published by Yuji Nagashima.


Proceedings of the International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction | 1997

The Recognition Algorithm with Non-contact for Japanese Sign Language Using Morphological Analysis

Hideaki Matsuo; Seiji Igi; Shan Lu; Yuji Nagashima; Yuji Takata; Terutaka Teshima

This paper documents a method of recognizing Japanese Sign Language (JSL) using projected images. The goal of movement recognition is to foster communication between hearing-impaired people and people capable of normal speech. We use a stereo camera to record three-dimensional movements, an image-processing board to track movements, and a personal computer as an image processor charting the recognition of JSL patterns. The system works by formalizing the space area around the signer according to the characteristics of the human body, determining components such as location and movement, and then recognizing sign language patterns.
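The idea of formalizing the signing space according to the characteristics of the body can be sketched as follows. This is an illustrative sketch only: the region names and boundary values are hypothetical, not the authors' actual parameters.

```python
# Hypothetical sketch: classify a tracked hand position into coarse,
# body-relative signing-space regions. Boundaries are illustrative.

def classify_location(hand_y, head_y, chin_y, chest_y, waist_y):
    """Map a hand's vertical position (y grows upward) to a region label."""
    if hand_y >= head_y:
        return "above head"
    if hand_y >= chin_y:
        return "face"
    if hand_y >= chest_y:
        return "chest"
    if hand_y >= waist_y:
        return "torso"
    return "below waist"

# Example: a hand tracked at chest height.
print(classify_location(hand_y=1.2, head_y=1.6, chin_y=1.45,
                        chest_y=1.1, waist_y=0.9))  # → chest
```

A real system would derive the boundary heights from the signer's measured body landmarks rather than fixed constants.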


IEICE Transactions on Information and Systems | 2006

Digital Encoding Applied to Sign Language Video

Kaoru Nakazono; Yuji Nagashima; Akira Ichikawa

We report a specially designed encoding technique for sign language video sequences, intended for sign telecommunication such as mobile videophones operating at low bitrates. The technique is composed of three methods: gradient coding, precedence macroblock coding, and not-coded coding. These methods are based on the idea of distributing a certain number of bits to each macroblock according to an evaluation of the importance of each part of the picture. They were implemented on a computer, and encoded data of a short clip of sign language dialogue were evaluated by deaf subjects, confirming the efficiency of the technique.
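The shared idea behind the three methods, distributing a bit budget over macroblocks by importance, might be sketched as below. The weights and budget are illustrative assumptions, not the authors' actual evaluation function.

```python
# Hedged sketch: importance-weighted bit allocation across macroblocks.
# Important regions (face, hands) receive proportionally more bits.

def allocate_bits(importance, total_bits):
    """Distribute a bit budget over macroblocks proportionally to importance."""
    total_weight = sum(importance)
    bits = [total_bits * w // total_weight for w in importance]
    # Hand out the rounding remainder to the most important blocks first.
    remainder = total_bits - sum(bits)
    for i in sorted(range(len(bits)), key=lambda i: -importance[i])[:remainder]:
        bits[i] += 1
    return bits

# e.g. face, hands, body, and two background macroblocks
weights = [4, 4, 2, 1, 1]
print(allocate_bits(weights, 1200))  # → [400, 400, 200, 100, 100]
```

In this sketch, "not-coded coding" would correspond to weights near zero for static background blocks.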


GW '01 Revised Papers from the International Gesture Workshop on Gesture and Sign Languages in Human-Computer Interaction | 2001

Notation System and Statistical Analysis of NMS in JSL

Kazuyuki Kanda; Akira Ichikawa; Yuji Nagashima; Yushi Kato; Mina Terauchi; Daisuke Hara; Masanobu Sato

To describe non-manual signals (NMSs) of Japanese Sign Language (JSL), we have developed the notational system sIGNDEX. The notation describes both JSL words and NMSs. We specify characteristics of sIGNDEX in detail. We have also made a linguistic corpus that contains 100 JSL utterances. We show how sIGNDEX successfully describes not only manual signs but also NMSs that appear in the corpus. Using the results of the descriptions, we conducted statistical analyses of NMSs, which provide us with intriguing facts about frequencies and correlations of NMSs.


International Conference on Human-Computer Interaction | 2014

Study into Methods of Describing Japanese Sign Language

Keiko Watanabe; Yuji Nagashima; Mina Terauchi; Naoto Kato; Taro Miyazaki; Seiki Inoue; Shuichi Umeda; Toshihiro Shimizu; Nobuyuki Hiruma

This paper proposes a new NVSG element model that focuses on the linguistic structure of sign language. Morphemes in sign language consist of elements such as hand shape, movement, and line of sight. The NVSG element description method describes this morphological structure as an independent hierarchical structure.


International Symposium on Broadband Multimedia Systems and Broadcasting | 2016

Provision of emergency information in sign language CG animation over integrated broadcast-broadband system

Tsubasa Uchida; Shuichi Umeda; Makiko Azuma; Taro Miyazaki; Naoto Kato; Nobuyuki Hiruma; Seiki Inoue; Yuji Nagashima

As part of an effort to expand broadcasting services based on Japanese Sign Language (JSL), we are investigating various types of JSL-based services for delivering information to people whose first language is JSL. In this report, we present a JSL computer graphics (CG) prototype system using Japan's integrated broadcast-broadband (IBB) framework, Hybridcast. Delivering JSL CG content over the Internet enables JSL CG animation to be displayed in conjunction with a broadcast program on second-screen devices such as smartphones and tablets. The signals providing emergency information are triggered by reception of an event message (EM) embedded in the transport stream (TS) transmitted over the air. Testing showed that emergency earthquake information can be provided by JSL CG animation within 2 seconds of EM reception. It also showed that local government evacuation instructions corresponding to the user's residence (based on location information stored in the TV set) can be displayed on a second-screen device. Although the drawing speed depends on the performance of the hardware and web browser, the proposed system is a promising way to provide information services in JSL.
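The dispatch step, selecting region-appropriate JSL CG content when an event message arrives, could look roughly like the sketch below. This is not the Hybridcast API; the message fields, region codes, and content table are all hypothetical.

```python
# Illustrative sketch: map a received event message (EM) plus the user's
# stored region code to JSL CG content, preferring a region-specific entry.
# The table entries and identifiers are hypothetical.

CONTENT_TABLE = {
    ("earthquake", None): "jsl_cg/earthquake_warning",
    ("evacuation", "13"): "jsl_cg/evacuation_tokyo",
    ("evacuation", "14"): "jsl_cg/evacuation_kanagawa",
}

def select_content(event_type, user_region_code):
    """Pick JSL CG content for an event, falling back to a nationwide entry."""
    return (CONTENT_TABLE.get((event_type, user_region_code))
            or CONTENT_TABLE.get((event_type, None)))

print(select_content("earthquake", "13"))  # → jsl_cg/earthquake_warning
print(select_content("evacuation", "13"))  # → jsl_cg/evacuation_tokyo
```

In the paper's system the lookup keys would come from the EM payload and the location information stored in the TV set.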


International Conference on Human-Computer Interaction | 2016

A Support Tool for Analyzing the 3D Motions of Sign Language and the Construction of a Morpheme Dictionary

Yuji Nagashima; Keiko Watanabe; Mina Terauchi; Naoto Kato; Tsubasa Uchida; Shuichi Umeda; Taro Miyazaki; Makiko Azuma; Nobuyuki Hiruma

The present paper describes a support system for analyzing and notating the three-dimensional (3D) motions of sign language, in units of frames, obtained through optical motion capture. The acquired 3D motion data consist of two basic parts: Manual Signals (MS) and Non-Manual Markers (NMM). The system enables users to analyze and describe both MS and NMM, the components of sign, while playing back the motions as a sign animation. In the analysis part, users can step through a motion frame by frame, forward and backward, and the motions can be observed from any direction and at any magnification. In the description part, the NVSG model, the sign notation system we propose, is used. Because they are stored in SQLite format, the description results serve as a database for a morpheme dictionary. This dictionary, which enables signs to be looked up from motions and motions to be observed from morphemes, is the first of its type ever created, and its usability and practicality are extremely high.
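A morpheme dictionary of this kind, stored in SQLite and searchable from motion components, might look like the following minimal sketch. The schema, column names, and example rows are hypothetical, not the paper's actual database.

```python
# Hedged sketch: a morpheme dictionary in SQLite, following the paper's
# note that NVSG description results are stored in SQLite format.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE morpheme (
    id          INTEGER PRIMARY KEY,
    gloss       TEXT,      -- morpheme label
    hand_shape  TEXT,      -- NVSG-style components (illustrative)
    movement    TEXT,
    start_frame INTEGER,   -- frame range in the captured motion
    end_frame   INTEGER)""")
con.executemany(
    "INSERT INTO morpheme (gloss, hand_shape, movement, start_frame, end_frame) "
    "VALUES (?, ?, ?, ?, ?)",
    [("HELLO", "B", "arc", 10, 42), ("THANKS", "B", "down", 50, 80)])

# Look up a sign from a motion component (motion → morpheme direction).
rows = con.execute(
    "SELECT gloss FROM morpheme WHERE movement = ?", ("arc",)).fetchall()
print(rows)  # → [('HELLO',)]
```

The reverse direction, morpheme to motion, would query the frame range by gloss and play back those frames of the captured data.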


International Conference on Computers Helping People with Special Needs | 2010

Development of universal communication aid for emergency using motion pictogram

Mari Kakuta; Kaoru Nakazono; Yuji Nagashima; Naotsune Hosono

VUTE is a communication aid technology that aims to remove communication barriers for people who are hard of hearing due to old age, deaf people, and people traveling abroad. VUTE uses motion pictograms and can be used without prior training. We created a prototype of VUTE (VUTE 2009), a system that can be used in emergency situations. This paper describes the design concept, overview, and evaluation of VUTE 2009.


International Conference on Computers Helping People with Special Needs | 2010

Context analysis of universal communication through local sign languages applying multivariate analysis

Naotsune Hosono; Hiromitsu Inoue; Yuji Nagashima

This paper discusses universal communication with icons or pictograms in the field of assistive technology (AT), using human-centred design (HCD) and context analysis with the persona model. Two typical personas are created: a deaf person in an emergency and a woman traveling from Hong Kong. Diary-like scenarios are then written, and about 40 words are selected as a minimum communication vocabulary for the dialogue. The selected words are compared across several local sign languages in order to investigate universal signs. For this purpose, a sensory evaluation method with multivariate analysis is applied, and the outcome is plotted on a single plane showing the relationships between subjects and samples from several local sign languages. The proposed sensory evaluation method provides an initial account of the relationship between fundamental words and local sign languages.


International Conference on Computers Helping People with Special Needs | 2006

Evaluation of effect of delay on sign video communication

Kaoru Nakazono; Yuji Nagashima; Mina Terauchi

Evaluation tests of sign communication over delayed video are reported, and the effect of delay on communication is discussed. The authors constructed a delayed sign dialogue experimental system. Five kinds of tasks were assigned to deaf subjects, and videos of the subjects performing the tasks under various delay times were recorded. Analysis of the data found sign communication to be more tolerant of delay than voice communication.


Systems, Man and Cybernetics | 2006

Study of Japanese Sign Language Semantic Process based on N400 Analysis

Hisaya Tanaka; Yuji Nagashima

We study efficient automatic generation and communication of Japanese Sign Language images, which requires studying the language-processing mechanisms of deaf signers. This study describes a method for examining the semantic analysis process of sign language using the N400, a component of event-related potentials. In two experiments, N400 amplitude was high during specific semantic processing, and it was higher for character presentation than for sign language presentation. We consider that acceptance and processing differ according to the type of language or modality, even though the same content is presented.

Collaboration


Dive into Yuji Nagashima's collaboration.

Top Co-Authors

Hisaya Tanaka

Aoyama Gakuin University

Hideto Ide

Aoyama Gakuin University

Hiromitsu Inoue

Chiba Prefectural University of Health Sciences
