Publication


Featured research published by Haruhide Hokari.


PLOS ONE | 2013

Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

Isao Nambu; Masashi Ebisawa; Masumi Kogure; Shohei Yano; Haruhide Hokari; Yasuhiro Wada

The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. Auditory BCIs can use many characteristics of the stimuli as cues, such as tone, pitch, and voice. Spatial information about auditory stimuli also provides useful input for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables virtual auditory stimuli to be presented to users from any direction through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify from EEG signals whether the subject attended to the direction of a presented stimulus. The mean accuracy across subjects was 70.0% in single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performance. These results suggest that out-of-head sound localization enables a high-performance, loudspeaker-less portable BCI system.
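A minimal sketch of the classification scheme the abstract describes, with synthetic ERP-like data standing in for the recorded EEG; all shapes, amplitudes, and parameters are illustrative assumptions, not the authors' settings:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic epochs (trials x channels x samples): "target" epochs carry a
# small P300-like deflection roughly 200-500 ms after stimulus onset.
n_trials, n_ch, n_samp = 300, 8, 128
X = rng.normal(0.0, 1.0, (2 * n_trials, n_ch, n_samp))
y = np.r_[np.ones(n_trials), np.zeros(n_trials)]
X[:n_trials, :, 50:100] += 0.3            # attended-direction response

def averaged_epochs(X, y, k):
    """Average k same-class epochs, as in the paper's trial averaging."""
    feats, labels = [], []
    for label in (0, 1):
        idx = np.where(y == label)[0]
        for i in range(0, len(idx) - k + 1, k):
            feats.append(X[idx[i:i + k]].mean(axis=0).ravel())
            labels.append(label)
    return np.stack(feats), np.array(labels)

for k in (1, 10):                          # single-trial vs 10-trial averaging
    Xk, yk = averaged_epochs(X, y, k)
    acc = cross_val_score(SVC(kernel="linear"), Xk, yk, cv=5).mean()
    print(f"{k:2d}-trial averaging: mean accuracy = {acc:.2f}")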


The Scientific World Journal | 2014

EEG Channel Selection Using Particle Swarm Optimization for the Classification of Auditory Event-Related Potentials

Alejandro Gonzalez; Isao Nambu; Haruhide Hokari; Yasuhiro Wada

Brain-machine interfaces (BMI) rely on the accurate classification of event-related potentials (ERPs) and their performance greatly depends on the appropriate selection of classifier parameters and features from dense-array electroencephalography (EEG) signals. Moreover, in order to achieve a portable and more compact BMI for practical applications, it is also desirable to use a system capable of accurate classification using information from as few EEG channels as possible. In the present work, we propose a method for classifying P300 ERPs using a combination of Fisher Discriminant Analysis (FDA) and a multiobjective hybrid real-binary Particle Swarm Optimization (MHPSO) algorithm. Specifically, the algorithm searches for the set of EEG channels and classifier parameters that simultaneously maximize the classification accuracy and minimize the number of used channels. The performance of the method is assessed through offline analyses on datasets of auditory ERPs from sound discrimination experiments. The proposed method achieved a higher classification accuracy than that achieved by traditional methods while also using fewer channels. It was also found that the number of channels used for classification can be significantly reduced without greatly compromising the classification accuracy.
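The search the abstract outlines can be pictured with a toy single-objective variant: a binary PSO over channel masks, scored with scikit-learn's LinearDiscriminantAnalysis standing in for FDA, and the channel count folded into the fitness as a penalty rather than treated as a second objective as in the paper's MHPSO. Everything below is an illustrative assumption:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic ERP data: only channels 2-4 carry class information.
n_ch = 16
X = rng.normal(size=(200, n_ch, 32))
y = rng.integers(0, 2, 200)
X[y == 1, 2:5, :] += 0.4

def fitness(mask):
    """Accuracy with the selected channels minus a per-channel penalty,
    a scalarized version of the paper's two objectives."""
    if not mask.any():
        return -1.0
    Xm = X[:, mask, :].reshape(len(X), -1)
    acc = cross_val_score(LinearDiscriminantAnalysis(), Xm, y, cv=3).mean()
    return acc - 0.01 * mask.sum()

# Binary PSO: each particle holds per-channel inclusion probabilities.
n_particles, n_iter = 12, 20
pos = rng.random((n_particles, n_ch))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.full(n_particles, -np.inf)
gbest, gbest_fit = np.ones(n_ch, dtype=bool), -np.inf

for _ in range(n_iter):
    masks = rng.random(pos.shape) < pos    # sample channel masks
    fits = np.array([fitness(m) for m in masks])
    better = fits > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fits[better]
    if fits.max() > gbest_fit:
        gbest_fit, gbest = fits.max(), masks[fits.argmax()].copy()
    vel = (0.7 * vel
           + 1.5 * rng.random(pos.shape) * (pbest - pos)
           + 1.5 * rng.random(pos.shape) * (gbest.astype(float) - pos))
    pos = np.clip(pos + vel, 0.01, 0.99)

print("selected channels:", np.where(gbest)[0], f"fitness: {gbest_fit:.2f}")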


Systems, Man and Cybernetics | 2013

Towards the Classification of Single-Trial Event-Related Potentials Using Adapted Wavelets and Particle Swarm Optimization

Alejandro Gonzalez; Isao Nambu; Haruhide Hokari; Masahiro Iwahashi; Yasuhiro Wada

The accurate detection of event-related potentials (ERPs) is of great importance for constructing brain-machine interfaces (BMIs) and constitutes a classification problem in which the appropriate selection of features from dense-array EEG signals and the tuning of classifier parameters are critical. In the present work, we propose a method for classifying single-trial ERPs using a combination of the Lifting Wavelet Transform (LWT), Support Vector Machines (SVMs), and Particle Swarm Optimization (PSO). In particular, PSO searches for the LWT filters, the set of EEG channels, and the SVM parameters that maximize the classification accuracy. We evaluate the method's performance through offline analyses on the datasets from BCI Competitions II and III. The proposed method achieved, in most cases, a classification accuracy similar to or higher than that achieved by other methods, and yielded adapted wavelet basis functions and channel sets that match the time-frequency and spatial properties of the P300 ERP.
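The lifting wavelet transform at the heart of the method builds a wavelet from predict and update steps whose coefficients remain free parameters, which is what makes them searchable by PSO: the transform is perfectly invertible for any coefficient choice. A minimal one-step sketch (the coefficients here are arbitrary, not the adapted filters from the paper):

import numpy as np

def lifting_step(x, p=0.5, u=0.25):
    """One lifting step: split into even/odd samples, predict, update.
    p and u are the free filter coefficients a swarm could tune
    (p=1.0, u=0.5 gives the classic Haar pair)."""
    even, odd = x[0::2], x[1::2]
    detail = odd - p * even            # predict: remove what even explains
    approx = even + u * detail         # update: preserve the coarse signal
    return approx, detail

def inverse_lifting_step(approx, detail, p=0.5, u=0.25):
    even = approx - u * detail
    odd = detail + p * even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.sin(np.linspace(0, 4 * np.pi, 64))
a, d = lifting_step(x)
assert np.allclose(inverse_lifting_step(a, d), x)   # invertible for any p, u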


International Conference on Neural Information Processing | 2012

Adaptive modeling of HRTFs based on reinforcement learning

Shuhei Morioka; Isao Nambu; Shohei Yano; Haruhide Hokari; Yasuhiro Wada

Although recent studies on out-of-head sound localization technology have aimed at applications in entertainment, this technology can also be used to provide an interface connecting a computer to the human brain. An effective out-of-head system requires an accurate head-related transfer function (HRTF); however, measuring the HRTF accurately is difficult. We propose a new method based on reinforcement learning to estimate the HRTF accurately from measurement data, and we validate it through simulations. We used the actor-critic paradigm to learn the HRTF parameters and the autoregressive moving average (ARMA) model to reduce the number of such parameters. Our simulations suggest that an accurate HRTF can be estimated with this method. The proposed method is expected to be useful not only for entertainment applications but also for brain-machine interfaces (BMIs) based on out-of-head sound localization technology.
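A heavily simplified sketch of the two ingredients named in the abstract: an ARMA model keeps the parameter count small, and a REINFORCE-style actor-critic loop (Gaussian exploration around the current parameters, with a running reward baseline as the critic) nudges the parameters toward a target impulse response. The target system, model orders, and learning constants are all made up for illustration and are not the paper's state or reward design:

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(2)

# Target "HRTF": the impulse response of a small reference system.
impulse = np.r_[1.0, np.zeros(63)]
target = lfilter([1.0, 0.4], [1.0, -0.5, 0.2], impulse)

def response(theta):
    """Impulse response of the ARMA(2,2) model b = [b0, b1], a = [1, a1, a2]."""
    return lfilter(theta[:2], np.r_[1.0, theta[2:]], impulse)

theta = rng.normal(0, 0.1, 4)              # actor's mean parameters
baseline, sigma, lr = 0.0, 0.05, 0.5       # critic baseline, exploration, step

for _ in range(2000):
    noise = rng.normal(0, sigma, 4)        # actor explores around theta
    err = np.sum((response(theta + noise) - target) ** 2)
    reward = -min(err, 100.0)              # clipped to keep updates bounded
    advantage = reward - baseline          # critic's error signal
    baseline += 0.05 * advantage           # critic: running reward baseline
    theta += lr * advantage * noise        # reinforce helpful perturbations

print("learned ARMA parameters:", np.round(theta, 2))
# with luck, close to the true [1.0, 0.4, -0.5, 0.2]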


IEEE Transactions on Audio, Speech, and Language Processing | 2006

Stereo width control using interpolation and extrapolation of time-frequency representation

Takahiro Umayahara; Haruhide Hokari; Shoji Shimada

This paper presents a method to control the stereo stage width, providing additional functionality to a stereophonic sound-field recording/reproduction system that uses two microphones and two loudspeakers. The proposed method interpolates or extrapolates the time-frequency representations of the left and right stereo signals according to the desired scaling magnification. A mathematical analysis is introduced that clarifies the scaling ability of the proposed method. Listening tests are performed using a stereo signal consisting of a mixture of two speech signals with different time-lags. The results indicate that a stereo signal whose time-lags are scaled exactly before mixing and one whose time-lags are scaled approximately by the proposed method after mixing yield almost the same aural impression when the scaling magnification is under two. The listening tests also clarified the superiority of the proposed method over conventional methods in terms of sound quality.
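One way to picture the interpolation/extrapolation: in the STFT domain, move each channel toward the mid (common) signal to narrow the stage, or away from it to widen it. This mid/side scaling is a simplified stand-in for the paper's formulation, with made-up parameters:

import numpy as np
from scipy.signal import stft, istft

def scale_stereo_width(left, right, g, fs=16000, nperseg=512):
    """Widen (g > 1, extrapolation) or narrow (g < 1, interpolation) the
    stereo stage by scaling each channel's deviation from the mid signal
    in the time-frequency domain."""
    _, _, L = stft(left, fs, nperseg=nperseg)
    _, _, R = stft(right, fs, nperseg=nperseg)
    M = 0.5 * (L + R)                      # mid (common) component
    L2 = M + g * (L - M)                   # move away from / toward the mid
    R2 = M + g * (R - M)
    _, l2 = istft(L2, fs, nperseg=nperseg)
    _, r2 = istft(R2, fs, nperseg=nperseg)
    return l2, r2

# Example: a source delayed between channels, width scaled by 1.5.
fs = 16000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 440 * t)
left, right = sig, np.roll(sig, 8)         # 0.5 ms inter-channel lag
wide_l, wide_r = scale_stereo_width(left, right, g=1.5, fs=fs)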


Frontiers in Neuroscience | 2018

Improving the Performance of an Auditory Brain-Computer Interface Using Virtual Sound Sources by Shortening Stimulus Onset Asynchrony

Miho Sugi; Yutaka Hagimoto; Isao Nambu; Alejandro Gonzalez; Yoshinori Takei; Shohei Yano; Haruhide Hokari; Yasuhiro Wada

Recently, a brain-computer interface (BCI) using virtual sound sources has been proposed for estimating user intention via electroencephalogram (EEG) in an oddball task. However, its performance is still insufficient for practical use. In this study, we examine the impact of shortening the stimulus onset asynchrony (SOA) on this auditory BCI. While a very short SOA might improve performance, sound perception and task performance become difficult, and event-related potentials (ERPs) may not be induced if the SOA is too short. Therefore, we carried out behavioral and EEG experiments to determine the optimal SOA. In the experiments, participants were instructed to direct attention to one of six virtual sounds (the target direction). We used eight SOA conditions: 200, 300, 400, 500, 600, 700, 800, and 1,100 ms. In the behavioral experiment, we recorded participants' behavioral responses to the target direction and evaluated recognition performance of the stimuli. In all SOA conditions, recognition accuracy was over 85%, indicating that participants could recognize the target stimuli correctly. Next, using a silent counting task in the EEG experiment, we found significant differences between target and non-target responses in all but the 200-ms SOA condition. When we calculated identification accuracy using Fisher discriminant analysis (FDA), the SOA could be shortened to 400 ms without decreasing the identification accuracy, so improvements in performance (evaluated by BCI utility) could be achieved. On average, higher BCI utilities were obtained in the 400- and 500-ms SOA conditions. Thus, auditory BCI performance can be optimized for both behavioral and neurophysiological responses by shortening the SOA.
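To see why a shorter SOA can raise throughput even without an accuracy gain, one can compute an information-rate metric per SOA. The sketch below uses Wolpaw's information transfer rate as a stand-in for the BCI utility metric in the paper; the accuracies and stimulus counts are hypothetical:

import numpy as np

def itr_bits_per_min(p, n_classes, trial_seconds):
    """Wolpaw information transfer rate, used here as a stand-in for BCI
    utility. p: accuracy; trial_seconds: time per selection, which scales
    with the SOA and the number of averaged stimuli."""
    if p <= 1.0 / n_classes:
        return 0.0
    bits = (np.log2(n_classes) + p * np.log2(p)
            + (1 - p) * np.log2((1 - p) / (n_classes - 1)))
    return 60.0 * bits / trial_seconds

# Hypothetical accuracies per SOA (ms): shorter SOA means faster trials,
# so throughput can rise even if accuracy stays flat or dips slightly.
n_stimuli = 60                      # stimuli per selection (illustrative)
for soa_ms, acc in [(400, 0.85), (500, 0.85), (800, 0.87), (1100, 0.88)]:
    t = n_stimuli * soa_ms / 1000.0
    print(f"SOA {soa_ms:>4} ms: {itr_bits_per_min(acc, 6, t):.1f} bits/min")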


Systems, Man and Cybernetics | 2013

Improving the Localization Accuracy of Virtual Sound Source through Reinforcement Learning

Manabu Washizu; Shuhei Morioka; Isao Nambu; Shohei Yano; Haruhide Hokari; Yasuhiro Wada

Localization of virtual sound sources is a technology that reproduces three-dimensional sound over stereo earphones, and its applications to brain-machine interfaces (BMIs) that use auditory stimuli are being investigated. To achieve virtual sounds with this technology, the head-related transfer function (HRTF) of the user must be measured accurately. The HRTF can be measured accurately with appropriate microphone placement and a suitable measurement environment, but procuring such an ideal setup is usually difficult. To overcome this, we instead attempt to obtain an accurate HRTF using reinforcement learning. We performed simulations and verified that the proposed method improved HRTF accuracy in 24 horizontal directions. Moreover, in online learning experiments, the localization accuracy improved for three subjects, suggesting the validity of our method.


Electronics and Communications in Japan, Part III: Fundamental Electronic Science | 2001

Estimation of multiple talker locations using randomly positioned microphones

Kazunori Kobayashi; Haruhide Hokari; Shoji Shimada

The talker location estimation technique can be applied to remote conference systems with presence, which simultaneously transmit the voice signal and the talker location information to reconstruct the acoustic field, and to monitoring systems in which the focus of a television camera is automatically set to the sound source. In this paper, a new multiple-talker location estimation method based on synchronous multiplication is proposed. Unique features of the proposed method include the ability to detect the locations of several talkers speaking at the same time and its suitability for real-time processing. In an experiment, location estimation is carried out for several talkers arranged two-dimensionally, confirming that the proposed method is effective for estimating multiple talker locations. Further, the method is compared with the synchronous addition method and shown to be superior in the accuracy of location estimation and the separation of multiple talkers.
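A toy two-dimensional sketch of the synchronous-multiplication idea: for each candidate point, undo each microphone's propagation delay and multiply the aligned signals, so the product stays large only where all channels agree (the synchronous-addition baseline would sum instead). The geometry, sample rate, and noise level are invented for illustration:

import numpy as np

rng = np.random.default_rng(3)
fs, c = 16000, 340.0                        # sample rate (Hz), speed of sound (m/s)

mics = rng.uniform(0.0, 2.0, (6, 2))        # randomly positioned microphones (m)
talker = np.array([1.2, 0.7])               # true talker position, to be recovered

src = rng.normal(size=fs // 4)              # talker signal (noise burst)
n = src.size
obs = []
for m in mics:
    d = int(round(np.linalg.norm(m - talker) / c * fs))
    delayed = np.r_[np.zeros(d), src][:n]   # propagation delay (gain ignored)
    obs.append(delayed + 0.05 * rng.normal(size=n))

def score(point):
    """Synchronous multiplication: undo each microphone's delay for the
    candidate point, then multiply the aligned signals; the product stays
    large only when every channel lines up, i.e., at a talker location."""
    aligned = []
    for m, x in zip(mics, obs):
        d = int(round(np.linalg.norm(m - point) / c * fs))
        aligned.append(np.r_[x[d:], np.zeros(d)])
    return np.abs(np.prod(aligned, axis=0)).mean()

# Coarse grid search over candidate locations.
grid = [np.array([gx, gy]) for gx in np.linspace(0, 2, 21)
        for gy in np.linspace(0, 2, 21)]
best = max(grid, key=score)
print("estimated talker location:", np.round(best, 2))   # near (1.2, 0.7)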


IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | 2000

A Study on Personal Difference in the Transfer Functions of Sound Localization Using Stereo Earphones

Shohei Yano; Haruhide Hokari; Shoji Shimada


Acoustical Science and Technology | 2005

A study on switching of the transfer functions focusing on sound quality

Akihiro Kudo; Haruhide Hokari; Shoji Shimada

Collaboration


Dive into Haruhide Hokari's collaborations.

Top Co-Authors

Shoji Shimada, Nagaoka University of Technology
Akihiro Kudo, Nagaoka University of Technology
Shohei Yano, Nagaoka University of Technology
Isao Nambu, Nagaoka University of Technology
Yasuhiro Wada, Nagaoka University of Technology
Toshiharu Horiuchi, Nagaoka University of Technology
Alejandro Gonzalez, Nagaoka University of Technology
Kazunori Kobayashi, Nagaoka University of Technology
Shuhei Morioka, Nagaoka University of Technology
Takahiro Umayahara, Nagaoka University of Technology