Sayaka Yoshizu
Toyota
Publications
Featured research published by Sayaka Yoshizu.
Augmented Human International Conference | 2014
Shunsuke Koyama; Yuta Sugiura; Masa Ogata; Anusha Withana; Yuji Uema; Makoto Honda; Sayaka Yoshizu; Chihiro Sannomiya; Kazunari Nawa; Masahiko Inami
This paper proposes a multi-touch steering wheel for in-car tertiary applications. Existing interfaces for in-car applications, such as buttons and touch displays, have several operating problems. For example, drivers must consciously move their hands to the interfaces because the interfaces are fixed in specific positions. We therefore developed a steering wheel on which any touch position can serve as an operating position. The system recognizes hand gestures at any position on the steering wheel using 120 infrared (IR) sensors embedded in it; the sensors are arranged in an array around the whole wheel. A Support Vector Machine (SVM) is used to learn and recognize the different gestures from the data obtained from the sensors. The recognized gestures are flick, click, tap, stroke, and twist. Additionally, we implemented a navigation application and an audio application that exploit the torus shape of the steering wheel. We conducted an experiment to assess whether the proposed system can recognize flick gestures at three positions. Results show that, on average, 92% of flick gestures were recognized.
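The abstract describes the recognition pipeline only at a high level. Purely as an illustration, here is a minimal sketch of such a pipeline, assuming one feature vector per snapshot of the 120 IR sensors and substituting scikit-learn's SVC for the paper's SVM; the data, kernel choice, and training setup are hypothetical, not the authors' implementation.

```python
# Minimal sketch of an SVM gesture classifier over a ring of IR sensors.
# Assumptions (not from the paper): each sample is one snapshot of the
# 120 sensor readings, and scikit-learn's SVC stands in for the SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

N_SENSORS = 120  # IR sensors arrayed around the wheel (from the paper)
GESTURES = ["flick", "click", "tap", "stroke", "twist"]  # from the paper

# Placeholder data: in the real system these would be recorded sensor
# frames labeled with the gesture being performed.
rng = np.random.default_rng(0)
X = rng.random((500, N_SENSORS))          # 500 hypothetical samples
y = rng.integers(0, len(GESTURES), 500)   # hypothetical gesture labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# An RBF kernel is a common default; the paper does not specify the kernel.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, pred):.2f}")
```

In the actual system, continuous gestures such as stroke and twist would presumably require features spanning several sensor frames rather than a single snapshot.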
Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2013
Sachi Mizobuchi; Mark H. Chignell; David Canella; Moshe Eizenman; Sayaka Yoshizu; Chihiro Sannomiya; Kazunari Nawa
We conducted an experiment with 22 participants to investigate the effect of the presentation style of a secondary task on a 1-D tracking task that simulated gap control in driving. Participants performed the tracking task with a foot pedal while carrying out a secondary task (counting vowels in a list of letters) under conditions that varied modality (audio/visual), presentation style (simultaneous/sequential), task complexity (number of distractors), and time dependency (list length). Our results showed that audio conditions with a longer and/or more complex secondary task did not improve primary (tracking) task performance, even though eye-gaze dwell time on the primary monitor in these cases tended to be substantially longer than the corresponding times in the visual conditions. For a more complex version of the secondary task (longer lists), visual presentation of the whole task at once (simultaneously) led to better performance than sequential presentation (whether visual or auditory). When given a choice, participants also tended to prefer simultaneous visual presentation of the secondary task. We discuss the effect of the presentation modality of the secondary task in terms of its implications for in-vehicle user interface design.
Archive | 2011
Sayaka Yoshizu
Archive | 2009
Naoki Ihara; Yuki Kimura; Yoshinori Yokoyama; Sayaka Yoshizu
15th World Congress on Intelligent Transport Systems and ITS America's 2008 Annual Meeting (ITS America, ERTICO, ITS Japan, TransCore) | 2008
Sayaka Yoshizu; Haruki Oguri; Tsuneo Miyakoshi
Archive | 2012
Nobuhiro Mizuno; Hirotoshi Iwasaki; Satomi Yoshioka; Sayaka Yoshizu; Hirotaka Nakajima
Archive | 2012
Sayaka Yoshizu
Archive | 2011
Takao Suzuki; Kensuke Hanaoka; Sayaka Yoshizu; Hironobu Sugimoto; Hideaki Miyazaki; Shoji Kamioka; Hiroshi Takeuchi; Toshibumi Obayashi; Koji Suzumiya; Yoichi Nomoto; Ichiro Usami
Archive | 2011
Sayaka Yoshizu
Archive | 2009
Sayaka Yoshizu; Yoshinori Yokoyama; Naoki Ihara; Yuki Kimura