Publication


Featured research published by Satoshi Nishiguchi.


International Conference on Multimedia and Expo | 2003

CARMUL: concurrent automatic recording for multimedia lecture

Yoshinari Kameda; Satoshi Nishiguchi; Michihiko Minoh

An advanced multimedia lecture recording system is presented in this paper. The purpose of our system is to capture multimodal information that can be received only when a lecture is being held in a classroom where the teacher and students share the same time and space. The system captures not only handwriting and slide-switching intervals but also audio and video of the participants together with their spatial location information. These recorded media are served to users in the distance-learning process. We designed the system so that it does not interfere with classes: it can archive lectures while classes are held on a regular basis and generates a multimedia archive automatically. Teachers are neither asked to remain in a fixed place nor wired with devices they would have to wear. We have implemented our approach, and our lecture archive system has been recording six classes per week since October 2002.


International Conference on Multimedia and Expo | 2003

A sensor-fusion method for detecting a speaking student

Satoshi Nishiguchi; Kazuhide Higashi; Yoshinari Kameda; Michihiko Minoh

In this paper, we propose a method for detecting the location of a speaker, the target of automatic video filming in distance learning and lecture archiving. A lecture video must show the face of a speaking student, so the speaker's location must be detected. An acoustic sensor such as a microphone array is widely used to locate a sound source. However, it is difficult to locate a sound source precisely using only a microphone array because of noise in a large space such as a lecture room. We therefore propose a method for detecting the speaker's location in the lecture room more precisely using not only the microphone array but also visual sensors. The results show that our sensor-fusion method improved the precision of speaker localization by about 20%.
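The fusion idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function name, coordinates, and the simple nearest-candidate rule are all invented. The visual sensors restrict the search to detected students, and the (noisy) acoustic estimate disambiguates which of them is speaking.

```python
import math

def fuse_speaker_location(mic_estimate, visual_candidates):
    """Pick the visually detected seat position (x, y) that best agrees
    with the microphone-array estimate of the sound source."""
    def score(candidate):
        # Euclidean distance between the acoustic estimate and a
        # visually detected candidate position.
        return math.dist(mic_estimate, candidate)
    return min(visual_candidates, key=score)

# The acoustic estimate is noisy; the true speaker sits at (2.0, 5.0).
mic_estimate = (2.4, 4.6)
seats = [(0.0, 1.0), (2.0, 5.0), (4.0, 5.0)]
print(fuse_speaker_location(mic_estimate, seats))  # -> (2.0, 5.0)
```

In this sketch the acoustic estimate alone would be ambiguous between nearby seats, but snapping it to the discrete set of visually detected students yields a precise answer, which mirrors why the paper's sensor fusion improves on the microphone array alone.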


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2003

Environmental Media – In the Case of Lecture Archiving System –

Michihiko Minoh; Satoshi Nishiguchi

Environmental media, which naturally record human communication activities, are discussed. Since human activity in society is among the most important content, it is important to record it naturally; we call media that do so "environmental media". In this paper, the concept of environmental media is first discussed, followed by an example in the classroom situation: the lecture archiving system. Through these discussions, we try to clarify the concept and usefulness of environmental media.


IEEE International Conference on Control System, Computing and Engineering | 2012

Group emotion estimation using Bayesian network based on facial expression and prosodic information

Tatsuya Hayamizu; Mutsuo Sano; Kenzaburo Miyawaki; Hiroaki Mori; Satoshi Nishiguchi; Nobuyuki Yamashita

Recently, there have been many studies on activating group communication, so it is important to measure the state of group communication easily and stably. We focus on the group emotion expressed during conversation and propose a method for estimating it reliably using a Bayesian network based on both facial image features and prosodic information. In the proposed method, group emotion is derived from the estimated state values of individual emotions in the Bayesian network. The effectiveness of the proposed method was verified through experiments involving group conversation.
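The paper's Bayesian network and its face/prosody features are not reproduced here, but the final aggregation step it describes, deriving a group emotion from individual emotion estimates, can be sketched under a naive conditional-independence assumption. All labels and probabilities below are invented for illustration.

```python
def group_emotion(individual_posteriors):
    """Combine per-person emotion posteriors P(emotion | face, prosody)
    into a group-level distribution, assuming individuals are
    conditionally independent given the group emotion (a naive
    simplification of a full Bayesian network)."""
    emotions = individual_posteriors[0].keys()
    scores = {}
    for e in emotions:
        p = 1.0
        for posterior in individual_posteriors:
            p *= posterior[e]  # product of per-person evidence
        scores[e] = p
    total = sum(scores.values())
    return {e: p / total for e, p in scores.items()}  # normalize

people = [
    {"positive": 0.7, "neutral": 0.2, "negative": 0.1},
    {"positive": 0.6, "neutral": 0.3, "negative": 0.1},
    {"positive": 0.5, "neutral": 0.4, "negative": 0.1},
]
dist = group_emotion(people)
print(max(dist, key=dist.get))  # -> positive
```

Multiplying per-person posteriors makes the group estimate more stable than any single person's noisy reading, which is the motivation the abstract gives for working at the group level.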


International Conference on Human-Computer Interaction | 2018

Verification of Stereoscopic Effect Induced Parameters of 3D Shape Monitor Using Reverse Perspective

Ryoichi Takeuchi; Wataru Hashimoto; Yasuharu Mizutani; Satoshi Nishiguchi

In the field of optical illusion, reverse perspective is used to draw a scene with perspective opposite to the actual one. When the viewing position changes, a farther object always seems to move toward and follow the viewer instead of receding. We therefore considered whether reverse perspective can be applied to a dynamic representation of computer animation using multiple combined monitors. In this research, we arranged three monitors in the shape of a corner cube and tested whether viewers perceive the concave corner of the cube as convex through the reverse-perspective illusion. Furthermore, we developed a virtual environment that lets us simulate the reverse-perspective illusion by changing the position, angle, and shape of the screen using a head-mounted display and controllers.


International Conference on Human-Computer Interaction | 2018

Generating Training Images Using a 3D City Model for Road Sign Detection

Ryuto Kato; Satoshi Nishiguchi; Wataru Hashimoto; Yasuharu Mizutani

To prevent traffic accidents caused by mistakes in checking road signs, methods have been developed for detecting road signs in images shot by an in-vehicle camera. However, deep learning, which is now frequently used, requires a large amount of training data, and it is difficult to photograph road signs from various directions in various places. In this research, we propose a method for generating training images for deep-learning-based road sign detection using a 3D urban model simulation. The appearance of road signs captured in the simulation depends on the distance and direction from the camera and the brightness of the scene. Applying these variations to Japanese road signs, we automatically generated 303,750 sign images and their mask areas and used them for training. Training YOLO detectors on these images yielded F-measures of 66.7% to 88.9% for some road sign class groups.
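The abstract's three variation axes (distance, viewing direction, scene brightness) amount to enumerating a parameter grid and rendering one image per combination. A minimal sketch of that enumeration follows; the grids and counts are illustrative, not the paper's actual values.

```python
import itertools

def augmentation_params(distances, yaw_angles, brightness_levels):
    """Enumerate rendering parameters for synthetic sign images: each
    combination would yield one training image (and its mask) rendered
    from the 3D city model."""
    return list(itertools.product(distances, yaw_angles, brightness_levels))

# Illustrative grids (not the paper's actual values).
distances = [5, 10, 20, 40]          # metres from the camera
yaws = range(-60, 61, 15)            # viewing direction, degrees
brightness = [0.5, 0.75, 1.0, 1.25]  # scene illumination scale

params = augmentation_params(distances, yaws, brightness)
print(len(params))  # 4 distances x 9 yaws x 4 levels = 144 images per sign
```

Because the grid is a Cartesian product, the image count multiplies quickly across sign classes, which is how a simulation can reach hundreds of thousands of labeled images without any manual photography.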


International Conference on Human-Computer Interaction | 2017

Projection Simulator to Support Design Development of Spherical Immersive Display

Wataru Hashimoto; Yasuharu Mizutani; Satoshi Nishiguchi

This research aims to develop a simulator that supports the construction of a spherical immersive display, a system that can provide a realistic sense of presence, as if the user exists in another space. In general, when developing such a display, the optical design of the projection system must take into account the distortion correction required on the dome screen. However, the accuracy of the optical system as actually manufactured is not guaranteed to match the simulation, and fine adjustment is again necessary when the display is used. In this research, we report on the development of a projection simulator that can perform optical-system adjustment and distortion correction simultaneously during the optical design of the projection system.


International Conference on Distributed Ambient and Pervasive Interactions | 2015

Measuring the Arrangement of Multiple Information Devices by Observing Their User's Face

Saori Kikutani; Koh Kakusho; Takeshi Okadome; Masaaki Iiyama; Satoshi Nishiguchi

We propose to measure the 3D arrangement of multiple portable information devices operated by a single user from facial images captured by the cameras installed on those devices. Since it has become quite usual to use multiple information devices at the same time, previous works have proposed various styles of cooperation among devices for data transmission and so on. Other previous works coordinate the screens so that they share the role of displaying content larger than each screen. Those works obtain the 2D tiled arrangement of the screens by detecting their contacts with each other using sensing hardware on their edges. Our method estimates the arrangement of the devices in various 3D positions and orientations relative to the user's face from its appearance in the image captured by the camera on each device.


International Conference on Distributed Ambient and Pervasive Interactions | 2015

Estimating Positions of Students in a Classroom from Camera Images Captured by the Lecturer's PC

Junki Nishikawa; Koh Kakusho; Masaaki Iiyama; Satoshi Nishiguchi; Masayuki Murakami

We propose to estimate the position of each student in a classroom by observing the classroom with a camera attached to the lecturer's notebook or tablet PC. The position of each student is useful for continuously observing his/her learning behavior during the lecture, as well as for taking attendance. Although there are many previous works in computer vision on estimating human positions from camera images, the arrangement of people in a classroom is quite different from usual scenes: since students sit in closely spaced seats, their regions overlap heavily in camera images. To cope with this difficulty, we keep observing students to capture their faces once they appear, and recover their positions in the classroom using the geometric constraint that those positions lie on a plane parallel to the floor.
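The planar constraint is what makes single-camera recovery possible: if every face lies on one plane parallel to the floor, the pixel row of a detected face fixes its depth under a pinhole camera model. A sketch of that back-projection follows; the function name and all numbers are invented, and a real system would also need the camera's calibration and tilt.

```python
def face_position_on_plane(u, v, f, cx, cy, dy):
    """Back-project a detected face at pixel (u, v) onto the plane of
    faces, assumed parallel to the floor at vertical offset dy below
    the camera (pinhole model, focal length f in pixels, principal
    point (cx, cy)). Returns (X, Z): lateral offset and depth."""
    # With all faces on one plane, the pixel row fixes the depth:
    #   v - cy = f * dy / Z   =>   Z = f * dy / (v - cy)
    Z = f * dy / (v - cy)
    # The pixel column then gives the lateral position at that depth.
    X = (u - cx) * Z / f
    return X, Z

# A face detected 100 px below the image centre, camera 0.3 m above
# the face plane, focal length 500 px (illustrative numbers):
print(face_position_on_plane(u=420, v=340, f=500.0, cx=320, cy=240, dy=0.3))
```

Without the shared-plane assumption, a single image row/column pair leaves depth unconstrained; the constraint trades one unknown per student for one global assumption about seated head height.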


International Symposium on Multimedia | 2010

Extraction of Mastication in Diet Based on Facial Deformation Pattern Descriptor

Kenzaburo Miyawaki; Satoshi Nishiguchi; Mutsuo Sano

In this paper, we describe a method for extracting mastication from image sequences. Mastication is the first step of eating and is very important. However, people no longer need strong mastication muscles, because advances in cooking and food-processing technology have made foods softer. The weakening mastication ability of modern humans is about to become a serious problem and a risk factor for many diseases, so a method for analyzing mastication is essential. Several studies have been conducted in this field, but they were not applicable to everyday eating because of various restrictions. We propose a mastication analysis method using only a monocular camera. The key is the Facial Deformation Pattern Descriptor (FDPD), which represents a pattern of facial deformation. Using the FDPD, we successfully extracted mastication from video and developed a useful mastication analysis system for a healthy eating life.
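The FDPD itself is defined in the paper, but the core intuition, that chewing shows up as a strongly periodic facial-deformation signal, can be sketched with a simple autocorrelation test. Everything here (the function, the 0.5-2.5 Hz chewing band, the 0.5 threshold) is an invented stand-in, not the paper's descriptor.

```python
import math

def is_chewing(deformation, frame_rate=30.0, min_hz=0.5, max_hz=2.5):
    """Classify a facial-deformation time series as mastication if it is
    strongly periodic in a typical chewing-frequency band (a much
    simplified stand-in for FDPD-based extraction)."""
    n = len(deformation)
    mean = sum(deformation) / n
    c = [d - mean for d in deformation]
    var = sum(x * x for x in c)
    # Only consider lags whose repetition rate lies in the chewing band.
    lo = max(1, int(frame_rate / max_hz))
    hi = min(n // 2, int(frame_rate / min_hz))
    best_r = 0.0
    for lag in range(lo, hi + 1):
        # Normalized autocorrelation at this lag.
        r = sum(c[i] * c[i + lag] for i in range(n - lag)) / var
        best_r = max(best_r, r)
    return best_r > 0.5

# A 1.5 Hz oscillation sampled at 30 fps looks like chewing.
chew = [math.sin(2 * math.pi * 1.5 * t / 30.0) for t in range(90)]
print(is_chewing(chew))  # -> True
```

Restricting the lag search to the chewing band is what keeps slow head motion or fast flicker from being mistaken for mastication in this toy version.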

Collaboration


An overview of Satoshi Nishiguchi's collaborations.

Top Co-Authors

Koh Kakusho, Kwansei Gakuin University
Mutsuo Sano, Osaka Institute of Technology
Wataru Hashimoto, Osaka Institute of Technology
Kenzaburo Miyawaki, Osaka Institute of Technology
Hiroaki Mori, Osaka Institute of Technology