Satoshi Kagami
Tokyo University of Science
Publication
Featured research published by Satoshi Kagami.
intelligent robots and systems | 2006
Yoko Sasaki; Satoshi Kagami; Hiroshi Mizoguchi
The paper describes a 2D sound source mapping system for a mobile robot. We developed a multiple sound source localization method for a mobile robot with a 32-channel concentric microphone array. The system can separate multiple moving sound sources using direction localization. Directional localization and separation of sound sources of different pressures are achieved using the delay and sum beam forming (DSBF) and frequency band selection (FBS) algorithms. Sound sources were mapped using a wheeled robot equipped with the microphone array. The robot localizes sound directions on the move and estimates sound source positions using triangulation. To allow for moving sound sources, the system sets a time limit and uses only the last few seconds of data. By using the random sample consensus (RANSAC) algorithm for position estimation, we achieved 2D multiple sound source mapping from time-limited data with high accuracy. Moving sound source separation is also demonstrated experimentally with segments of the DSBF-enhanced signal derived from the localization process.
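The mapping step described here can be pictured with a short sketch: triangulate a 2D source position from bearing-only observations taken at different robot poses, with a RANSAC loop rejecting spurious bearings. The snippet below is a minimal illustration under those assumptions, not the authors' implementation; all function names, thresholds and the example geometry are made up.

```python
import numpy as np

def triangulate(poses, bearings):
    """Least-squares intersection of bearing rays.
    poses: (N, 2) robot positions, bearings: (N,) absolute angles in rad."""
    # Each ray defines a line n . x = n . p, with n the normal to the ray direction.
    n = np.stack([-np.sin(bearings), np.cos(bearings)], axis=1)
    b = np.einsum('ij,ij->i', n, poses)
    sol, *_ = np.linalg.lstsq(n, b, rcond=None)
    return sol

def ransac_source_position(poses, bearings, iters=200, thresh=np.deg2rad(3.0), seed=None):
    """Estimate a source position robustly against outlier bearings (RANSAC)."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(poses), size=2, replace=False)
        cand = triangulate(poses[idx], bearings[idx])
        pred = np.arctan2(cand[1] - poses[:, 1], cand[0] - poses[:, 0])
        err = np.abs(np.angle(np.exp(1j * (pred - bearings))))   # wrapped angular error
        inliers = err < thresh
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            best = triangulate(poses[inliers], bearings[inliers])
    return best

# Example: a source at (3, 2) observed from a moving robot, with one spurious bearing.
poses = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0], [1.5, 0.0], [2.0, 0.0]])
bearings = np.arctan2(2.0 - poses[:, 1], 3.0 - poses[:, 0])
bearings[2] += 0.5                      # simulated outlier detection
print(ransac_source_position(poses, bearings))
```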
intelligent robots and systems | 2005
Yuki Tamai; Yoko Sasaki; Satoshi Kagami; Hiroshi Mizoguchi
This paper describes a three-ring microphone array that estimates the horizontal and vertical direction and distance of sound sources and separates multiple sound sources for mobile robot audition. The arrangement of microphones was simulated, and an optimized three-ring pattern was implemented with 32 microphones. Sound localization and separation are achieved by delay and sum beam forming (DSBF) and frequency band selection (FBS). From on-line experiments on horizontal and vertical sound localization, we confirmed that one or two sound sources could be localized with an error of about 5 degrees and 200 to 300 mm at a distance of about 1 m. Off-line sound separation experiments were evaluated using the power spectra of the separated sounds in each frequency band, and we confirmed that an appropriate frequency band could be selected by DSBF and FBS. The system can separate three speech sources of different pressures without one drowning out the others.
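As a rough illustration of the delay and sum beam forming (DSBF) step used here, the following sketch steers a small ring array toward a candidate direction by delaying and summing the channel signals, then scans directions to find the power peak. The array geometry, sampling rate and signals are assumptions for the example, not the authors' configuration.

```python
import numpy as np

def dsbf(signals, mic_xy, azimuth, fs, c=343.0):
    """Delay-and-sum beamforming toward a far-field direction.
    signals: (M, T) channel waveforms, mic_xy: (M, 2) mic positions in metres,
    azimuth: steering angle in rad, fs: sampling rate in Hz."""
    direction = np.array([np.cos(azimuth), np.sin(azimuth)])
    delays = mic_xy @ direction / c                  # arrival-time advance per mic (s)
    shifts = np.round(delays * fs).astype(int)       # integer-sample approximation
    out = np.zeros(signals.shape[1])
    for sig, s in zip(signals, shifts):
        out += np.roll(sig, s)                       # delay the early mics to align
    return out / len(signals)

# Example: 8-mic ring of radius 0.15 m, a 1 kHz tone arriving from 40 degrees.
fs, M, r = 16000, 8, 0.15
angles = 2 * np.pi * np.arange(M) / M
mic_xy = r * np.stack([np.cos(angles), np.sin(angles)], axis=1)
t = np.arange(fs) / fs
src_dir = np.deg2rad(40.0)
advances = mic_xy @ np.array([np.cos(src_dir), np.sin(src_dir)]) / 343.0
signals = np.stack([np.sin(2 * np.pi * 1000 * (t + d)) for d in advances])

powers = [np.mean(dsbf(signals, mic_xy, np.deg2rad(a), fs) ** 2) for a in range(0, 360, 5)]
print("peak direction:", 5 * int(np.argmax(powers)), "deg")
```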
systems, man and cybernetics | 2007
Kazuhiko Shinagawa; Yutaka Amemiya; Hiroshi Takemura; Satoshi Kagami; Hiroshi Mizoguchi
This paper describes the three-dimensional simulation and measurement of the sound pressure distribution generated by a 120-channel plane loudspeaker array. The authors study the locally high sound pressure distribution that such an array can generate. The three-dimensional sound pressure distribution of the array was first simulated; the simulation showed that the 120-channel plane loudspeaker array can generate a localized high-pressure region, and the array was built based on this result. The actual sound pressure distribution was then measured with a 64-channel microphone measurement system, confirming that the array can produce a three-dimensional local high sound pressure area. Finally, the simulated and measured distributions were compared, and their agreement shows that the simulation is suitable for modeling the 120-channel plane loudspeaker array.
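A simplified way to picture this kind of simulation is to superpose the fields of many delayed point sources and look for where the pressure magnitude peaks. The sketch below uses an assumed monopole model with made-up geometry and focusing delays; it is an illustration of the principle, not the authors' simulator.

```python
import numpy as np

def focusing_delays(src_xyz, focus, c=343.0):
    """Drive delays that make all loudspeaker wavefronts arrive at 'focus' together."""
    dist = np.linalg.norm(src_xyz - focus, axis=1)
    return (dist.max() - dist) / c

def pressure_field(src_xyz, delays, field_xyz, freq, c=343.0):
    """Complex pressure of delayed monopole sources at the given field points."""
    k = 2 * np.pi * freq / c
    r = np.linalg.norm(field_xyz[:, None, :] - src_xyz[None, :, :], axis=2)   # (P, N)
    phasor = np.exp(-1j * (k * r + 2 * np.pi * freq * delays[None, :]))
    return np.sum(phasor / np.maximum(r, 1e-2), axis=1)

# Example: a 12 x 10 planar array (120 sources) in the z = 0 plane, 5 cm spacing,
# focused 1 m in front of the array centre, evaluated along the focal axis.
xs, ys = np.meshgrid(np.arange(12) * 0.05, np.arange(10) * 0.05)
src = np.stack([xs.ravel(), ys.ravel(), np.zeros(120)], axis=1)
src -= src.mean(axis=0)                       # centre the array at the origin
focus = np.array([0.0, 0.0, 1.0])
delays = focusing_delays(src, focus)

z = np.linspace(0.2, 2.0, 181)
line = np.stack([np.zeros_like(z), np.zeros_like(z), z], axis=1)
p = np.abs(pressure_field(src, delays, line, freq=2000.0))
print("pressure magnitude peaks at z = %.2f m" % z[np.argmax(p)])
```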
ieee sensors | 2004
Satoshi Kagami; Y. Takahashi; Koichi Nishiwaki; M. Mochimaru; Hiroshi Mizoguchi
This paper describes a 32 × 32 matrix-scan distributed force sensor for a humanoid robot foot, developed for a 1 kHz sampling rate. It is an analog version of a key-matrix-scan-type sensor, a scheme that has seen many efforts toward distributed tactile sensing. A thin (0.6 mm) force-sensing resistance rubber sheet was developed for this purpose in order to achieve high-speed sensing. Each sensing area is 4.2 × 7.0 mm and can measure approximately 0.25-20 N. The walking cycle of a humanoid robot, like that of a human, is about 0.4-0.8 s, and the double-support phase is about 0.1-0.15 s. The sensor is used for biped walk stabilization, so high-speed input is important. A Schottky diode is adopted for each sensing element to prevent interference from other sensing areas. An air-flow based calibration system compensates for analog differences among the circuits and elements. The sensor system, evaluation results, and experiments using a humanoid robot are described.
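The matrix-scan readout can be sketched as driving one row at a time, sampling all columns, and mapping each raw reading through a per-cell calibration. The code below is only an illustrative simulation: the read_column_adc stand-in, its linear force-to-counts model, and the calibration values are assumptions, not the actual sensor interface or characteristics.

```python
import numpy as np

ROWS, COLS = 32, 32

def read_column_adc(row, applied_force):
    """Stand-in for the hardware: raw ADC counts for one driven row.
    (Assumption: counts grow roughly linearly with force, plus a little noise.)"""
    raw = 100.0 + 45.0 * applied_force[row] + np.random.normal(0.0, 2.0, COLS)
    return np.clip(raw, 0, 1023)

def scan_frame(applied_force, gain, offset):
    """Drive each row in turn, sample every column, apply per-cell calibration."""
    frame = np.zeros((ROWS, COLS))
    for r in range(ROWS):
        raw = read_column_adc(r, applied_force)
        frame[r] = gain[r] * (raw - offset[r])       # counts -> newtons, per cell
    return frame

# Example: idealised per-cell calibration and a 5 N load on a 4 x 4 patch.
gain = np.full((ROWS, COLS), 1.0 / 45.0)
offset = np.full((ROWS, COLS), 100.0)
load = np.zeros((ROWS, COLS))
load[10:14, 20:24] = 5.0
frame = scan_frame(load, gain, offset)
print("mean force over loaded patch: %.2f N" % frame[10:14, 20:24].mean())
```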
systems, man and cybernetics | 2002
Hiroshi Mizoguchi; Tomohiko Kanamori; Satoshi Kagami; K. Hirao; Masaru Tanaka; Takaomi Shigehara; Taketoshi Mishima
This paper presents a working prototype of a novel computer-human interface, named the invisible messenger. It integrates visual face detection and tracking with speaker array signal processing. With the speaker array it is possible to form an acoustic focus at an arbitrary location measured by the face tracker, so the implemented system can whisper in a person's ear as if an invisible virtual messenger were standing beside them. The authors describe both the ideas behind the system and the key technology for its implementation, and report experiments conducted to confirm its effectiveness. The experimental results demonstrate the effectiveness of the proposed idea.
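The core of forming an acoustic focus at a tracked position is computing, for each loudspeaker, the delay that makes all emitted wavefronts arrive at that point at the same time. The snippet below is a minimal sketch of that computation; the array geometry, sampling rate and the "tracked" face position are made-up values, not the system's actual parameters.

```python
import numpy as np

def focus_delays(speaker_xyz, target, fs, c=343.0):
    """Per-speaker sample delays so all wavefronts arrive at 'target' together."""
    dist = np.linalg.norm(speaker_xyz - target, axis=1)
    delay_s = (dist.max() - dist) / c        # the farthest speaker plays first
    return np.round(delay_s * fs).astype(int)

def render_focused(signal, speaker_xyz, target, fs):
    """Return one delayed copy of 'signal' per speaker (zero-padded, equal length)."""
    delays = focus_delays(speaker_xyz, target, fs)
    out = np.zeros((len(speaker_xyz), len(signal) + delays.max()))
    for i, d in enumerate(delays):
        out[i, d:d + len(signal)] = signal
    return out

# Example: an 8-speaker horizontal line array, steered toward a face position
# reported by a (hypothetical) visual tracker.
fs = 16000
speakers = np.stack([np.arange(8) * 0.1, np.zeros(8), np.zeros(8)], axis=1)
face = np.array([0.6, 1.5, 0.2])             # assumed tracked position in metres
tone = np.sin(2 * np.pi * 440 * np.arange(fs // 2) / fs)
channels = render_focused(tone, speakers, face, fs)
print(channels.shape)                        # one output channel per speaker
```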
intelligent robots and systems | 2008
Yoko Sasaki; Satoshi Kagami; Hiroshi Mizoguchi; Tadashi Enomoto
This paper describes a speech recognition system that detects basic voice commands for a mobile robot operating in a home environment. The system recognizes arbitrarily timed speech, together with position information, in a noisy housing environment. A microphone array attached to the ceiling localizes the sound source direction in azimuth and elevation, then separates multiple sound sources using the delay and sum beam forming (DSBF) and frequency band selection (FBS) algorithms. We implement the sound localization and separation method on our 32-channel microphone array, and the separated sound source is recognized using an open-source speech recognizer. These localization, separation, and recognition functions run online in the real world. We define four indices to evaluate the performance of the recognition system, and experiments under varied conditions confirm its effectiveness in noisy environments and with distant sound sources. Finally, an application as a mobile robot interface is reported.
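The frequency band selection step can be pictured as a per-bin mask: keep only the frequency bins in which the beam steered at the target direction carries more power than beams steered at competing directions. The sketch below shows that masking on short-time spectra as a simplified binary mask over assumed inputs; it is not the authors' implementation, and the random spectra only stand in for beamformer outputs.

```python
import numpy as np

def fbs_mask(target_spec, other_specs, margin_db=0.0):
    """Binary mask keeping bins where the target-direction beam dominates.
    target_spec: (F, T) complex STFT of the beam steered at the target,
    other_specs: (K, F, T) STFTs of beams steered at competing directions."""
    target_pow = np.abs(target_spec) ** 2
    other_pow = np.max(np.abs(other_specs) ** 2, axis=0)
    return target_pow > other_pow * 10 ** (margin_db / 10.0)

def fbs_separate(target_spec, other_specs):
    """Zero out the bins dominated by other directions, keep the rest."""
    return target_spec * fbs_mask(target_spec, other_specs)

# Example with random spectra standing in for beamformer outputs (illustration only).
rng = np.random.default_rng(0)
target = rng.normal(size=(257, 100)) + 1j * rng.normal(size=(257, 100))
others = rng.normal(size=(3, 257, 100)) + 1j * rng.normal(size=(3, 257, 100))
sep = fbs_separate(target, others)
print("kept %.1f%% of bins" % (100.0 * np.count_nonzero(sep) / sep.size))
```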
ieee sensors | 2007
Masato Takahashi; Koichi Nishiwaki; Satoshi Kagami; Hiroshi Mizoguchi
Biped walking causes translational acceleration and vibration at the torso, where an attitude sensor system is usually mounted, and measurement accuracy tends to suffer from these disturbances. We propose an attitude estimation method that takes translational acceleration into account. The planned translational acceleration is derived from the generated walking pattern and subtracted from the output of the accelerometers, so that the effect of acceleration caused by robot motion is cancelled when the direction of gravitational acceleration is estimated. An attitude measuring system consisting of three fiber optic gyroscopes and three servo accelerometers was developed. The system was attached to the full-size humanoid HRP-2, the proposed method was implemented, and its performance was evaluated during normal walking of HRP-2.
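The compensation described here amounts to subtracting the planned translational acceleration, expressed in the body frame, from the accelerometer reading and treating the remainder as the gravity reaction from which tilt is computed. The code below is a minimal sketch of that single step with made-up example values; the real system additionally fuses gyroscope data, which is not shown.

```python
import numpy as np

def tilt_from_accel(accel_meas, planned_accel_body):
    """Roll and pitch from an accelerometer reading after removing planned motion.
    Body frame assumed: x forward, y left, z up; pitch positive when x tilts upward.
    accel_meas, planned_accel_body: body-frame values in m/s^2."""
    # Accelerometer output = motion acceleration + reaction to gravity;
    # subtracting the planned motion leaves (approximately) the gravity reaction.
    f_grav = accel_meas - planned_accel_body
    roll = np.arctan2(f_grav[1], f_grav[2])
    pitch = np.arctan2(f_grav[0], np.hypot(f_grav[1], f_grav[2]))
    return roll, pitch

# Example: the robot accelerates forward at 1.5 m/s^2 while pitched 3 degrees nose-up.
g = 9.81
pitch_true = np.deg2rad(3.0)
planned = np.array([1.5, 0.0, 0.0])
accel = planned + np.array([g * np.sin(pitch_true), 0.0, g * np.cos(pitch_true)])
roll, pitch = tilt_from_accel(accel, planned)
print("estimated pitch: %.2f deg" % np.rad2deg(pitch))    # about 3.00
```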
systems, man and cybernetics | 2005
Yoko Sasaki; Yuki Tamai; Satoshi Kagami; Hiroshi Mizoguchi
The purpose of this paper is to report a microphone array arrangement devised for a mobile robot and to develop a robotic audition system for recognizing the environment. The paper first describes the sum and delay beam forming (SDBF) algorithm and its common problem: side lobes. The array we developed shows smaller side lobes when beam forming, providing high-quality localization and separation of multiple sound sources. We then built a sound source mapping system using a wheeled robot equipped with the microphone array. The robot localizes sound directions on the move and estimates sound positions using triangulation; accumulating data over time yields high accuracy. The system can estimate the positions of three sounds of different pressures with a position error of about 200 mm. Moreover, the high-quality sound source separation has proved useful for improving speech recognition.
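The side-lobe behaviour that motivates the array arrangement can be checked numerically: for a candidate geometry, compute the delay-and-sum beam pattern at a given frequency and inspect how high the secondary peaks are. The snippet below does this for a simple ring layout; the geometry, frequency and the angular window used to exclude the main lobe are illustrative assumptions, not the arrangement reported in the paper.

```python
import numpy as np

def beam_pattern(mic_xy, steer_deg, freq, scan_deg, c=343.0):
    """Far-field delay-and-sum response of a planar array steered at steer_deg,
    evaluated over the directions in scan_deg (dB, normalised to the main lobe)."""
    k = 2 * np.pi * freq / c
    def unit(deg):
        rad = np.deg2rad(deg)
        return np.stack([np.cos(rad), np.sin(rad)], axis=-1)
    steer_phase = np.exp(1j * k * mic_xy @ unit(steer_deg))        # steering weights
    scan_phase = np.exp(-1j * k * mic_xy @ unit(scan_deg).T)       # (M, D) plane waves
    response = np.abs(steer_phase @ scan_phase) / len(mic_xy)
    return 20 * np.log10(response / response.max())

# Example: 16-microphone ring of radius 0.2 m, steered at 0 deg, evaluated at 2 kHz.
M, r = 16, 0.2
ang = 2 * np.pi * np.arange(M) / M
mic_xy = r * np.stack([np.cos(ang), np.sin(ang)], axis=1)
scan = np.arange(-180, 180, 1)
pattern = beam_pattern(mic_xy, 0.0, 2000.0, scan)
side = pattern[np.abs(scan) > 30]              # everything well away from the main lobe
print("highest side lobe: %.1f dB" % side.max())
```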
international conference on advanced intelligent mechatronics | 2007
Yoko Sasaki; Satoshi Kagami; Hiroshi Mizoguchi
The paper proposes a multiple sound source localization method using the directional pattern of a microphone array. Directional localization is achieved by delay and sum beam forming (DSBF) together with a proposed main-lobe canceling (MLC) method based on the microphone directional pattern. The system subtracts higher-intensity sound sources using the directional pattern of the microphone array at each frequency, so that multiple sound sources of different pressures can be localized sequentially. We developed a 32-channel microphone array to demonstrate the efficacy of the proposed method. Designing the microphone array through beam forming simulation increases the resolution of the localization procedure and its robustness to ambient noise; the octagonal array we developed achieves lower side lobes during beam forming. We implemented MLC on a mobile robot equipped with the octagonal microphone array, and the experimental results show that the system provides accurate and robust localization of multiple different sound sources while the robot is moving.
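The main-lobe canceling idea can be sketched as an iterative deflation of the spatial power spectrum: find the strongest direction, subtract that source's predicted contribution (its scaled directional pattern), and repeat for the next source. The code below is a simplified 1-D illustration of that loop with an assumed Gaussian main-lobe shape and made-up data; it is not the exact formulation used in the paper.

```python
import numpy as np

def main_lobe_cancel(spatial_power, lobe_pattern, n_sources):
    """Iteratively pick the strongest direction and subtract its scaled lobe.
    spatial_power: (D,) beamformer output power over the scanned directions,
    lobe_pattern: (D,) array response to a unit source at direction index 0,
                  shifted circularly to model sources at other directions."""
    residual = spatial_power.astype(float).copy()
    found = []
    for _ in range(n_sources):
        d = int(np.argmax(residual))
        gain = residual[d] / lobe_pattern[0]          # estimated source power
        found.append((d, gain))
        residual -= gain * np.roll(lobe_pattern, d)   # deflate the spatial spectrum
        residual = np.clip(residual, 0.0, None)
    return found

# Example: two sources of different power at 40 and 130 degrees (1-degree grid).
D = 360
lobe = np.exp(-0.5 * (np.minimum(np.arange(D), D - np.arange(D)) / 8.0) ** 2)
power = 4.0 * np.roll(lobe, 40) + 1.0 * np.roll(lobe, 130)
print(main_lobe_cancel(power, lobe, n_sources=2))
```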
The Proceedings of JSME annual Conference on Robotics and Mechatronics (Robomec) | 2008
Philipp Michel; Joel E. Chestnutt; Satoshi Kagami; Koichi Nishiwaki; James J. Kuffner; Takeo Kanade
We have accelerated a robust model-based 3D tracking system by programmable graphics hardware to run online at frame-rate during operation of a humanoid robot and to efficiently auto-initialize. The tracker recovers the full 6 degree-of-freedom pose of viewable objects relative to the robot. Leveraging the computational resources of the GPU for perception has enabled us to increase our tracker’s robustness to the significant camera displacement and camera shake typically encountered during humanoid navigation. We have combined our approach with a footstep planner and a controller capable of adaptively adjusting the height of swing leg trajectories. The resulting integrated perception-planning-action system has allowed an HRP-2 humanoid robot to successfully and rapidly localize, approach and climb stairs, as well as to avoid obstacles during walking.
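At its core, model-based tracking of this kind scores a 6-DOF pose hypothesis by projecting the object model into the image and comparing it with what the camera observes; the GPU work is about doing that projection and comparison for many hypotheses quickly. The snippet below shows only the projection and scoring piece for a single hypothesis, with a pinhole camera, made-up intrinsics and point correspondences; it is a sketch of the general idea, not the tracker described here.

```python
import numpy as np

def project(points_obj, R, t, K):
    """Project 3D model points into the image for pose (R, t) and intrinsics K."""
    cam = points_obj @ R.T + t                    # object frame -> camera frame
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]                 # perspective divide

def reprojection_error(points_obj, observed_uv, R, t, K):
    """Mean pixel error of a pose hypothesis against observed image points."""
    return np.linalg.norm(project(points_obj, R, t, K) - observed_uv, axis=1).mean()

# Example: a unit cube model, a "true" pose, and a slightly perturbed hypothesis.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
R_true, t_true = np.eye(3), np.array([0.0, 0.0, 4.0])
observed = project(cube, R_true, t_true, K)

theta = np.deg2rad(2.0)                           # 2-degree error about the y axis
R_hyp = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
print("error: %.2f px" % reprojection_error(cube, observed, R_hyp, t_true, K))
```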
Collaboration
Dive into Satoshi Kagami's collaboration.
National Institute of Advanced Industrial Science and Technology