Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shoichiro Iwasawa is active.

Publication


Featured research published by Shoichiro Iwasawa.


Computer Vision and Pattern Recognition | 1997

Real-time estimation of human body posture from monocular thermal images

Shoichiro Iwasawa; Kazuyuki Ebihara; Jun Ohya; Shigeo Morishima

This paper introduces a new real-time method to estimate the posture of a human from thermal images acquired by an infrared camera regardless of the background and lighting conditions. Distance transformation is performed for the human body area extracted from the thresholded thermal image for the calculation of the center of gravity. After the orientation of the upper half of the body is obtained by calculating the moment of inertia, significant points such as the top of the head and the tips of the hands and feet are heuristically located. In addition, the elbow and knee positions are estimated from the detected (significant) points using a genetic algorithm-based learning procedure. The experimental results demonstrate the robustness of the proposed algorithm and real-time (faster than 20 frames per second) performance.
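
The centroid-and-orientation step described here can be illustrated with standard image moments. The sketch below is not the authors' implementation; it assumes a pre-thresholded binary silhouette is already available and only shows how a distance transform, center of gravity, and principal-axis orientation might be computed with OpenCV.

```python
# Minimal sketch of the center-of-gravity / orientation step described above.
# Not the authors' code: it assumes a binary silhouette image is given.
import cv2
import numpy as np

def body_centroid_and_orientation(silhouette: np.ndarray):
    """silhouette: uint8 binary image (255 = body pixels, 0 = background)."""
    # Distance transform of the extracted body region.
    dist = cv2.distanceTransform(silhouette, cv2.DIST_L2, 5)

    # Center of gravity from zeroth- and first-order image moments.
    m = cv2.moments(silhouette, binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # Principal-axis orientation from second-order central moments
    # (the "moment of inertia" of the body region).
    theta = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
    return (cx, cy), theta, dist
```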


People first | 1999

Real-time, 3D estimation of human body postures from trinocular images

Shoichiro Iwasawa; Jun Ohya; Kazuhiko Takahashi; Tatsumi Sakaguchi; Shinjiro Kawato; Kazuyuki Ebihara; Shigeo Morishima

This paper proposes a new real-time method for estimating human postures in 3D from trinocular images. In this method, upper-body orientation detection and a heuristic contour analysis are performed on the human silhouettes extracted from the trinocular images so that representative points such as the top of the head can be located. The major joint positions are estimated based on a genetic algorithm-based learning procedure. 3D coordinates of the representative points and joints are then obtained from two of the views by evaluating the appropriateness of the three views. The proposed method, implemented on a personal computer, runs in real time (30 frames/second). Experimental results show high estimation accuracies and the effectiveness of the view selection process.
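
The view-selection and triangulation idea can be sketched as follows. The appropriateness score used here is a hypothetical placeholder, and the projection matrices are assumed to come from a prior camera calibration; this is not the authors' implementation.

```python
# Illustrative sketch: pick the two most appropriate of three views for a
# given body feature and triangulate its 3D position.
import cv2
import numpy as np

def triangulate_from_best_pair(P, pts2d, scores):
    """P: list of three 3x4 camera projection matrices.
    pts2d: list of three (2,) image points of the same body feature.
    scores: per-view appropriateness scores (higher is better, placeholder)."""
    # Keep the two views judged most appropriate for this feature.
    i, j = sorted(np.argsort(scores)[-2:])
    x_i = np.asarray(pts2d[i], dtype=float).reshape(2, 1)
    x_j = np.asarray(pts2d[j], dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P[i], P[j], x_i, x_j)  # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()                  # Euclidean 3D point
```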


Proceedings of SPIE | 2012

3D image quality of 200-inch glasses-free 3D display system

Masahiro Kawakita; Shoichiro Iwasawa; M. Sakai; Yasuyuki Haino; Masahito Sato; Naomi Inoue

We have proposed a glasses-free three-dimensional (3D) display that shows 3D images on a large screen using multiple projectors and an optical screen consisting of a special diffuser film combined with a large condenser lens. To achieve high-presence communication with natural large-screen 3D images, we numerically analyze the factors that degrade image quality as the image size increases. A major factor determining 3D image quality is the arrangement of the component units, such as the projector array and condenser lens, as well as the characteristics of the diffuser film. Based on the numerical results, we design and fabricate a prototype 200-inch glasses-free 3D display system. We select a suitable diffuser film and combine it with an optimally designed condenser lens. We use 57 high-definition projector units to obtain a viewing angle of 13.5°. The prototype system can display glasses-free 3D images of a life-size car using natural parallax images.
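
As a quick sanity check of the quoted figures (57 projector units spanning a 13.5° viewing angle), the angular pitch per view and the width of the viewing zone at an assumed distance can be computed directly; the 5 m viewing distance below is a hypothetical value, not from the paper.

```python
# Back-of-the-envelope check of the figures quoted above.
import math

NUM_PROJECTORS = 57
VIEWING_ANGLE_DEG = 13.5

pitch_deg = VIEWING_ANGLE_DEG / NUM_PROJECTORS         # angular pitch per view
print(f"angular pitch per view: {pitch_deg:.3f} deg")  # ~0.237 deg

def viewing_zone_width(distance_m: float) -> float:
    """Horizontal width of the viewing zone at a given distance (assumed geometry)."""
    return 2 * distance_m * math.tan(math.radians(VIEWING_ANGLE_DEG / 2))

print(f"zone width at 5 m: {viewing_zone_width(5.0):.2f} m")  # ~1.18 m
```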


IEEE International Conference on Automatic Face and Gesture Recognition | 2000

Human body postures from trinocular camera images

Shoichiro Iwasawa; Jun Ohya; Kazuhiko Takahashi; Tatsumi Sakaguchi; Kazuyuki Ebihara; Shigeo Morishima

This paper proposes a new real-time method for estimating human postures in 3D from trinocular images. In this method, upper-body orientation detection and a heuristic contour analysis are performed on the human silhouettes extracted from the trinocular images so that representative points such as the top of the head can be located. The major joint positions are estimated based on a genetic algorithm-based learning procedure. 3D coordinates of the representative points and joints are then obtained from two of the views by evaluating the appropriateness of the three views. The proposed method, implemented on a personal computer, runs in real time. Experimental results show high estimation accuracies and the effectiveness of the view selection process.


IEEE International Conference on Automatic Face and Gesture Recognition | 1998

Real-time human posture estimation using monocular thermal images

Shoichiro Iwasawa; Kazuyuki Ebihara; Jun Ohya; Shigeo Morishima

This paper introduces a new real-time method to estimate the posture of a human from thermal images acquired by an infrared camera regardless of the background and lighting conditions. Distance transformation is performed for the human body area extracted from the thresholded thermal image for the calculation of the center of gravity. After the orientation of the upper half of the body is obtained by calculating the moment of inertia, significant points such as the top of the head and the tips of the hands and feet are heuristically located. In addition, the elbow and knee positions are estimated from the detected (significant) points using a genetic algorithm-based learning procedure. The experimental results demonstrate the robustness of the proposed algorithm and real-time (faster than 20 frames per second) performance.


Ubiquitous Computing | 2007

Collaborative capturing, interpreting, and sharing of experiences

Yasuyuki Sumi; Sadanori Ito; Tetsuya Matsuguchi; Sidney S. Fels; Shoichiro Iwasawa; Kenji Mase; Kiyoshi Kogure; Norihiro Hagita

This paper proposes a notion of interaction corpus, a captured collection of human behaviors and interactions among humans and artifacts. Digital multimedia and ubiquitous sensor technologies create a venue to capture and store interactions that are automatically annotated. A very large-scale accumulated corpus provides an important infrastructure for a future digital society for both humans and computers to understand verbal/non-verbal mechanisms of human interactions. The interaction corpus can also be used as a well-structured stored experience, which is shared with other people for communication and creation of further experiences. Our approach employs wearable and ubiquitous sensors, such as video cameras, microphones, and tracking tags, to capture all of the events from multiple viewpoints simultaneously. We demonstrate an application of generating a video-based experience summary that is reconfigured automatically from the interaction corpus.
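
One way to picture an entry of such an interaction corpus is as a time-stamped, multi-sensor record carrying automatic annotations. The schema below is purely illustrative; none of the field names are taken from the paper.

```python
# Hypothetical schema for one entry of an "interaction corpus": a time-stamped
# event seen from multiple sensors with automatic annotations attached.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InteractionEvent:
    start_ms: int                      # event start time
    end_ms: int                        # event end time
    participants: List[str]            # IDs of humans/artifacts involved
    sensors: List[str]                 # e.g. ["head_camera_A", "room_mic_1", "ir_tag_12"]
    annotations: Dict[str, str] = field(default_factory=dict)  # e.g. {"type": "joint_attention"}

# A one-entry corpus, just to show the shape of the data.
corpus: List[InteractionEvent] = [
    InteractionEvent(1000, 4000, ["userA", "exhibitB"],
                     ["head_camera_A", "ir_tag_B"],
                     {"type": "gaze_at_object"}),
]
```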


Digital Holography and Three-Dimensional Imaging, paper DM2A.1 | 2013

Glasses-free 200-view 3D Video System for Highly Realistic Communication

Masahiro Kawakita; Shoichiro Iwasawa; Robert Lopez-Gulliver; Mao Makino; Masaki Chikama; Mehrdad Panahpour Tehrani; Akio Ishikawa; Naomi Inoue

We investigate highly realistic communication systems using a super-multi-view three-dimensional (3D) video system. We propose and develop a glasses-free 200-view 3D display using almost 200 high-definition (HD) projector units to reconstruct natural, life-size 3D moving objects such as cars and humans. We are also developing a 3D capturing system with HD camera units and researching multi-view video transmission techniques for demonstration experiments with this highly realistic communication system.


International Symposium on Wearable Computers | 2005

Low-stress wearable computer system for capturing human experience

Megumu Tsuchikawa; Shoichiro Iwasawa; Sadanori Ito; Kiyoshi Kogure; Norihiro Hagita; Kenji Mase; Yasuyuki Sumi

This paper proposes an experience-capturing system that features low-stress wearable computers and secure data collection. Several design requirements are discussed based on user comments concerning the preceding version of the prototype. We designed a small, lightweight wearable computer unit for capturing human-human and human-object interaction with microphones, video cameras, infrared LED tags, and trackers. We describe the design details and specifications of the prototype.


Conference on Computer Supported Cooperative Work | 2012

Analyzing the structure of the emergent division of labor in multiparty collaboration

Noriko Suzuki; Tosirou Kamiya; Ichiro Umata; Sadanori Ito; Shoichiro Iwasawa

In our daily life, the interactive roles of leaders, followers, and coordinators tend to emerge from multiparty collaboration. The primary purpose of this study is to automatically predict the leading role in multiparty interaction by ubiquitous computing techniques. Even though the leading role has been predicted for an entire task, there has been little focus on evaluating how roles are reorganized during a task. To find the verbal and nonverbal cues that might predict roles, we asked neutral third parties to select the participant playing the leading role in an assembly task. We examined the correlation between behavioral data gathered during a task and third-party evaluations of the leading role player in terms of temporal alterations. The preliminary results suggest that task-oriented utterances and verification behaviors regarding progress status contribute to the prediction of the emerging and reorganized leader. Moreover, we discuss the implications of our findings for the design of applications that can enhance multiparty collaboration.
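
The kind of per-window analysis described here, relating behavioral counts to third-party leadership ratings, might look like the following sketch; the arrays contain made-up illustrative values, not data from the study.

```python
# Illustrative sketch: correlate per-window behavioral counts with third-party
# leadership ratings for one participant. Values are invented for illustration.
import numpy as np
from scipy.stats import pearsonr

task_utterances = np.array([3, 7, 5, 9, 2, 8])  # task-oriented utterances per window
leader_rating   = np.array([1, 4, 3, 5, 1, 4])  # third-party leading-role score per window

r, p = pearsonr(task_utterances, leader_rating)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```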


Proceedings of SPIE | 2011

3D video capturing for multiprojection type 3D display

Masahiro Kawakita; Sabri Gurbuz; Shoichiro Iwasawa; Roberto Lopez-Gulliver; Sumio Yano; Hiroshi Ando; Naomi Inoue

We have already developed glasses-free three-dimensional (3-D) displays using multiple projectors and a special diffuser screen, resulting in a highly realistic communication system. The system can display large 3-D images of 70 to 200 inches with full high-definition video quality. The displayed 3-D images were, however, only computer-generated graphics or still images of actual objects. In this work, we studied a 3-D video capturing method for our multiprojection 3-D display. We analyzed the optimal arrangement of cameras for the display and the influence of calibration error on image quality. In the experiments, we developed a prototype multi-camera system using 30 high-definition video cameras. The captured images were corrected via image processing optimized for the display. We successfully captured and displayed, for the first time, 3-D video of actual moving objects on our glasses-free 3-D video system.
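
The per-camera correction step mentioned here could, in the simplest case, be a calibrated perspective warp; the sketch below is an assumption-laden illustration of that idea, not the authors' processing pipeline.

```python
# Minimal sketch: warp a captured camera view with a calibration homography so
# it matches the geometry the projector array expects. The homography source
# and output size are hypothetical.
import cv2
import numpy as np

def correct_view(frame: np.ndarray, H: np.ndarray, out_size=(1920, 1080)) -> np.ndarray:
    """frame: captured camera image; H: 3x3 homography from calibration."""
    return cv2.warpPerspective(frame, H, out_size)

# Usage example with an identity homography (no correction applied).
dummy = np.zeros((1080, 1920, 3), dtype=np.uint8)
corrected = correct_view(dummy, np.eye(3))
```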

Collaboration


Dive into Shoichiro Iwasawa's collaborations.

Top Co-Authors

Sadanori Ito (Tokyo University of Agriculture and Technology)
Yasuyuki Sumi (Future University Hakodate)
Masahiro Kawakita (National Institute of Information and Communications Technology)
Ichiro Umata (National Institute of Information and Communications Technology)
Hiroshi Ando (National Institute of Information and Communications Technology)
Norihiro Hagita (Nara Institute of Science and Technology)