Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hidehiko Shishido is active.

Publications


Featured research published by Hidehiko Shishido.


Pacific-Rim Symposium on Image and Video Technology | 2013

A Trajectory Estimation Method for Badminton Shuttlecock Utilizing Motion Blur

Hidehiko Shishido; Itaru Kitahara; Yoshinari Kameda; Yuichi Ohta

To build a robust visual tracking method, it is important to consider issues such as low observation resolution and variation in the target object's shape. When a fast-moving object is captured by a video camera, motion blur is observed. This paper introduces a visual trajectory estimation method that uses blur characteristics in 3D space. We acquire a movement speed vector based on the shape of the motion-blur region. This method can extract both the position and the speed of the moving object from a single image frame and apply them to a visual tracking process using a Kalman filter. We estimated the 3D position of the object based on the information obtained from two different viewpoints, and we evaluated the proposed method by estimating the trajectory of a badminton shuttlecock in video sequences of a badminton game.
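The tracking idea lends itself to a compact sketch: measure both position and velocity from the blur region, then fuse them in a constant-velocity Kalman filter. Everything below (the 30 fps frame rate, the noise levels, and the `blur_measurement` helper) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

dt = 1.0 / 30.0                      # frame interval (assumed 30 fps)

# Constant-velocity model; state is [x, y, vx, vy].
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.eye(4)                        # blur yields position AND velocity
Q = np.eye(4) * 1e-3                 # process noise (assumed)
R = np.diag([2.0, 2.0, 5.0, 5.0])    # measurement noise (assumed)

def blur_measurement(centroid, blur_length, blur_angle):
    """Hypothetical helper: the blur region's centroid gives position;
    its principal-axis length and orientation give the per-frame
    displacement, i.e. an image-plane velocity."""
    vx = blur_length * np.cos(blur_angle) / dt
    vy = blur_length * np.sin(blur_angle) / dt
    return np.array([centroid[0], centroid[1], vx, vy])

def kalman_step(x, P, z):
    """One predict/update cycle of the standard Kalman filter."""
    x = F @ x                         # predict state
    P = F @ P @ F.T + Q               # predict covariance
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - H @ x)           # correct with measurement
    P = (np.eye(4) - K @ H) @ P       # update covariance
    return x, P
```

Measuring velocity directly (rather than inferring it from position differences) is what lets the filter remain stable when the shuttlecock abruptly changes speed.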


Augmented Human International Conference | 2016

3D Position Estimation of Badminton Shuttle Using Unsynchronized Multiple-View Videos

Hidehiko Shishido; Yoshinari Kameda; Itaru Kitahara; Yuichi Ohta

In this paper, we introduce a method to estimate the 3D position of a badminton shuttle using unsynchronized multiple-view videos. Object tracking in sports is studied as an application of computer vision to improve the tactics involved in those sports. This paper proposes a technique that stably estimates an object's position by using motion blur, which is usually treated as observational noise in previous work. A badminton shuttle varies widely in speed, follows an unpredictable trajectory, and appears very small in the image; thus it cannot be followed reliably by the human eye. We apply the proposed technique to badminton shuttle tracking to confirm its ability to augment human vision, and we expect it to contribute to augmented sports in the future.
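For the multi-view step, a standard building block is linear (DLT) triangulation of one 3D point from two calibrated views; because the videos here are unsynchronized, one 2D track must first be resampled to the other camera's timestamps. The sketch below shows both pieces under that assumption; the linear interpolation and all names are illustrative, not the paper's algorithm.

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of a single 3D point.
    P1, P2: 3x4 camera projection matrices; uv1, uv2: pixel coordinates."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # solution = right singular vector
    X = Vt[-1]
    return X[:3] / X[3]               # dehomogenize

def resample_track(t, times, uvs):
    """Hypothetical helper: linearly interpolate an unsynchronized 2D
    track (times, Nx2 uvs) to timestamp t of the other camera."""
    return np.array([np.interp(t, times, uvs[:, 0]),
                     np.interp(t, times, uvs[:, 1])])
```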


Proceedings of the 2018 Workshop on Audio-Visual Scene Understanding for Immersive Multimedia - AVSU'18 | 2018

Generation Method for Immersive Bullet-Time Video Using an Omnidirectional Camera in VR Platform

Oto Takeuchi; Hidehiko Shishido; Yoshinari Kameda; Hansung Kim; Itaru Kitahara

This paper proposes a method for generating immersive bullet-time video that continuously switches among images captured by multi-viewpoint omnidirectional cameras arranged around the subject. In ordinary bullet-time processing, a point of interest (POI) can be observed at the same screen position by applying a projective transformation to the captured multi-viewpoint images. However, the observable area is limited by the field of view of the capturing cameras, so a blank region is added to the displayed image, depending on the spatial relationship between the POI and the capturing camera. This seriously harms image quality (i.e., immersiveness). We solve this problem by applying omnidirectional cameras to bullet-time video production. Furthermore, by using a virtual reality platform for the calibration of the multi-viewpoint omnidirectional cameras and for the display of bullet-time video, fast and simple processing can be realized.
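A sketch of the geometric core, bringing the POI to the screen center by rendering a perspective view out of an equirectangular frame, is given below; with an omnidirectional source there is no field-of-view limit, so no blank region appears. The up-vector choice, nearest-neighbor sampling, and equirectangular convention are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def look_at_rotation(poi, cam_pos):
    """World-to-camera rotation that points the virtual camera at the POI."""
    z = poi - cam_pos
    z = z / np.linalg.norm(z)
    up = np.array([0.0, 1.0, 0.0])    # assumed world up direction
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z])        # rows: camera axes in world frame

def render_view(R, K, out_w, out_h, equirect):
    """Sample a perspective view from an equirectangular frame.
    K: virtual camera intrinsics (assumed known from calibration)."""
    j, i = np.meshgrid(np.arange(out_w), np.arange(out_h))
    pix = np.stack([j, i, np.ones_like(j)]).reshape(3, -1).astype(float)
    rays = R.T @ (np.linalg.inv(K) @ pix)         # rays in world frame
    rays = rays / np.linalg.norm(rays, axis=0)
    lon = np.arctan2(rays[0], rays[2])            # longitude of each ray
    lat = np.arcsin(np.clip(rays[1], -1.0, 1.0))  # latitude of each ray
    h, w = equirect.shape[:2]
    u = ((lon / (2 * np.pi) + 0.5) * w).astype(int) % w
    v = ((lat / np.pi + 0.5) * h).astype(int) % h
    return equirect[v, u].reshape(out_h, out_w, -1)
```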


Proceedings of the 1st International Workshop on Multimedia Content Analysis in Sports - MMSports'18 | 2018

An On-site Visual Feedback Method Using Bullet-Time Video

Takasuke Nagai; Hidehiko Shishido; Yoshinari Kameda; Itaru Kitahara

This paper describes an on-site visual feedback method that executes all processes, from capturing multi-view videos to generating and displaying bullet-time videos, in real time. To realize on-site visual feedback in a dynamic scene where the subject moves around, such as a sports scene, the target point must be set automatically to where an observer pays attention. We combine an RGB-D camera that detects the position of the subject with our bullet-time video generation method running in real time, and achieve automatic setting of the target point based on the measured 3D position. Furthermore, we incorporate a function that detects keyframes and automatically switches the viewpoint, enabling easier and more intuitive observation.
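Turning the RGB-D detection into a bullet-time target point reduces to back-projecting the subject's pixel with its depth through the depth camera's intrinsics. A minimal sketch, with the signature and coordinate conventions assumed rather than taken from the paper:

```python
import numpy as np

def target_point_from_depth(u, v, depth_m, K):
    """Back-project pixel (u, v) with depth depth_m (metres) into a 3D
    point in the RGB-D camera's frame; K is the depth camera's intrinsic
    matrix. A camera-to-world transform would then map this point into
    the multi-view rig's coordinate system."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])
```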


International Symposium on Mixed and Augmented Reality | 2017

Pseudo-Dolly-In Video Generation Combining 3D Modeling and Image Reconstruction

Hidehiko Shishido; Kazuki Yamanaka; Yoshinari Kameda; Itaru Kitahara

This paper proposes a pseudo-dolly-in video generation method that reproduces motion parallax by applying image reconstruction processing to multi-view videos. Since dolly-in video is captured by moving a camera forward, it reproduces motion parallax and conveys a sense of immersion. However, at a sporting event in a large-scale space, moving a camera is difficult, so our research generates dolly-in video from multi-view images captured by fixed cameras. Dolly-in video can be generated by applying Image-Based Modeling techniques, but the video quality is often damaged by 3D estimation errors. Bullet-time, on the other hand, realizes high-quality video observation, but moving the virtual viewpoint away from the capturing positions is difficult. To solve these problems, we propose a method that generates a pseudo-dolly-in image by incorporating 3D estimation and image reconstruction techniques into bullet-time processing, and we show its effectiveness by applying it to multi-view videos captured at an actual soccer stadium. In the experiment, we compared the proposed method with digital zoom images and with dolly-in video generated by an Image-Based Modeling and Rendering method.
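To see why this differs from digital zoom, consider re-rendering an estimated 3D point cloud from a virtual camera translated forward along its optical axis: near points shift more than far ones, which is exactly motion parallax. The point-splatting sketch below (no z-buffering or hole filling; every name is hypothetical) illustrates that geometric core, not the authors' reconstruction pipeline.

```python
import numpy as np

def pseudo_dolly_in(pts_w, colors, K, R, t, forward_m, out_h, out_w):
    """Splat world points pts_w (Nx3) with per-point colors (Nx3) into
    a virtual camera moved forward_m metres along its optical axis.
    Extrinsics convention: x_cam = R @ x_world + t."""
    t_new = t - np.array([0.0, 0.0, forward_m])   # advance along camera z
    pc = R @ pts_w.T + t_new.reshape(3, 1)        # world -> camera frame
    valid = pc[2] > 0.1                           # keep points in front
    uv = K @ pc[:, valid]
    uv = (uv[:2] / uv[2]).round().astype(int)     # project to pixels
    img = np.zeros((out_h, out_w, 3), dtype=colors.dtype)
    inside = (uv[0] >= 0) & (uv[0] < out_w) & (uv[1] >= 0) & (uv[1] < out_h)
    img[uv[1, inside], uv[0, inside]] = colors[valid][inside]
    return img
```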


IEEE Virtual Reality Conference | 2018

A Calibration Method for Large-Scale Projection Based Floor Display System

Chun Xie; Hidehiko Shishido; Yoshinari Kameda; Kenji Suzuki; Itaru Kitahara


The Journal of The Institute of Image Information and Television Engineers | 2018

On-site Visual Feedback System with Multi-View Video Contents

Takasuke Nagai; Hidehiko Shishido; Yoshinari Kameda; Itaru Kitahara


International Conference on Image Processing | 2017

Calibration Method for Sparse Multi-View Cameras by Bridging with a Mobile Camera

Hidehiko Shishido; Itaru Kitahara


International Conference on Big Data | 2017

Proactive Preservation of World Heritage by Crowdsourcing and 3D Reconstruction Technology

Hidehiko Shishido; Yutaka Ito; Youhei Kawamura; Toshiya Matsui; Atsuyuki Morishima; Itaru Kitahara


International Conference on Big Data | 2017

Method to Generate Disaster-Damage Map Using 3D Photometry and Crowdsourcing

Koyo Kobayashi; Hidehiko Shishido; Yoshinari Kameda; Itaru Kitahara

Collaboration


Dive into Hidehiko Shishido's collaborations.

Top Co-Author: Chun Xie (University of Tsukuba)