Songkran Jarusirisawad
Keio University
Publications
Featured research published by Songkran Jarusirisawad.
Electronic Imaging | 2008
Hideo Saito; Hidei Kimura; Satoru Shimada; Takeshi Naemura; Jun Kayahara; Songkran Jarusirisawad; Vincent Nozick; Hiroyo Ishikawa; Toshiyuki Murakami; Jun Aoki; Akira Asano; T. Kimura; Masayuki Kakehata; Fumio Sasaki; Hidehiko Yashiro; Masahiko Mori; Kenji Torizuka; Kouta Ino
We present a novel 3D display that can show arbitrary 3D content in free space using laser-plasma scanning in the air. Laser-plasma technology can generate a point of illumination at an arbitrary position in free space. By scanning the position of this illumination, we can display a set of illuminated points in space, which realizes a 3D display in free space. This display was first presented at the Emerging Technologies venue of SIGGRAPH 2006 and forms the basic platform of our 3D display project. In this presentation, we introduce the history of the development of the laser-plasma scanning 3D display, and then describe recent developments in 3D content analysis and processing technology for realizing innovative media presentation in free 3D space. One recent development allows preferred 3D content to be delivered to the display in a very flexible manner, which gives us a platform for developing interactive 3D content presentation systems, such as interactive art presentations, using the display. We also present the future plans of this 3D display research project.
Signal Processing: Image Communication | 2009
Songkran Jarusirisawad; Hideo Saito
This paper proposes a novel method for synthesizing free viewpoint video captured by uncalibrated, purely rotating and zooming cameras. Neither the intrinsic nor the extrinsic parameters of our cameras are known. Projective grid space (PGS), which is the 3D space defined by the epipolar geometry of two basis cameras, is employed for weak camera calibration. Trifocal tensors are used to relate non-basis cameras to PGS. Given the trifocal tensors in the initial frame, our method automatically computes the trifocal tensors in the other frames. The scale-invariant feature transform (SIFT) is used to find corresponding points in a natural scene between the initial frame and the other frames. Finally, free viewpoint video is synthesized based on the reconstructed visual hull. In the experiments, free viewpoint video captured by uncalibrated hand-held cameras is successfully synthesized using the proposed method.
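The frame-to-frame correspondence step lends itself to a short illustration. The following is a minimal sketch, not the authors' code, assuming OpenCV's SIFT implementation: it matches keypoints between the initial frame and a later frame and filters them with Lowe's ratio test, yielding the kind of 2D-2D correspondences from which the trifocal tensors could then be propagated.

```python
import cv2
import numpy as np

def match_frames(img_init, img_t, ratio=0.75):
    """Find SIFT correspondences between the initial frame and frame t."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_init, None)
    kp2, des2 = sift.detectAndCompute(img_t, None)

    # Lowe's ratio test on 2-nearest-neighbor matches keeps only
    # distinctive correspondences and discards ambiguous ones.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    return pts1, pts2
```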
International Conference on Computer Vision | 2009
Songkran Jarusirisawad; Hideo Saito; Vincent Nozick
In this paper, we present a new online video-based rendering (VBR) method that creates new views of a scene from uncalibrated cameras. Our method does not require information about the cameras' intrinsic parameters. To obtain a geometrical relation among the cameras, we use projective grid space (PGS), a 3D space defined by the epipolar geometry between two basis cameras. The other cameras are registered to the same 3D space by trifocal tensors involving these basis cameras. We simultaneously reconstruct and render the novel view using our proposed plane-sweep algorithm in PGS. To achieve real-time performance, we implemented the proposed algorithm on the graphics processing unit (GPU). We succeed in creating novel-view images in real time from uncalibrated cameras, and the results show the efficiency of the proposed method.
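As a rough illustration of the plane-sweep scoring described above, here is a minimal CPU sketch (the paper's version runs on the GPU). The homographies mapping each camera image onto the novel view through each sweeping plane are assumed to be given, e.g. derived from the PGS registration; this is not the authors' implementation.

```python
import cv2
import numpy as np

def plane_sweep(images, homographies, n_planes, h, w):
    """Pick, per pixel, the sweeping plane with the best color consistency.

    images:       list of K float32 RGB images
    homographies: homographies[k][p] maps camera k's image onto the
                  novel view through plane p (hypothetical stand-in
                  for the actual PGS-derived mapping)
    """
    best_score = np.full((h, w), np.inf, dtype=np.float32)
    best_plane = np.zeros((h, w), dtype=np.int32)
    for p in range(n_planes):
        # Warp every camera image onto the current plane hypothesis.
        warped = [cv2.warpPerspective(img, homographies[k][p], (w, h))
                  for k, img in enumerate(images)]
        stack = np.stack(warped)                    # (K, h, w, 3)
        # Low color variance across cameras means the plane is a good
        # depth hypothesis for that pixel.
        score = stack.var(axis=0).sum(axis=-1)
        better = score < best_score
        best_score[better] = score[better]
        best_plane[better] = p
    return best_plane, best_score
```

The per-pixel winning plane gives both a coarse depth map and, by blending the consistent colors, the novel-view image, which is why reconstruction and rendering can happen in a single pass.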
International Conference on Distributed Smart Cameras | 2009
Takahide Hosokawa; Songkran Jarusirisawad; Hideo Saito
We present an online rendering system that removes occluding objects in front of the target scene from an input video, using multiple videos taken with multiple cameras. To obtain geometrical relations between all cameras, we use projective grid space (PGS), defined by the epipolar geometry between two basis cameras. We then apply the plane-sweep algorithm to generate a depth image for the input camera. By excluding the area of the occluding objects from the volume of the sweeping planes, we can generate a depth map free of the occluding objects. Using this depth map, we can render an obstacle-free image from the multiple camera videos. Since we use the graphics processing unit (GPU) for computation, we achieve real-time online rendering on a standard PC with multiple USB cameras.
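The exclusion of the occluder from the sweep volume can be sketched as a masked consistency test. This is an illustrative stand-in, not the authors' implementation: pixels covered by a known occluder mask in a given camera simply do not vote, so the depth map (and hence the rendering) is computed as if the obstacle were absent.

```python
import numpy as np

def masked_consistency(warped, valid_masks):
    """Color-consistency score that ignores occluded pixels.

    warped:      (K, h, w, 3) camera images warped onto one sweeping plane
    valid_masks: (K, h, w) booleans, True where the view is NOT covered
                 by the occluding object (masks assumed given)
    """
    valid = valid_masks[..., None].astype(np.float32)
    count = np.maximum(valid.sum(axis=0), 1.0)      # voting views per pixel
    mean = (warped * valid).sum(axis=0) / count
    var = (((warped - mean) ** 2) * valid).sum(axis=0) / count
    # Per-pixel score: summed color variance over the valid views only.
    return var.sum(axis=-1)
```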
3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2008
Songkran Jarusirisawad; Hideo Saito
This paper proposes a method for synthesizing free viewpoint video captured by uncalibrated multiple cameras. Each camera is allowed to zoom and rotate freely during capture. Neither the intrinsic nor the extrinsic parameters of our cameras are known. Projective grid space (PGS), which is the 3D space defined by the epipolar geometry of two basis cameras, is employed to calibrate the dynamic multiple cameras, because geometrical relations among cameras in PGS are obtained from 2D-2D point correspondences between views. We utilize keypoint recognition to find corresponding points in the natural scene for registering cameras to PGS. The moving object is segmented via graph-cut optimization. Finally, free viewpoint video is synthesized based on the reconstructed visual hull. In the experiments, free viewpoint video captured by uncalibrated cameras is successfully synthesized using the proposed method.
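The graph-cut segmentation step can be illustrated with a small sketch using the PyMaxflow library as a stand-in min-cut solver. The energy terms here are hypothetical; the paper does not prescribe this particular formulation.

```python
import maxflow  # PyMaxflow, used here as a stand-in min-cut solver

def segment(fg_cost, bg_cost, smoothness=1.0):
    """Binary foreground/background segmentation by graph cut.

    fg_cost, bg_cost: (h, w) float arrays of per-pixel data costs,
    e.g. negative log-likelihoods under assumed color models.
    """
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(fg_cost.shape)
    g.add_grid_edges(nodes, smoothness)            # 4-connected smoothness
    g.add_grid_tedges(nodes, fg_cost, bg_cost)     # per-pixel data terms
    g.maxflow()                                    # solve the min-cut
    return g.get_grid_segments(nodes)              # boolean label map
```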
International Conference on Distributed Smart Cameras | 2007
Songkran Jarusirisawad; Hideo Saito
This paper proposes a novel method for calibrating multiple hand-held cameras, targeted at diminished reality applications. Our method does not require any special markers or information about the camera parameters. Projective grid space (PGS), a 3D space defined by the epipolar geometry of two basis cameras, is used for dynamic camera calibration. Geometrical relations among cameras in PGS are obtained from 2D-2D point correspondences between views. We utilize the scale-invariant feature transform (SIFT) to find corresponding points in the natural scene for registering cameras to PGS. The moving object is segmented via graph-cut optimization. Finally, the reconstructed visual hull is used to synthesize free viewpoint video in which the unwanted or occluding object is deliberately removed. In the experiments, free viewpoint video captured by hand-held cameras is successfully synthesized without the unwanted object using the proposed method.
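The visual-hull step shared by this and the preceding papers can be illustrated with a simple carving sketch. Everything here is a hypothetical stand-in: `project` abstracts away the actual PGS projection (via fundamental matrices and trifocal tensors), and the silhouettes are the graph-cut masks.

```python
import numpy as np

def carve(grid_pts, cameras, silhouettes, project):
    """Keep grid points whose projection lies inside every silhouette.

    grid_pts:    (N, 3) candidate points in the (projective) grid space
    silhouettes: boolean (h, w) masks, one per camera
    project:     project(cam, pts) -> (N, 2) pixel coords (stand-in)
    """
    inside = np.ones(len(grid_pts), dtype=bool)
    for cam, sil in zip(cameras, silhouettes):
        uv = np.round(project(cam, grid_pts)).astype(int)
        h, w = sil.shape
        ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < h)
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[ok] = sil[uv[ok, 1], uv[ok, 0]]
        inside &= hit        # carve away points outside any silhouette
    return grid_pts[inside]
```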
Journal of Visual Communication and Image Representation | 2010
Songkran Jarusirisawad; Vincent Nozick; Hideo Saito
In this paper, we present a new online video-based rendering (VBR) method that creates new views of a scene from uncalibrated cameras. Our method does not require information about the cameras' intrinsic parameters. To obtain a geometrical relation among the cameras, we use projective grid space (PGS), a 3D space defined by the epipolar geometry between two basis cameras. The other cameras are registered to the same 3D space by trifocal tensors involving these basis cameras. We simultaneously reconstruct and render the novel view using our proposed plane-sweep algorithm in PGS. To achieve real-time performance, we implemented the proposed algorithm on the graphics processing unit (GPU). We succeed in creating novel-view images in real time from uncalibrated cameras, and the results show the efficiency of the proposed method.
Progress in Informatics | 2010
Songkran Jarusirisawad; Takahide Hosokawa; Hideo Saito
The Institute of Electronics Engineers of Korea, Other Publications | 2010
Hideo Saito; Yuhki Takaya; Songkran Jarusirisawad; Yuko Uematsu; Francois de Sorbier
Archive | 2007
Songkran Jarusirisawad; Hideo Saito