Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jun Shimamura is active.

Publication


Featured research published by Jun Shimamura.


Proceedings IEEE Workshop on Omnidirectional Vision (Cat. No.PR00704) | 2000

Construction of an immersive mixed environment using an omnidirectional stereo image sensor

Jun Shimamura; Naokazu Yokoya; Haruo Takemura; Kazumasa Yamazawa

Recently, virtual reality (VR) systems have been incorporating rich information available in the real world into VR environments in order to improve their reality. This trend has created the field of mixed reality, which seamlessly integrates real and virtual worlds. This paper describes a novel approach to the construction of a mixed environment. The approach is based on capturing the dynamic real world by using a video-rate omnidirectional stereo image sensor. The mixed environment is constructed from two different types of models: (1) a texture-mapped cylindrical 3-D model of dynamic real scenes and (2) a 3-D computer graphics (CG) model. The cylindrical 3-D model is generated from full panoramic stereo images obtained by the omnidirectional sensor, which uses a pair of hexagonal pyramidal mirrors and twelve CCD cameras. A prototype system has been developed to confirm the feasibility of the proposed method, in which panoramic binocular stereo images of the mixed environment are projected on a cylindrical immersive display depending on the user's viewpoint in real time.
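
The cylindrical 3-D model described above amounts to back-projecting each panorama pixel to a 3-D point at the range recovered by the stereo sensor. Below is a minimal sketch of that back-projection in Python, assuming a panoramic depth map has already been estimated from the stereo pair; the function name and the 60-degree vertical field of view are illustrative, not taken from the paper.

```python
import numpy as np

def panorama_to_cylindrical_points(depth, v_fov_rad):
    """Back-project a full 360-degree panoramic depth map into 3-D points.

    Each column of the panorama corresponds to an azimuth angle around the
    cylinder axis; each row corresponds to an elevation angle within the
    vertical field of view. `depth` holds the radial distance (metres) of
    the scene point seen at each pixel.
    """
    h, w = depth.shape
    azimuth = np.linspace(0.0, 2.0 * np.pi, w, endpoint=False)   # one angle per column
    elevation = np.linspace(v_fov_rad / 2, -v_fov_rad / 2, h)    # top row looks up
    az, el = np.meshgrid(azimuth, elevation)

    # Back-projection around the vertical (z) axis.
    x = depth * np.cos(el) * np.cos(az)
    y = depth * np.cos(el) * np.sin(az)
    z = depth * np.sin(el)
    return np.stack([x, y, z], axis=-1)   # (h, w, 3) vertex grid

# Example: a synthetic panorama 10 m away everywhere.
points = panorama_to_cylindrical_points(np.full((256, 1024), 10.0), np.deg2rad(60))
print(points.shape)   # (256, 1024, 3)
```

The resulting vertex grid can be textured directly with the source panorama, which corresponds to the texture-mapped cylindrical model the abstract describes.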


human factors in computing systems | 2003

Tangible search for stacked objects

Kensaku Fujii; Jun Shimamura; Kenichi Arakawa; Tomohiko Arikawa

The goal of Tangible Search is to more effectively support the user in physically locating one of a number of stacked objects. It consists of two operations, automatic logging of stacked objects and direct annotation; image processing is used to determine the heights of the stack and of the user's finger. Tangible Search offers stable and accurate 3D analysis since it uses our previously proposed method. It employs a single camera with a compound half-mirror; this configuration also allows the top and side views of the stack to be captured simultaneously. Our approach makes it easier for the user to handle stacks of items, and it enhances the tabletop metaphor for intuitive interaction in real-world environments where stacks are very common.
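
The height measurements the system relies on can be illustrated with a small sketch: threshold a side-view image of the stack, convert the topmost foreground row to a physical height, and pick the logged object whose top lies just below the user's finger. This is a hypothetical reconstruction rather than the paper's actual pipeline; the calibration constants and function names are assumptions.

```python
import numpy as np

def stack_height_mm(side_view_mask, mm_per_pixel, table_row):
    """Estimate stack height from a binarised side-view image.

    `side_view_mask` is a boolean array where True marks pixels belonging to
    the stack; `table_row` is the image row of the tabletop surface.
    """
    rows = np.nonzero(side_view_mask.any(axis=1))[0]
    if rows.size == 0:
        return 0.0
    top_row = rows.min()                       # image rows grow downward
    return (table_row - top_row) * mm_per_pixel

def object_at_finger(finger_height_mm, logged_tops_mm):
    """Return the index of the logged object whose top is closest below the finger."""
    candidates = [i for i, top in enumerate(logged_tops_mm) if top <= finger_height_mm]
    if not candidates:
        return None
    return max(candidates, key=lambda i: logged_tops_mm[i])
```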


international conference on pattern recognition | 2000

Construction and presentation of a virtual environment using panoramic stereo images of a real scene and computer graphics models

Jun Shimamura; Haruo Takemura; Naokazu Yokoya; Kazumasa Yamazawa

The progress in computer graphics has made it possible to construct various virtual environments such as urban or natural scenes. The paper proposes a hybrid method to construct a realistic virtual environment containing an existing real scene. The proposed method combines two different types of 3-D models. A 3-D geometric model is used to represent virtual objects in the user's vicinity, enabling the user to handle virtual objects. A texture-mapped cylindrical 2.5-D model of a real scene is used to render the background of the environment, maintaining real-time rendering and increasing the realistic sensation. The cylindrical 2.5-D model is generated from cylindrical stereo images captured by an omnidirectional stereo imaging sensor. A prototype mixed reality system has been developed to confirm the feasibility of the method, in which panoramic binocular stereo images are projected on a cylindrical immersive projective display depending on the user's viewpoint in real time.
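
The 2.5-D background model can be pictured as a cylinder whose radius is modulated per pixel by the recovered depth. The sketch below builds such a triangle mesh from a cylindrical depth map; it is a plausible construction under that reading of the abstract, not code from the paper, and the function and parameter names are invented for illustration.

```python
import numpy as np

def cylindrical_depth_to_mesh(depth, height_m):
    """Turn a cylindrical depth map into a 2.5-D triangle mesh.

    Each pixel becomes a vertex placed at its measured radius around the
    cylinder axis; neighbouring pixels are joined into two triangles, and
    the source panorama can be used directly as the texture.
    """
    h, w = depth.shape
    theta = np.linspace(0.0, 2.0 * np.pi, w, endpoint=False)
    y = np.linspace(height_m / 2, -height_m / 2, h)
    tt, yy = np.meshgrid(theta, y)

    verts = np.stack([depth * np.cos(tt), yy, depth * np.sin(tt)], axis=-1).reshape(-1, 3)

    faces = []
    for r in range(h - 1):
        for c in range(w):
            c2 = (c + 1) % w                  # wrap around the panorama seam
            a, b = r * w + c, r * w + c2
            d, e = (r + 1) * w + c, (r + 1) * w + c2
            faces.append((a, b, e))
            faces.append((a, e, d))
    return verts, np.asarray(faces)
```

Foreground CG objects can then be rendered with ordinary depth testing in front of this mesh, which matches the near-geometry/far-panorama split the abstract describes.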


international geoscience and remote sensing symposium | 2002

Interactive browsing of 3D urban model on the web

Kensaku Fujii; Jun Shimamura; Tomohiko Arikawa

This paper introduces an interactive browsing system that efficiently supports the reliable distribution of 3D GIS data, including 3D urban models with texture, attributes, and aerial/satellite images. Our approach uses view-dependent simplification to reduce the number of primitives distributed and rendered. We prepare, in advance, only the primitives visible from each 3D viewpoint and viewing direction. This makes it possible to distribute only the visible data, so no occlusion culling is needed during rendering. Additionally, we suggest a compression concept to control the growth of the precomputed view sets. Experimental results show that our approach is stable and reliable enough to make the distribution and rendering of 3D GIS data possible.
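
One way to realise this view-dependent distribution is to quantise the client viewpoint into grid cells and the view direction into sectors, and keep a precomputed table of visible primitive IDs per cell and sector. The sketch below is a hypothetical server-side lookup under those assumptions; the table contents, cell size, and sector count are illustrative, not from the paper.

```python
import math

# Hypothetical precomputed table: (viewpoint cell, direction sector) -> visible primitive IDs.
# In practice this would be built offline by occlusion culling from each sampled view.
visible_sets = {
    ((0, 0), 0): {10, 11, 42},
    ((0, 0), 1): {11, 12},
    ((1, 0), 0): {42, 43},
}

CELL_SIZE_M = 50.0   # horizontal grid spacing of sampled viewpoints (assumed)
SECTORS = 8          # number of view-direction bins (assumed)

def primitives_for_view(x, y, heading_rad):
    """Return the precomputed set of primitives visible from the client's view."""
    cell = (int(x // CELL_SIZE_M), int(y // CELL_SIZE_M))
    sector = int((heading_rad % (2 * math.pi)) / (2 * math.pi) * SECTORS)
    return visible_sets.get((cell, sector), set())

def primitives_to_send(x, y, heading_rad, already_cached):
    """Only ship primitives the client has not received yet."""
    return primitives_for_view(x, y, heading_rad) - already_cached

print(sorted(primitives_to_send(10.0, 5.0, 0.2, already_cached={11})))   # [10, 42]
```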


international conference on 3d vision | 2015

Segmentation of 3D Lidar Points Using Extruded Surface of Cross Section

Hitoshi Niigaki; Jun Shimamura; Akira Kojima

We present a new unsupervised technique to segment 3D Lidar points in outdoor environments. The main idea of this work is to identify artificial objects according to the existence of extruded shapes. Many artificial objects are composed of extruded shapes such as cylinders, planes, cubes, and lines. Therefore, we detect these arbitrarily extruded shapes on the basis of an indicator of repetitive cross-section shapes, and connect the components according to the overlap strength between the extruded surfaces. Conventional segmentation methods that use local geometry information may sometimes produce erroneous results in scenes where there are many objects that are very near to and partially in contact with each other. In contrast, our method is more robust in these complex scenes because it uses large-scale surface overlap strength. Experiments show it provides good results in urban environments and expressway scenes.
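
The repetitive cross-section indicator can be approximated by slicing a point cluster along its extrusion axis, rasterising each slice into a 2-D occupancy grid, and scoring the overlap of consecutive slices. The following is a rough sketch of such an indicator, not the paper's algorithm; the slice height, grid resolution, and function name are assumptions.

```python
import numpy as np

def cross_section_repetition(points, axis=2, slice_h=0.2, grid=0.1):
    """Score how repetitive the cross sections of a point cluster are along one axis.

    The cluster is cut into slices along `axis`; each slice is rasterised into
    a 2-D occupancy grid, and consecutive grids are compared with an
    intersection-over-union overlap. A high mean overlap suggests an extruded
    shape (pole-, wall-, or trunk-like object).
    """
    coords = points[:, axis]
    other = np.delete(points, axis, axis=1)
    lo = coords.min()
    n_slices = max(int(np.ceil((coords.max() - lo) / slice_h)), 1)

    grids, origin = [], other.min(axis=0)
    for i in range(n_slices):
        sel = (coords >= lo + i * slice_h) & (coords < lo + (i + 1) * slice_h)
        cells = set(map(tuple, np.floor((other[sel] - origin) / grid).astype(int)))
        grids.append(cells)

    overlaps = []
    for a, b in zip(grids, grids[1:]):
        if a or b:
            overlaps.append(len(a & b) / len(a | b))
    return float(np.mean(overlaps)) if overlaps else 0.0
```

A cluster scoring high on this indicator would be treated as extruded; connecting neighbouring components by their surface overlap, as the abstract describes, would follow as a separate step.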


Archive | 2006

PROJECTION IMAGE CORRECTION DEVICE, AND ITS PROGRAM

Hiroyuki Arai; Jun Shimamura; Takayuki Yasuno


international conference on pattern recognition | 2012

Circular object detection based on separability and uniformity of feature distributions using Bhattacharyya Coefficient

Hitoshi Niigaki; Jun Shimamura; Masashi Morimoto


international conference on pattern recognition | 2012

A framework of three-dimensional object recognition which needs only a few reference images

Hiroko Yabushita; Jun Shimamura; Masashi Morimoto


Archive | 2006

Observation position tracking type video image providing apparatus and observation position tracking type video image providing program, and video image providing apparatus and the video image providing program

Hiroyuki Arai; Jun Shimamura; Takayuki Yasuno


Archive | 2001

SYSTEM, METHOD AND SERVER FOR THREE-DIMENSIONAL CG MODEL DISTRIBUTION, PROGRAM AND PROGRAM STORAGE MEDIUM THEREOF, THREE-DIMENSIONAL CG MODEL USER CLIENT, PROGRAM AND PROGRAM STORAGE MEDIUM THEREOF, AND THREE-DIMENSIONAL CG MODEL INDEX GENERATING METHOD

Tomohiko Arikawa; Kensaku Fujii; Jun Shimamura

Collaboration


Dive into Jun Shimamura's collaborations.

Top Co-Authors

Haruo Takemura
Nara Institute of Science and Technology

Kazumasa Yamazawa
Nara Institute of Science and Technology

Naokazu Yokoya
Nara Institute of Science and Technology

Yukinobu Taniguchi
Tokyo University of Science

Tatsuya Osawa
Nippon Telegraph and Telephone