Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Haruo Takemura is active.

Publication


Featured research published by Haruo Takemura.


Journal of Information Processing | 2017

Virtual and Augmented Reality on the 5G Highway

Jason Orlosky; Kiyoshi Kiyokawa; Haruo Takemura

In recent years, virtual and augmented reality have begun to take advantage of the high-speed capabilities of data streaming technologies and wireless networks. However, limitations like bandwidth and latency still prevent us from achieving high-fidelity telepresence and collaborative virtual and augmented reality applications. Fortunately, both researchers and engineers are aware of these problems and have set out to design 5G networks to help us move to the next generation of virtual interfaces. This paper reviews state-of-the-art virtual and augmented reality communications technology and outlines current efforts to design an effective, ubiquitous 5G network that can adapt to the demands of virtual applications. We discuss application needs in domains like telepresence, education, healthcare, streaming media, and haptics, and provide guidelines and future directions for growth based on this new network infrastructure.
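
To make the bandwidth argument concrete, here is a rough, illustrative calculation (not from the paper) of why uncompressed HMD video overwhelms conventional wireless links; the resolution and refresh figures are assumptions.

```python
# Back-of-the-envelope bandwidth estimate for streaming a stereo HMD feed.
# Every parameter here is an illustrative assumption, not a figure from the paper.

width, height = 3840, 2160    # assumed per-eye resolution
fps = 90                      # assumed refresh rate
bits_per_pixel = 24           # uncompressed RGB
eyes = 2

raw_bps = width * height * bits_per_pixel * fps * eyes
print(f"Uncompressed: {raw_bps / 1e9:.1f} Gbit/s")                # ~35.8 Gbit/s
print(f"At 100:1 compression: {raw_bps / 100 / 1e6:.0f} Mbit/s")  # ~358 Mbit/s
```

Even heavily compressed, a single telepresence stream sits in the hundreds of megabits per second, which is the scale of demand that 5G-class networks are being designed to absorb.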


International Symposium on Mixed and Augmented Reality | 2017

VisMerge: Light Adaptive Vision Augmentation via Spectral and Temporal Fusion of Non-visible Light

Jason Orlosky; Peter Kim; Kiyoshi Kiyokawa; Tomohiro Mashita; Photchara Ratsamee; Yuki Uranishi; Haruo Takemura

Low-light situations pose a significant challenge to individuals working in a variety of fields such as firefighting, rescue, maintenance, and medicine. Tools like flashlights and infrared (IR) cameras have been used to augment light in the past, but they must often be operated manually, provide a field of view that is decoupled from the operator's own view, and use color schemes that can occlude content from the original scene. To help address these issues, we present VisMerge, a framework that combines a thermal imaging head-mounted display (HMD) with algorithms that temporally and spectrally merge video streams of different light bands into the same field of view. For temporal synchronization, we first develop a variant of the time warping algorithm used in virtual reality (VR), redesigned to merge video see-through (VST) cameras with different latencies. Next, using computer vision and image compositing, we develop five new algorithms designed to merge non-uniform video streams from a standard RGB camera and a small form-factor IR camera. We then implement six other existing fusion methods and conduct a series of comparative experiments, including a system-level analysis of the augmented reality (AR) time warping algorithm, a pilot experiment testing perceptual consistency across all eleven merging algorithms, and an in-depth performance experiment testing the top algorithms in a VR (simulated AR) search task. Results showed that we can reduce temporal registration error due to inter-camera latency by an average of 87.04%, that the wavelet and inverse stipple algorithms were rated highest perceptually, that noise modulation performed best, and that freedom of user movement is significantly increased with visualizations engaged.
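
The abstract does not spell out the inter-camera time-warping variant, so the sketch below shows only the generic idea: buffer both streams with timestamps, pair each RGB frame with the IR frame captured closest in time, and hand any residual offset to a warp step. `FrameBuffer`, `warp`, and `fuse` are hypothetical names, not the paper's API.

```python
import bisect
from collections import deque

class FrameBuffer:
    """Ring buffer of timestamped frames for one camera stream."""
    def __init__(self, maxlen=60):
        self.times = deque(maxlen=maxlen)
        self.frames = deque(maxlen=maxlen)

    def push(self, t, frame):
        self.times.append(t)
        self.frames.append(frame)

    def nearest(self, t):
        """Return the buffered (timestamp, frame) pair closest in time to t."""
        times = list(self.times)
        i = bisect.bisect_left(times, t)
        best = min((j for j in (i - 1, i) if 0 <= j < len(times)),
                   key=lambda j: abs(times[j] - t))
        return times[best], self.frames[best]

def fuse_latest(t_now, rgb_buf, ir_buf, warp, fuse):
    """Pair the RGB and IR frames captured closest to t_now, warp away the
    residual inter-camera latency, then merge the two bands."""
    t_rgb, rgb = rgb_buf.nearest(t_now)
    t_ir, ir = ir_buf.nearest(t_now)
    ir = warp(ir, dt=t_rgb - t_ir)   # e.g. a rotation-only reprojection
    return fuse(rgb, ir)
```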


Intelligent User Interfaces | 2017

Adaptive View Management for Drone Teleoperation in Complex 3D Structures

John Thomason; Photchara Ratsamee; Kiyoshi Kiyokawa; Pakpoom Kriangkomol; Jason Orlosky; Tomohiro Mashita; Yuki Uranishi; Haruo Takemura

Drone navigation in complex environments poses many problems for teleoperators. Especially in 3D structures like buildings or tunnels, viewpoints are often limited to the drone's current camera view, nearby objects can be collision hazards, and frequent occlusion can hinder accurate manipulation. To address these issues, we have developed a novel teleoperation interface that provides the user with environment-adaptive viewpoints, automatically configured to improve safety and smooth user operation. This real-time adaptive viewpoint system takes robot position, orientation, and 3D point cloud information into account to modify the user's viewpoint and maximize visibility. Our prototype uses simultaneous localization and mapping (SLAM) based reconstruction with an omnidirectional camera, and we use the resulting models as well as simulations in a series of preliminary experiments testing navigation of various structures. Results suggest that automatic viewpoint generation can outperform first- and third-person view interfaces for virtual teleoperators in terms of ease of control and accuracy of robot operation.
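
The abstract only names the system's inputs (robot pose and a 3D point cloud), so the following is a toy stand-in for the adaptive viewpoint logic, not the paper's method: sample candidate camera positions around the drone and keep the one whose sight line is least occluded by cloud points. All parameters are assumptions.

```python
import numpy as np

def occlusion(viewpoint, target, cloud, corridor=0.3):
    """Count point-cloud points within `corridor` metres of the sight line
    from a candidate viewpoint to the drone (fewer is better)."""
    d = target - viewpoint
    length = np.linalg.norm(d)
    d = d / length
    rel = cloud - viewpoint
    along = rel @ d                               # distance along the sight line
    between = (along > 0) & (along < length)      # only between camera and drone
    perp = np.linalg.norm(rel[between] - np.outer(along[between], d), axis=1)
    return int(np.count_nonzero(perp < corridor))

def best_viewpoint(target, cloud, radius=2.0, height=0.5, n=64):
    """Sample viewpoints on a circle around the drone and pick the
    least-occluded one."""
    a = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    offsets = np.stack([radius * np.cos(a),
                        radius * np.sin(a),
                        np.full(n, height)], axis=1)
    return min(target + offsets, key=lambda v: occlusion(v, target, cloud))
```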


Journal of Bioinformatics and Computational Biology | 2017

Bone marrow cavity segmentation using graph-cuts with wavelet-based texture feature

Hironori Shigeta; Tomohiro Mashita; Junichi Kikuta; Shigeto Seno; Haruo Takemura; Masaru Ishii; Hideo Matsuda

Emerging bioimaging technologies enable us to capture various dynamic cellular activities. As large amounts of data are now produced and manually processing such massive numbers of images is becoming unrealistic, automatic analysis methods are required. One issue for automatic image segmentation is that image-acquisition conditions vary, so many manual inputs are commonly required for each image. In this paper, we propose a bone marrow cavity (BMC) segmentation method for bone images, as the BMC is considered to be related to the mechanisms of bone remodeling, osteoporosis, and other conditions. To reduce the manual input needed to segment the BMC, we classify texture patterns using wavelet transformation and a support vector machine. We also integrate the result of texture pattern classification into a graph-cuts-based image segmentation method, because texture analysis alone does not consider spatial continuity. Our method is applicable to a particular frame in an image sequence in which the condition of the fluorescent material is variable. In our experiments, we evaluated the method with nine types of mother wavelets and several sets of scale parameters. The proposed method combining graph cuts and texture pattern classification performs well without manual input from the user.
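
A minimal sketch of the texture-classification stage, assuming PyWavelets and scikit-learn; the sub-band energy feature and all parameters are assumptions, and the graph-cut integration is omitted.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_texture_features(patch, wavelet="db4", level=2):
    """Describe a grey-level patch by the mean absolute coefficient of each
    wavelet sub-band, a simple texture signature."""
    coeffs = pywt.wavedec2(patch, wavelet, level=level)
    feats = [np.mean(np.abs(coeffs[0]))]              # approximation band
    for h, v, d in coeffs[1:]:                        # detail bands per level
        feats += [np.mean(np.abs(h)), np.mean(np.abs(v)), np.mean(np.abs(d))]
    return np.array(feats)

def train_texture_classifier(patches, labels):
    """Fit an SVM that labels patches as cavity / non-cavity texture; its
    per-patch predictions can then seed the unary term of a graph cut."""
    X = np.array([wavelet_texture_features(p) for p in patches])
    return SVC(kernel="rbf").fit(X, labels)
```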


International Journal of Educational Technology in Higher Education | 2017

Are Japanese digital natives ready for learning English online? A preliminary case study at Osaka University

Parisa Mehran; Mehrasa Alizadeh; Ichiro Koguchi; Haruo Takemura

Assessing learner readiness for online learning is the starting point for online course design. This study thus aimed to evaluate Japanese learners' perceived e-readiness for learning English online prior to designing and developing an online EGAP (English for General Academic Purposes) course at Osaka University. A sample of 299 undergraduate Japanese students completed a translated and adapted version of the Technology Survey developed by Winke and Goertler (CALICO Journal 25(3): 482–509, 2008). The questionnaire included items about respondents' ownership of and access to technology tools, their ability to perform user tasks from basic to advanced, their personal and educational use of Web 2.0 tools, and their willingness to take online English courses. The informants were found to have personal ownership of and/or adequate access to technological devices and the Internet at home or at the university. While their keyboarding skills were reported to be relatively low, the self-assessment data indicate that the participants know about general Web 2.0 tools and use them in daily life, but not within educational settings. The students were also generally unwilling to take online courses, either fully online or blended. This finding further highlights the necessity of digital literacy training before implementing the prospective online course with a focus on EGAP.


World Haptics Conference | 2017

Mugginess sensation: Exploring its principle and prototype design

Kenta Hokoyama; Yoshihiro Kuroda; Ginga Kato; Kiyoshi Kiyokawa; Haruo Takemura

Recent haptic technologies can represent not only geometrical and mechanical properties of virtual objects, e.g., shape, softness, and natural frequency, but also thermal and other non-contact sensations such as hotness and wetness. However, techniques that artificially display ambient humidity, like the mugginess of highly humid weather, have not been explored. In this study, we provide the first comprehensive investigation of mugginess sensations and prototype mugginess displays, including their cross-modal impact on temperature sensations. Our experimental results revealed that highly humid air felt warmer than normal room air at a similar or even higher temperature when the hand was placed in a plastic container. We consider that this warmth arises from increased condensation and decreased evaporation of water between the vapor in the air and droplets on the skin surface. We designed a prototype mugginess display that can quickly control humidity using a constant humid airflow and an exhaust fan. We then conducted a threshold investigation of humidity perception on commonly exposed body parts (cheek, neck, and wrist), among which the cheek was found to be the most sensitive. The results obtained in this study expand the potential of haptic technologies and open up new possibilities for mugginess displays.
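
The abstract does not describe the display's control scheme; as an illustrative sketch of how such a device might hold a humidity setpoint, the sensor/actuator interface and gain below are hypothetical.

```python
def exhaust_fan_duty(target_rh, measured_rh, k_p=0.05):
    """Proportional control: with humid air supplied at a constant rate,
    run the exhaust fan harder the further the measured relative humidity
    sits above the target. Returns a duty cycle in [0, 1]."""
    error = measured_rh - target_rh
    return min(max(k_p * error, 0.0), 1.0)

# Example: 10 %RH above target -> fan at 50 % duty.
print(exhaust_fan_duty(target_rh=70.0, measured_rh=80.0))  # 0.5
```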


Human-Agent Interaction | 2017

Exploring Proxemics for Human-Drone Interaction

Alexander Yeh; Photchara Ratsamee; Kiyoshi Kiyokawa; Yuki Uranishi; Tomohiro Mashita; Haruo Takemura; Morten Fjeld; Mohammad Obaid

We present a human-centered social drone designed for use in crowded environments. Based on design studies and focus groups, we created a prototype of a social drone with a social shape, face, and voice for human interaction. We used the prototype for a proxemic study, comparing the distance from the drone that humans could comfortably accept with the distance they required for a non-social drone. The social shape with an added greeting voice markedly decreased the acceptable distance, as did current or previous pet ownership and male gender. We also explored the proximity sphere around humans with the social-shaped drone in a validation study varying lateral distance and height. Both lateral distance and the higher flight height of 1.8 m, compared to the lower height of 1.2 m, decreased the distance required for comfort as the drone approached.


ICAT-EGVE | 2017

A Mutual Motion Capture System for Face-to-face Collaboration.

Atsuyuki Nakamura; Kiyoshi Kiyokawa; Photchara Ratsamee; Tomohiro Mashita; Yuki Uranishi; Haruo Takemura

In recent years, motion capture (MoCap) technology for measuring body movement has been used in many fields. Moreover, MoCap targeting multiple people is becoming necessary in multi-user VR. Ideally, MoCap should require no wearable devices, so that natural motion can be captured easily. Some systems avoid wearable devices by using an RGB-D camera fixed in the environment, but this limits the user's working range. In this research, we therefore propose a MoCap technique for a multi-user VR environment using head-mounted displays (HMDs) that neither limits the user's working range nor requires wearable devices. In the proposed technique, an RGB-D camera is attached to each HMD and MoCap is carried out mutually. MoCap accuracy is improved by correcting the depth image. A prototype system was implemented to evaluate the effectiveness of the proposed method, and MoCap accuracy was compared under two conditions, with and without depth correction, while rotating the RGB-D camera. As a result, we confirmed that the proposed method decreases the number of frames with erroneous MoCap by 49% to 100% compared with the case without depth image correction.
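
The abstract does not detail the depth-image correction; the following is a plausible sketch, assuming the correction reprojects a slightly stale depth frame by the head rotation the HMD has measured since capture. The function name and interface are assumptions.

```python
import numpy as np

def correct_depth_for_rotation(depth, K, R):
    """Reproject a depth image captured just before a known camera rotation R
    so its pixels align with the current orientation.
    depth: HxW array in metres, K: 3x3 intrinsics, R: 3x3 rotation (old->new)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(float)
    pts = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)   # back-project to 3D
    pts = R @ pts                                           # rotate into new frame
    proj = K @ pts                                          # project back to pixels
    with np.errstate(divide="ignore", invalid="ignore"):
        uu = np.round(proj[0] / proj[2]).astype(int)
        vv = np.round(proj[1] / proj[2]).astype(int)
    ok = (proj[2] > 0) & (uu >= 0) & (uu < w) & (vv >= 0) & (vv < h)
    out = np.full_like(depth, np.nan, dtype=float)
    out[vv[ok], uu[ok]] = pts[2][ok]                        # corrected depth
    return out
```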


Transactions of the Virtual Reality Society of Japan | 2017

Hybrid Object and Screen Stabilized Visualization Techniques for an AR Assembly Support System

Bui Minh Khuong; Kiyoshi Kiyokawa; Tomohiro Mashita; Haruo Takemura


Transactions of the Virtual Reality Society of Japan | 2017

Camera Localization under a Variable Lighting Environment using Parametric Feature Database based on Lighting Simulation

Tomohiro Mashita; Alexander Plopski; Akira Kudo; Tobias Höllerer; Kiyoshi Kiyokawa; Haruo Takemura

Collaboration


Dive into Haruo Takemura's collaborations.

Top Co-Authors

Alexander Plopski

Nara Institute of Science and Technology
