Publication


Featured research published by Tomohiro Mashita.


Machine Vision Applications | 2006

Calibration Method for Misaligned Catadioptric Camera

Tomohiro Mashita; Yoshio Iwai; Masahiko Yachida

This paper proposes a calibration method for catadioptric camera systems, typified by HyperOmni Vision, that consist of a perspective camera and a mirror whose reflecting surface is a surface of revolution. The proposed method is based on conventional camera calibration and mirror posture estimation. Many camera calibration methods have been proposed, and during the last decade methods for catadioptric camera calibration have also appeared. The main problem with existing catadioptric calibration is that either the degrees of freedom of the mirror posture are limited or the accuracy of the estimated parameters suffers from nonlinear optimization. Our method, in contrast, estimates all five degrees of freedom of the mirror posture and is free from the volatility of nonlinear optimization. The mirror posture has five degrees of freedom because the mirror surface is a surface of revolution. Our method uses the mirror boundary and yields up to four candidate mirror postures; we apply an extrinsic parameter calibration method based on conic fitting for this estimation. Because the estimate of the mirror posture is not unique, we also propose a selection method for finding the best one. By using the conic-based analytical method, we avoid the initial value problem arising in nonlinear optimization. We conducted experiments on synthesized and real images to evaluate the performance of our method, and we discuss its accuracy.
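
The conic fitting is the analytical core of the method. As a rough illustration only (not the authors' implementation), the sketch below fits a general conic a·x² + b·xy + c·y² + d·x + e·y + f = 0 to detected mirror-boundary pixels by total least squares; the paper's analytical derivation of candidate mirror postures from such a conic is omitted here.

```python
# Minimal sketch: total-least-squares conic fit to mirror-boundary pixels.
import numpy as np

def fit_conic(points):
    """points: (N, 2) array of boundary pixel coordinates (x, y)."""
    x, y = points[:, 0], points[:, 1]
    # Design matrix for the homogeneous conic equation.
    D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    # The conic coefficients are the right singular vector of D with the
    # smallest singular value (least squares, defined up to scale/sign).
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]  # (a, b, c, d, e, f)

# Example: noisy samples of the ellipse x^2/400 + y^2/100 = 1.
t = np.linspace(0, 2 * np.pi, 200)
pts = np.column_stack([20 * np.cos(t), 10 * np.sin(t)])
pts += np.random.default_rng(0).normal(scale=0.1, size=pts.shape)
print(fit_conic(pts))  # ~ proportional to (1/400, 0, 1/100, 0, 0, -1)
```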


IEEE Virtual Reality Conference | 2014

The effectiveness of an AR-based context-aware assembly support system in object assembly

Bui Minh Khuong; Kiyoshi Kiyokawa; Andrew Miller; Joseph J. LaViola Jr.; Tomohiro Mashita; Haruo Takemura

This study evaluates the effectiveness of an AR-based context-aware assembly support system with the proposed AR visualization modes in object assembly. Although many AR-based assembly support systems have been proposed, few keep track of the assembly status in real time and automatically recognize error and completion states at each step, so the effectiveness of such context-aware systems remains unexplored. Our test-bed system displays guidance and error detection information corresponding to the recognized assembly status in the context of building block (LEGO) assembly. A user wearing a head-mounted display (HMD) can intuitively build a block structure on a table by visually confirming correct and incorrect blocks and locating where to attach new blocks. We propose two AR visualization modes: one displays guidance information directly overlaid on the physical model, while the other renders guidance information on a virtual model adjacent to the real one. An evaluation was conducted to compare these visualization modes and to determine the effectiveness of context-aware error detection. Our experimental results indicate that the visualization mode showing the target status next to the real objects of concern outperforms the traditional direct overlay under moderate registration accuracy and marker-based tracking.
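
As a loose illustration of the per-step status recognition the abstract describes, the sketch below compares a tracked set of blocks against the target configuration for the current step; the data model (position/type tuples) is a hypothetical simplification, not the system's actual tracking output.

```python
# Minimal sketch: classify the current assembly state at one step.
def assembly_status(detected, target_step):
    """detected, target_step: sets of (position, block_type) tuples."""
    errors = detected - target_step    # blocks placed where they should not be
    missing = target_step - detected   # blocks still to be attached
    if errors:
        return "error", errors
    if not missing:
        return "step_complete", set()
    return "in_progress", missing

step = {((0, 0), "2x4_red"), ((0, 1), "2x2_blue")}
print(assembly_status({((0, 0), "2x4_red")}, step))  # ('in_progress', {...})
print(assembly_status({((1, 0), "2x4_red")}, step))  # ('error', {...})
```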


International Symposium on Mixed and Augmented Reality | 2014

Analysing the effects of a wide field of view augmented reality display on search performance in divided attention tasks

Naohiro Kishishita; Kiyoshi Kiyokawa; Jason Orlosky; Tomohiro Mashita; Haruo Takemura; Ernst Kruijff

A wide field of view augmented reality display is a special type of head-worn device that enables users to view augmentations in the peripheral visual field. However, the actual effects of a wide field of view display on the perception of augmentations have not been widely studied. To improve our understanding of this type of display in divided attention search tasks, we conducted an in-depth experiment testing two view management methods, in-view and in-situ labelling. With in-view labelling, search target annotations appear on the display border with a corresponding leader line, whereas in-situ annotations appear without a leader line, as if affixed to the referenced objects in the environment. Results show that target discovery rates consistently drop with in-view labelling and increase with in-situ labelling as the display angle approaches 100 degrees of field of view. Past this point, the performance of the two view management methods begins to converge, suggesting equivalent discovery rates at approximately 130 degrees of field of view. Results also indicate that users exhibited lower discovery rates for targets appearing in peripheral vision, and that field of view has little impact on response time and mental workload.
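
To make the two view management methods concrete, here is a minimal geometric sketch under assumed screen coordinates: with in-view labelling, an annotation for an off-screen target is clamped to the display border (where a leader line towards the target would start), whereas an on-screen target keeps its in-situ position. Names and parameters are illustrative, not from the study.

```python
# Minimal sketch: in-view label anchoring for an off-screen target.
import numpy as np

def in_view_anchor(target_px, width, height, margin=10):
    """Clamp a projected target position (pixels) onto the display border,
    returning the label anchor; an on-screen target is returned unchanged."""
    cx, cy = width / 2.0, height / 2.0
    d = np.array(target_px, dtype=float) - (cx, cy)
    half = np.array([cx - margin, cy - margin])
    # Scale the direction vector so the point lands on the border rectangle.
    s = np.min(half / np.maximum(np.abs(d), 1e-9))
    if s >= 1.0:                    # already on screen: in-situ placement
        return tuple(target_px)
    return (cx + s * d[0], cy + s * d[1])

print(in_view_anchor((2500, 540), 1920, 1080))  # clamped to the right border
```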


IEEE Virtual Reality Conference | 2012

Human activity recognition for a content search system considering situations of smartphone users

Tomohiro Mashita; Kentaro Shimatani; Hiroki Miyamoto; Daijiro Komaki; Takahiro Hara; Kiyoshi Kiyokawa; Haruo Takemura; Shojiro Nishio

Smartphone users can search for information about surrounding facilities or a route to their destination. However, it is difficult to read or search for information while walking because of low legibility; to address this, users have to stop walking or enlarge the screen. Our previously proposed smartphone system switches information presentation policies in response to the user's context. In this paper, we describe the context recognition mechanism for this system, which estimates the user's context from the sensors embedded in a smartphone. We use a support vector machine (SVM) for context classification and compare four types of feature values based on the FFT and three types of wavelet transforms. Experimental results show recognition rates of 87.2% with the FFT, 90.9% with the Gabor wavelet, 91.8% with the Haar wavelet, and 92.1% with the Mexican hat wavelet.
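
A minimal sketch of this pipeline, assuming scikit-learn for the SVM and a hand-rolled Ricker (Mexican hat) wavelet; the window length, wavelet widths, and activity labels are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: wavelet features from sensor windows, classified by SVM.
import numpy as np
from sklearn.svm import SVC

def fft_features(window):
    """Magnitude spectrum of one sensor window (1-D array)."""
    return np.abs(np.fft.rfft(window))

def ricker(n, a):
    """Mexican hat (Ricker) wavelet of length n and width a."""
    t = np.arange(n) - (n - 1) / 2.0
    return (2 / (np.sqrt(3 * a) * np.pi ** 0.25)) \
        * (1 - (t / a) ** 2) * np.exp(-(t / a) ** 2 / 2)

def mexican_hat_features(window, widths=(2, 4, 8, 16)):
    """Mean absolute wavelet response at each scale."""
    return np.array([
        np.abs(np.convolve(window, ricker(min(10 * a, len(window)), a),
                           mode="same")).mean()
        for a in widths])

# Toy data: 128-sample windows with labels such as 0=standing, 1=walking.
rng = np.random.default_rng(0)
windows = rng.normal(size=(100, 128))
labels = rng.integers(0, 2, size=100)

X = np.array([mexican_hat_features(w) for w in windows])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```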


Procedia Computer Science | 2011

A Location-based Content Search System Considering Situations of Mobile Users

Takahiro Hara; Kentaro Shimatani; Tomohiro Mashita; Kiyoshi Kiyokawa; Shojiro Nishio; Haruo Takemura

Mobile devices have become a popular way to access information on the move. Users carry mobile devices all the time, and thus often search for location-based content (e.g., locations of specific spots around them, reviews about those spots, and routes to them) in their daily lives. However, physical restrictions of mobile devices such as display size and input capabilities constrain users' operations; for example, it is difficult to input a search query or move to another page. When searching for content with a mobile device, users' situations often change, and such changes may affect their information needs. It is therefore effective for search systems to provide users with information suited to their situations. In this paper, we present the design and implementation of a location-based content search system that considers mobile users' situations and aims to reduce the operational load of content searching when users stand or sit and can concentrate on the display to some extent. Our system decides the importance of each location-based category (whether it is useful for the user or not) based on the user's situation and presents information related to high-importance categories on the menus and map. Users can get content simply by selecting menus and markers on a map. We conducted a user experiment with 11 participants. The results show that users could get content more easily with our system than with a commercial Web search system and a map search system.
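
A toy sketch of the category-importance idea, with entirely hypothetical situations and weights: each location-based category is scored for the current situation, and the highest-scoring categories would populate the menus and map.

```python
# Minimal sketch: rank location-based categories by situation importance.
SITUATION_WEIGHTS = {  # hypothetical example values
    "waiting_at_station": {"cafe": 0.9, "station": 0.8, "restaurant": 0.2},
    "sightseeing":        {"landmark": 0.9, "restaurant": 0.7, "station": 0.3},
}

def top_categories(situation, k=2):
    """Return the k most important categories for the given situation."""
    weights = SITUATION_WEIGHTS.get(situation, {})
    return sorted(weights, key=weights.get, reverse=True)[:k]

print(top_categories("sightseeing"))  # ['landmark', 'restaurant']
```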


Ubiquitous Computing | 2013

A content search system considering the activity and context of a mobile user

Hiroki Miyamoto; Takahiro Hara; Daijiro Komaki; Kentaro Shimatani; Tomohiro Mashita; Kiyoshi Kiyokawa; Toshiaki Uemukai; Gen Hattori; Shojiro Nishio; Haruo Takemura

People routinely carry mobile devices in their daily lives and obtain a variety of information from the Internet in many different situations. In searching for information (content) with a mobile device, a user's activity (e.g., moving or stationary) and context (e.g., commuting in the morning or going downtown in the evening) often change, and such changes can affect the user's degree of concentration on the device's display as well as his or her information needs. Therefore, a search system should provide the user with an amount of information suitable for the current activity and a type of information suitable for the current context. In this study, we present the design and implementation of a content search system that considers a mobile user's activity and context, with the goal of reducing the user's operation load during content search. The proposed system switches between two content search systems according to the user's activity: a location-based content search system is activated when the user is stationary (e.g., standing or sitting), while a menu-based content search system is activated when the user is moving (e.g., walking). Both present information according to the user's context: the location-based system presents detailed information via menus and a map according to location-based categories, while the menu-based system presents only a few options so that users can get content easily. Through user experiments, we confirmed that participants could get desired information more easily with this system than with a commercial search system.
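
The activity-based switching reduces to a small dispatch rule. A minimal sketch with assumed activity labels, not the system's actual interface:

```python
# Minimal sketch: choose the search interface from the recognized activity.
MOVING = {"walking", "running"}

def select_interface(activity):
    """Moving users get the lightweight menu UI; stationary users get
    the richer location-based (map) UI."""
    return "menu_based" if activity in MOVING else "location_based"

print(select_interface("walking"))  # menu_based
print(select_interface("sitting"))  # location_based
```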


IEEE Transactions on Visualization and Computer Graphics | 2011

A Wide-View Parallax-Free Eye-Mark Recorder with a Hyperboloidal Half-Silvered Mirror and Appearance-Based Gaze Estimation

Hiroki Mori; Erika Sumiya; Tomohiro Mashita; Kiyoshi Kiyokawa; Haruo Takemura

In this paper, we propose a wide-view parallax-free eye-mark recorder with a hyperboloidal half-silvered mirror and a gaze estimation method suitable for the device. Our eye-mark recorder provides a wide field-of-view video recording of the user's exact view by positioning the focal point of the mirror at the user's viewpoint. The vertical angle of view of the prototype is 122 degrees (elevation and depression angles of 38 and 84 degrees, respectively) and its horizontal angle of view is 116 degrees (nasal and temporal view angles of 38 and 78 degrees, respectively). We implemented and evaluated a gaze estimation method for our eye-mark recorder, adopting an appearance-based approach to support the wide field-of-view. We apply principal component analysis (PCA) and multiple regression analysis (MRA) to determine the relationship between the captured images and their corresponding gaze points. Experimental results verify that our eye-mark recorder successfully captures a wide field-of-view of the user and estimates gaze direction with an angular accuracy of around 2 to 4 degrees.
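
A minimal sketch of the appearance-based pipeline (PCA followed by multiple regression) using scikit-learn; the image resolution, component count, and synthetic training data are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: PCA on eye-image vectors, then linear regression to gaze.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
eye_images = rng.random((200, 32 * 24))        # flattened grayscale eye images
gaze_angles = rng.uniform(-50, 50, (200, 2))   # (azimuth, elevation), degrees

pca = PCA(n_components=20).fit(eye_images)     # appearance subspace
Z = pca.transform(eye_images)
mra = LinearRegression().fit(Z, gaze_angles)   # multiple regression analysis

new_image = rng.random((1, 32 * 24))
print(mra.predict(pca.transform(new_image)))   # estimated (azimuth, elevation)
```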


Symposium on 3D User Interfaces | 2013

Poster: Investigation on the peripheral visual field for information display with real and virtual wide field-of-view see-through HMDs

Naohiro Kishishita; Jason Orlosky; Tomohiro Mashita; Kiyoshi Kiyokawa; Haruo Takemura

Wide-view HMDs allow users to use their peripheral vision in a wearable augmented reality (AR) system. In this study, we evaluate the effectiveness of wide field-of-view (FOV) see-through HMDs. Experimental results show that distributing annotations into peripheral vision decreases their discovery rate, especially at FOVs above approximately 81 degrees. Additionally, adding a blinking effect to annotations has little effect on target discovery rates regardless of viewing angle.


International Conference on Artificial Reality and Telexistence | 2014

Investigation of dynamic view expansion for head-mounted displays with head tracking in virtual environments

Yuki Yano; Kiyoshi Kiyokawa; Andrei Sherstyuk; Tomohiro Mashita; Haruo Takemura

Head-mounted displays (HMDs) are widely used for visual immersion in virtual reality (VR) systems. The narrow field of view (FOV) of most HMD models is acknowledged as a leading cause of insufficient immersion, resulting in suboptimal user performance in various VR tasks as well as early fatigue. Proposed solutions to this problem range from hardware-based approaches to software enhancements of the viewing process. There are three major view expansion techniques: minification (rendering graphics with a larger FOV than the display's FOV), motion amplification (amplifying user head rotation to provide accelerated access to peripheral vision during wide sweeping head movements), and divergence (turning the left and right virtual cameras outwards to increase the combined binocular FOV). Static view expansion has been reported to increase user efficiency in search and navigation tasks; however, the effectiveness of dynamic view expansion is not yet well understood. When applied, view expansion techniques modify the natural viewing process and alter familiar reflex-response loops, which may result in motion sickness and poor user performance. It is therefore vital to evaluate dynamic view expansion techniques in terms of task effectiveness and user workload. This paper details the dynamic view expansion techniques, the experimental settings, and the findings of our user study, in which we investigate the three view expansion techniques, applying them dynamically based on user behavior. We evaluate the effectiveness of these methods quantitatively by measuring and comparing user performance and workload in a target search task, and we collect and compare qualitative feedback from the subjects. Experimental results show that certain levels of minification and motion amplification increase performance by 8.2% and 6.0%, respectively, with comparable or even decreased subjective workload.
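
For illustration, here is a rough sketch of two of these techniques under assumed values; the gains and FOVs are examples, not the study's settings.

```python
# Minimal sketch: minification scale and motion amplification.
import numpy as np

def minification_scale(display_fov_deg, render_fov_deg):
    """Minification renders with a wider camera FOV than the display's,
    shrinking the image by the ratio of the half-angle tangents."""
    return (np.tan(np.radians(display_fov_deg) / 2)
            / np.tan(np.radians(render_fov_deg) / 2))

def amplified_yaw(head_yaw_deg, gain=1.3):
    """Motion amplification: the virtual camera turns faster than the head."""
    return gain * head_yaw_deg

print(minification_scale(60, 80))  # scale < 1: more content per pixel
print(amplified_yaw(30))           # 30 deg head turn -> 39 deg camera turn
```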


International Symposium on Mixed and Augmented Reality | 2012

Subjective evaluations on perceptual depth of stereo image and effective field of view of a wide-view head mounted projective display with a semi-transparent retro-reflective screen

Duc Nguyen Van; Tomohiro Mashita; Kiyoshi Kiyokawa; Haruo Takemura

We report two user studies on a wearable hyperboloidal head-mounted projective display (HHMPD) with a semi-transparent retro-reflective screen. The first experiment revealed that a virtual image is perceived at a similar distance to the real image only when the observation distance is within 2.5 m with monocular vision, whereas this threshold extends beyond 3 m with stereo (binocular) vision. The second experiment revealed that users are able to identify visual stimuli in the periphery of the visual field up to ±50 degrees horizontally while paying attention to a real object in the frontal direction.

Collaboration


Dive into Tomohiro Mashita's collaborations.

Top Co-Authors

Alexander Plopski

Nara Institute of Science and Technology
