
Publication


Featured research published by Masayuki Nakajima.


cyberworlds | 2014

A New Virtual Museum Equipped with Automatic Video Content Generator

Masaki Hayashi; Steven Bachelder; Masayuki Nakajima; Akihiko Iguchi

Virtual museum services have been launched in many places in recent years, owing to advances in video and network technology. In a virtual museum, people primarily experience the prepared content actively, using a mouse, a touch panel or specially designed tangible devices. In a real museum space, on the other hand, people appreciate the artifacts passively, walking around the space freely and without stress. The virtual museum can thus be said to push people toward active interaction compared with the real museum. We have been studying and developing a new type of virtual museum that lets people engage with the space in both an active and a passive way by implementing various new functions. In this work, we developed a new virtual museum equipped with a video content generator that uses a virtual exhibition space modeled in 3D computer graphics (CG). The video content is created in real time directly from the 3DCG museum model, with appropriate visual and audio effects added, such as camerawork, superimposed text, synthesized voice narration and background music. Since the system runs inside the 3DCG space, a user can easily switch back and forth, with a wheel mouse, between watching the video content passively and walking through the space actively. In this paper, we first survey major virtual museums around the world. We then describe our method: 1) a specially designed walkthrough algorithm, 2) the video content generator based on the 3DCG museum space, and 3) the seamless integration of 1) and 2). Finally, we describe our working prototype, followed by conclusions and future plans.
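
The paper's abstract does not give implementation details; the following Python sketch is illustrative only, with hypothetical names, showing one way the described two-mode behaviour (automatic camerawork in passive video mode, free movement in active walkthrough mode, switched by a wheel-mouse action) could be organised.

```python
# Illustrative sketch only: a minimal two-mode viewer in the spirit of the
# paper's passive-video / active-walkthrough switching. All names are
# hypothetical; the actual system drives a 3DCG museum model, not shown here.

from dataclasses import dataclass

@dataclass
class Camera:
    x: float = 0.0
    z: float = 0.0

class MuseumViewer:
    def __init__(self, tour_path):
        self.camera = Camera()
        self.tour_path = tour_path      # pre-generated camera positions for the video mode
        self.tour_index = 0
        self.mode = "video"             # "video" = passive, "walkthrough" = active

    def on_wheel(self, delta):
        # A wheel-mouse action lets the user drop out of the generated video;
        # here the first wheel event switches modes, later ones move the camera.
        if self.mode == "video":
            self.mode = "walkthrough"
        else:
            self.camera.z += 0.5 * delta   # simple forward/backward walkthrough

    def resume_video(self):
        # Hand control back to the automatically generated video content.
        self.mode = "video"

    def tick(self):
        # Advance the automatic camerawork only while in passive video mode.
        if self.mode == "video" and self.tour_index < len(self.tour_path) - 1:
            self.tour_index += 1
            self.camera.x, self.camera.z = self.tour_path[self.tour_index]

viewer = MuseumViewer(tour_path=[(0, 0), (0, 1), (1, 2)])
viewer.tick()           # camerawork advances automatically
viewer.on_wheel(+1)     # user takes over and walks freely
viewer.resume_video()   # user returns to the generated video
```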


international symposium on broadband multimedia systems and broadcasting | 2016

An attempt of mimicking TV news program with full 3DCG — Aiming at the Text-Generated TV system

Masaki Hayashi; Yoshiaki Shishikui; Steven Bachelder; Masayuki Nakajima

We have been studying and developing a new television system based on delivering text-based scripts that describe the visual content over the Internet, instead of transmitting the visual content itself as video data. This Text-Generated TV system is realized with a technology called TVML (TV program Making Language), which automatically creates TV-program-like computer graphics animation from a script. One of the problems in our development is how to establish a TV program production workflow with TVML that yields content of the same quality as a real TV broadcast. Our approach is to analyze a real TV program and mimic it with TVML. We first choose a reference TV program and analyze it from many aspects. Based on the acquired knowledge, we then transfer those findings to a CG production made with TVML, reproducing the original TV show as faithfully as possible. The objective of this attempt is to reveal the essential elements needed to obtain a full-CG TV program of broadcast quality. In this paper, we describe the process of mimicking the news show, the resulting TVML production compared with the original, and the results of an evaluation we conducted with test viewers. Finally, we discuss the pros and cons of CG production applied to news shows and shed light on the remaining problems.
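
The abstract does not reproduce any TVML scripts. As a purely hypothetical illustration of the idea of text-generated TV (not actual TVML syntax), the sketch below turns a structured news item into a sequence of direction-style commands that a TVML-like engine could play back.

```python
# Hypothetical sketch, not real TVML: it only illustrates converting a news
# item into direction commands (superimposed title, camera shot, speech).

def news_item_to_script(headline, summary, presenter="A"):
    lines = []
    lines.append(f'title: show(text="{headline}")')            # superimposed headline
    lines.append(f'camera: switch(shot="bust", target="{presenter}")')
    lines.append(f'character: talk(name="{presenter}", text="{summary}")')
    lines.append('title: hide()')
    return "\n".join(lines)

print(news_item_to_script(
    headline="New virtual museum opens",
    summary="The museum generates its own video tours from a 3DCG model.",
))
```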


cyberworlds | 2013

Interactive TV by Text-To-Vision

Masaki Hayashi; Steven Bachelder; Matéo Grippon; Masayuki Nakajima

We have been developing T2V (Text-To-Vision) technology, which produces CG animation from text. The technology is built on the TVML (TV program Making Language) core engine, which generates TV-program-like animation from text input using real-time CG, voice synthesis and related techniques. Recently, we ported the TVML engine to the game engine UNITY. The TVML SDK on UNITY is provided to game developers, enabling them to build various interactive applications with T2V technology. Our aim is to integrate two different media, TV and games, into a new type of media we call gaming TV, made possible by T2V. In this paper, as one working example of gaming TV, we introduce an interactive TV application built with the SDK. In this application, a user can interrupt the ongoing show at any time to have a dialogue with the actors in the show, then return to the point of interruption and resume the show. Through the user experience of this working development, we have confirmed the potential of gaming TV.
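
The interrupt-and-resume behaviour described above is not specified further in the abstract; the following is a minimal sketch, with hypothetical names, of how a show playlist could remember its position across a user-initiated dialogue.

```python
# Illustrative sketch only: an ongoing show modelled as a list of scenes,
# with a saved resume point for the interrupt-and-resume interaction.

class InteractiveShow:
    def __init__(self, scenes):
        self.scenes = scenes        # ordered list of scene identifiers
        self.position = 0
        self.resume_point = None

    def play_next(self):
        if self.position < len(self.scenes):
            scene = self.scenes[self.position]
            self.position += 1
            return scene
        return None

    def interrupt(self):
        # Remember where the show was stopped so a dialogue can be inserted.
        self.resume_point = self.position

    def resume(self):
        # Return to the scene that was about to play when the user interrupted.
        if self.resume_point is not None:
            self.position = self.resume_point
            self.resume_point = None

show = InteractiveShow(["opening", "topic_1", "topic_2", "closing"])
show.play_next()            # "opening"
show.interrupt()            # user starts a dialogue with an actor
# ... dialogue scenes generated from text would run here ...
show.resume()
show.play_next()            # "topic_1": the show continues where it left off
```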


cyberworlds | 2013

Engagement in Computer and Video Games

Steven Bachelder; Rajesh Santhanam; Masaki Hayashi; Masayuki Nakajima

This poster gives an introductory summary of one approach within a multi-nodal framework: the study of physiological responses during the in-game player experience as an indicator of engagement. Other components of the multi-nodal approach not covered in this poster are literature studies, the study of player accounts of the play experience, and the study of in-game metric data in computer and video games. The approach described here is the real-time acquisition of physiological responses generated during play sessions. Data are acquired through eye tracking with a Tobii X2-30 eye tracker, finger temperature and galvanic skin response, EEG with the Emotiv EPOC neuroheadset and its 3D brain activity map, and heart rate variability measurement. The real-time physiological data from the play session are captured and displayed together with the game on a 4K, extreme high-resolution 3840 × 2160 pixel display system. This allows users to record the play session and the acquired data on a single display and provides optimized access to synchronized data when recorded sessions are later reviewed for analysis. The display has four times the resolution of conventional HD displays and is driven by a generic quad-core PC equipped with an AMD V7900 graphics card running the Microsoft Windows 7 OS. The four DisplayPort outputs of the card are converted to four DVI signals, which are connected to the DM-3410-A 4K monitor manufactured by Astrodesign Inc. The physiological data acquisition is operated by the PC and displayed in seven fields on the screen, as shown in Fig. 1. This method of studying physiological responses of the in-game player experience provides a large spectrum of data, which, together with the other approaches in the multi-nodal framework, can help researchers and designers gain further knowledge about correlations between player engagement and computer and video games.
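
The poster does not describe the acquisition software itself. As a generic sketch only, the snippet below shows one way several sensor streams could be kept on a common session clock so that they can later be reviewed alongside the recorded play session; stream names and values are hypothetical.

```python
# Sketch only: timestamp-synchronised logging of multiple sensor streams
# relative to the start of a play session.

import time
from collections import defaultdict

class SessionRecorder:
    def __init__(self):
        self.start = time.monotonic()
        self.streams = defaultdict(list)   # stream name -> [(t_seconds, value), ...]

    def log(self, stream, value):
        # Timestamp every sample against the same session clock.
        self.streams[stream].append((time.monotonic() - self.start, value))

rec = SessionRecorder()
rec.log("gaze_xy", (812, 344))        # e.g. an eye-tracker sample
rec.log("skin_conductance", 4.7)      # e.g. galvanic response in the fingers
rec.log("heart_rate", 71)
print({name: len(samples) for name, samples in rec.streams.items()})
```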


cyberworlds | 2015

Open Framework Facilitating Automatic Generation of CG Animation from Web Site

Masaki Hayashi; Steven Bachelder; Masayuki Nakajima

We have been studying and developing a system that automatically generates computer graphics animation (CGA) by processing the HTML data of a Web site. In this paper, we propose an open framework to facilitate this. The framework runs entirely on the server side: it obtains the HTML, converts it into a script describing the CGA story, and keeps the script updated. On the client side, a user accesses the script on the server and visualizes it with real-time CG characters, synthesized voice, camerawork, superimposed text, sound file playback and so on. We have built the framework on the server and deployed working engines that convert Web sites into CGAs. This paper describes the framework in detail and presents example projects providing an automatically generated news show, a talk show and a visualization of a personal blog.
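
The actual conversion engines are not published with the paper. As a minimal sketch under that caveat, the following shows the general shape of the server-side step, i.e. extracting text from HTML and emitting a simple show script that a client-side CG player could visualize; the script format is invented for illustration.

```python
# Illustrative sketch: extract paragraph text from HTML and emit one
# "talk" line per paragraph. A real engine would also add camerawork,
# superimposed text and sound, as described in the abstract.

from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        self.in_p = (tag == "p")

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.in_p and data.strip():
            self.paragraphs.append(data.strip())

def html_to_show_script(html):
    parser = ParagraphExtractor()
    parser.feed(html)
    return [f'character: talk(text="{p}")' for p in parser.paragraphs]

html = "<html><body><p>First news item.</p><p>Second news item.</p></body></html>"
print("\n".join(html_to_show_script(html)))
```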


cyberworlds | 2013

Virtual Museum with 3D Artifacts

Masaki Hayashi; Steven Bachelder; Clément Lefeuvre; Cyril Le Bras; Masayuki Nakajima

Summary form only given. We have been researching and developing a virtual museum on an extreme high-definition real-time computer graphics system with 4K and 8K (Super Hi-Vision) resolution. We first developed a functioning test system that exhibited Japanese Ukiyoe artifacts at 4K resolution. In this work, we have enhanced the virtual museum with the capability to exhibit 3D objects such as statues and archaeological items. Achieving this requires the following technologies: 1) acquisition of 3D objects with very high-resolution digitization of both geometry and texture data, and 2) a new representation technique that lets a user walk through the museum space and appreciate the artifacts with little stress. For 1), we established a digitization pipeline that converts a real object into a virtual object in the virtual space. For 2), we created a new walkthrough algorithm based on a human gesture sensing device, which enables the user to walk through the space by hand gesture. We have developed and demonstrated a prototype of the new virtual museum and confirmed its functionality and usability.
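
The specific gesture mapping used in the prototype is not described in the abstract. As an illustrative sketch only, with hypothetical parameters, the following shows one common way a sensed hand offset could be turned into camera velocity for a gesture-based walkthrough, including a dead zone so the viewer can stand still in front of an artifact.

```python
# Illustrative sketch: map a normalised hand offset (-1..1 per axis) to
# camera velocity, with a dead zone around the rest position.

def hand_to_camera_velocity(hand_x, hand_z, dead_zone=0.1, gain=1.5):
    def shape(v):
        if abs(v) < dead_zone:
            return 0.0                      # hand near rest: camera stays still
        return gain * (v - dead_zone if v > 0 else v + dead_zone)

    return shape(hand_x), shape(hand_z)     # (strafe, forward) velocity

print(hand_to_camera_velocity(0.05, 0.6))   # hand nearly centred sideways: no strafe, move forward
```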


Multimedia Tools and Applications | 2014

[Paper] T2V: New Technology of Converting Text to CG Animation

Masaki Hayashi; Seiki Inoue; Mamoru Douke; Narichika Hamaguchi; Hiroyuki Kaneko; Steven Bachelder; Masayuki Nakajima


Multimedia Tools and Applications | 2016

[Paper] Virtual Museum Equipped with Automatic Video Content Generator

Masaki Hayashi; Steven Bachelder; Masayuki Nakajima; Akihiko Iguchi


Multimedia Tools and Applications | 2013

[Paper] LEGO Builder: Automatic Generation of LEGO Assembly Manual from 3D Polygon Model

Sumiaki Ono; Alexis Andre; Youngha Chang; Masayuki Nakajima


CyberWorlds2013, Oct. 21-23, 2013 | 2013

Interactive TV by Text-To-Vision : Application Using TVML SDK on UNITY

Masaki Hayashi; Steven Bachelder; Matéo Grippon; Masayuki Nakajima

Collaboration


Dive into Masayuki Nakajima's collaborations.

Top Co-Authors


Alexis Andre

Tokyo Institute of Technology

Xiaohua Zhang

Hiroshima Institute of Technology
