Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jun Miyazaki is active.

Publication


Featured research published by Jun Miyazaki.


international conference on data engineering | 2005

Building a smart meeting room: from infrastructure to the video gap (research and open issues)

Alejandro Jaimes; Jun Miyazaki

At FXPAL Japan we have built an (experimental) Smart Conference Room (SCR) that contains multiple cameras, microphones, displays, and capture devices. Based on our experience, in this paper we discuss research and open issues in constructing SCRs like the one built at FXPAL for the purpose of automatic content analysis. Our discussion is grounded on a novel conceptual meeting model that consists of physical (from layout to cameras), conceptual (meeting types, actors), sensory (audio-visual capture), and content (syntax and semantics) components. We also discuss storage, retrieval, and deployment issues.


advances in multimedia | 2004

Visual trigger templates for knowledge-based indexing

Alejandro Jaimes; Qinhui Wang; Noriji Kato; Hitoshi Ikeda; Jun Miyazaki

We present an application to create binary Visual Trigger Templates (VTT) for automatic video indexing. Our approach is based on the observation that videos captured with fixed cameras have specific structures that depend on world constraints. Our system allows a user to graphically represent such constraints to automatically recognize simple actions or events. VTTs are constructed by manually drawing rectangles to define trigger spaces: when elements (e.g., a hand, a face) move inside the trigger spaces defined by the user, actions are recognized. For example, a user can define a raise hand action by drawing two rectangles: one for the face and one for the hand. Our approach uses motion, skin, and face detection algorithms. We present experiments on the PETS-ICVS dataset and on our own dataset to demonstrate that our system constitutes a simple but powerful mechanism for meeting video indexing.
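The trigger-space idea described above can be sketched as a simple containment test. The rectangles, element names, and detection coordinates below are invented for illustration and do not come from the paper:

```python
# Sketch of a binary Visual Trigger Template (VTT): an action is
# recognized when detected elements (e.g. face, hand) fall inside
# user-drawn trigger rectangles. All coordinates are hypothetical.

def inside(point, rect):
    """True if point (x, y) lies within rect = (x0, y0, x1, y1)."""
    x, y = point
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def trigger(template, detections):
    """template maps element name -> trigger rectangle;
    detections maps element name -> detected center point.
    The action fires only when every element is in its trigger space."""
    return all(
        name in detections and inside(detections[name], rect)
        for name, rect in template.items()
    )

# A hypothetical "raise hand" template: one rectangle for the face,
# one above it for the raised hand.
raise_hand = {"face": (100, 200, 200, 300), "hand": (100, 50, 200, 150)}

print(trigger(raise_hand, {"face": (150, 250), "hand": (150, 100)}))  # True
print(trigger(raise_hand, {"face": (150, 250), "hand": (150, 400)}))  # False
```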


multimedia signal processing | 2004

Interactive visualization of multi-stream meeting videos based on automatic visual content analysis

Alejandro Jaimes; Naofumi Yoshida; Kazumasa Murai; Kazutaka Hirata; Jun Miyazaki

We present a new approach to segment and visualize informally captured multi-stream meeting videos. We process the visual content in each stream individually by analyzing the differences between frames in each sequence to find change areas. These results are combined with face detection to determine visual activity in each of the streams. We then combine the activity scores from multiple streams and automatically generate a 3D representation of the video. Our representation allows the user to obtain an at-a-glance view of the video at different granularities of activity, view multiple streams simultaneously, and select particular points in time for viewing. We present experiments that suggest that low-level visual analysis can be effective for finding highlights that can be used for browsing multi-stream meeting videos.
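The per-stream activity analysis can be sketched as frame differencing followed by a cross-stream combination. The threshold, the mean-based combination, and the toy integer frames are illustrative assumptions, not the paper's actual formulas:

```python
# Sketch of per-stream visual activity scoring by frame differencing.
# Small integer grids stand in for grayscale frames; threshold and
# the mean combination rule are illustrative choices.

def activity_score(prev, curr, threshold=10):
    """Fraction of pixels whose absolute change exceeds `threshold`."""
    changed = sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if abs(p - c) > threshold
    )
    total = sum(len(row) for row in curr)
    return changed / total

def combine_streams(scores):
    """Combine per-stream activity timelines into one score per instant."""
    return [sum(instant) / len(instant) for instant in zip(*scores)]

frame_a = [[0, 0], [0, 0]]
frame_b = [[0, 0], [0, 255]]  # one pixel changed strongly
print(activity_score(frame_a, frame_b))  # 0.25
```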


international conference on pattern recognition | 2006

Proposal of recordable pointer: Pointed position measurement by projecting interference concentric circle pattern with a pointing device

Yasuji Seko; Yoshinori Yamaguchi; Yasuyuki Saguchi; Jun Miyazaki; Hiroyasu Koshimizu

We propose a new pointing device that can measure pointed positions by processing the interference concentric circles projected by the device. The pointing device has a donut-shaped lens designed both to turn the laser source into two virtual sources that form an optical interference pattern and to project the concentric circle pattern widely. Two image sensors on the projected side capture small parts of the concentric circles, and their common center coordinate, which is the pointed position of the device, is calculated from two normal lines to the arcs of the circles. In practice, we succeeded in measuring the pointed position accurately by real-time processing of the widely projected concentric circle patterns. We demonstrated mouse cursor operation on a large screen with the pointing device, and also used it as a real-object-based user interface that shows related information about real objects when they are pointed at.
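The center-from-two-normals step reduces to intersecting two 2D lines: each sensor contributes a point on an arc and the normal direction toward the circles' common center. The points and directions below are made up for illustration:

```python
# Sketch of recovering the pointed position as the intersection of two
# normal lines to observed arcs of the concentric circles. Each line is
# given as a point on an arc plus the normal direction toward the center.

def intersect(p1, d1, p2, d2):
    """Intersect 2D lines p1 + t*d1 and p2 + s*d2, returning (x, y)."""
    (x1, y1), (dx1, dy1) = p1, d1
    (x2, y2), (dx2, dy2) = p2, d2
    det = dx1 * dy2 - dy1 * dx2
    if det == 0:
        raise ValueError("normals are parallel; center is undetermined")
    t = ((x2 - x1) * dy2 - (y2 - y1) * dx2) / det
    return (x1 + t * dx1, y1 + t * dy1)

# Two arcs of circles centered at (5, 5): points on the arcs with
# normals pointing at that center.
center = intersect((5, 0), (0, 1), (0, 5), (1, 0))
print(center)  # (5.0, 5.0)
```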


symposium on applications and the internet | 2005

An Automatic and Immediate Metadata Extraction Method by Heterogeneous Sensors for Meeting Video Streams

Naofumi Yoshida; Jun Miyazaki

In this paper, we discuss an automatic and immediate metadata extraction method that uses heterogeneous sensors for meeting video streams. The main feature of our method is that metadata are extracted immediately and automatically by giving semantics to combinations of heterogeneous sensor data, as soon as the target videos are captured. We describe the feasibility of our method through several experimental results.
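The idea of giving semantics to sensor combinations can be sketched as a rule table mapping sets of simultaneously active sensors to metadata labels. The sensor names, labels, and rules below are invented for illustration:

```python
# Sketch: a rule table gives semantics to combinations of heterogeneous
# sensors by mapping sets of active sensors to metadata labels for the
# meeting video stream. Sensor names and labels are hypothetical.

RULES = [
    ({"microphone", "motion"}, "discussion"),
    ({"whiteboard_pen"}, "writing-on-whiteboard"),
    ({"projector", "microphone"}, "presentation"),
]

def extract_metadata(active_sensors):
    """Return labels for every rule whose sensor set is fully active."""
    active = set(active_sensors)
    return [label for sensors, label in RULES if sensors <= active]

print(extract_metadata({"projector", "microphone", "motion"}))
# ['discussion', 'presentation']
```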


international conference on pattern recognition | 2006

Firefly capturing method: Motion capturing by monocular camera with large spherical aberration of lens and Hough-transform-based image processing

Yasuji Seko; Yasuyuki Saguchi; Hiroyuki Hotta; Jun Miyazaki; Hiroyasu Koshimizu

We demonstrate a new motion capturing method that uses a monocular camera with a lens of large spherical aberration to measure the 3D positions of point light sources attached to an object in real time, without any sequential lighting. Point light sources are transformed into circle patterns by the large spherical aberration of the lens mounted in the camera. The diameter and center position of a circle pattern give the distance and direction to the light source, yielding its 3D position. Circle patterns are extracted by Hough-transform-based video image processing even when they overlap each other. We tracked the circle patterns by predicting their next positions with a Kalman filter that incorporates the acceleration of movement. By combining these processing techniques, we succeeded in demonstrating real-time motion capture of several LEDs, shown in 3D graphics.
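The diameter-to-distance geometry can be sketched as below. The inverse-proportional mapping and the constants `k` and `focal_px` are hypothetical stand-ins for a real camera calibration, not values from the paper:

```python
# Sketch of the geometric idea: under large spherical aberration, a
# point light source appears as a circle whose diameter encodes its
# distance and whose center encodes its direction. The linear model
# and calibration constants here are assumptions for illustration.

def circle_to_3d(center_px, diameter_px, k=1000.0, focal_px=500.0):
    """Map a detected circle to a 3D position in camera coordinates.
    Assumes distance is inversely proportional to diameter (constant k)
    and direction follows a pinhole model with focal length focal_px."""
    cx, cy = center_px
    z = k / diameter_px       # distance from the circle's diameter
    x = cx * z / focal_px     # direction from the circle's center
    y = cy * z / focal_px
    return (x, y, z)

print(circle_to_3d((10.0, 0.0), 20.0))  # (1.0, 0.0, 50.0)
```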


ieee international conference on information visualization | 2003

CandyTop Interface: a visualization method with positive attention for growing multimedia documents

Naofumi Yoshida; Jun Miyazaki; Akira Wakita

We present the CandyTop Interface, a visualization method with positive attention for growing multimedia documents. Our method visualizes growing documents by laying out changes to multimedia documents along a time line in three-dimensional space. Its main feature is to draw positive attention to key changes by visualizing multimedia documents with the priority of their changes. With our method, we can realize multimedia systems that provide an at-a-glance view of multiple versions of multimedia documents. We discuss the feasibility of our method through applications to visualizing growing documents under collaborative editing in virtual spaces and multi-view video streams of technical meetings.


symposium on applications and the internet | 2004

An automatic generation method of 3D visualization for holistic and detail relationships on e-learning environment

Naofumi Yoshida; Kazutaka Hirata; Jun Miyazaki

In this paper we discuss an automatic generation method of 3D visualizations for relationships among learning materials, and its feasibility and effectiveness in an e-learning environment. We have designed a visualization method for multimedia documents that shows both logical and temporal relationships in three-dimensional space. An educational effect can be expected from visualizing the holistic and detailed relationships of learning materials in an e-learning environment. We present an automatic generation system for the 3D interface and evaluate its feasibility and effectiveness through a user study of the visualization method.


international conference on computer graphics and interactive techniques | 2003

Candytop: a Web3D interface to visualize growth of multimedia documents

Akira Wakita; Naofumi Yoshida; Jun Miyazaki; Hiroaki Chiyokura

Candytop is a Web3D interface that visualizes the growth of multimedia documents along a time line. Users can easily grasp the relations among documents and catch up on the context behind the projects. We use the X3D VRML97 Profile for modeling and visualization.


IWDM '89 Proceedings of the Sixth International Workshop on Database Machines | 1989

A new version of a parallel production system machine, MANJI-II

Jun Miyazaki; Kenji Takeda; Hideharu Amano; Hideo Aiso

Parallel systems for OPS5 have been developed. In such systems, parallel implementations of the Rete algorithm are adopted because the number of pattern matchings is minimized in the original algorithm. However, conventional approaches for parallel Rete algorithms require special hardware to cope with the dynamic process allocation and frequent communication. Dedicated machines are necessary for such methods.
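The claim that Rete minimizes pattern matchings can be illustrated with a toy match cache: partial matches are stored per condition, so a new working-memory element touches each condition once rather than forcing a re-match of every rule against the whole working memory. This is a simplification for illustration, not the MANJI-II implementation:

```python
# Toy illustration of Rete-style match caching: each condition node
# keeps a memory of matching working-memory elements, so adding one
# element evaluates each condition once and reuses existing memories.

class ConditionNode:
    """Alpha-memory-like node caching the elements matching one test."""
    def __init__(self, test):
        self.test = test
        self.memory = set()
        self.evaluations = 0  # how many times `test` actually ran

    def add(self, element):
        self.evaluations += 1
        if self.test(element):
            self.memory.add(element)

nodes = [
    ConditionNode(lambda e: e.startswith("goal")),
    ConditionNode(lambda e: e.startswith("block")),
]

for element in ["goal-1", "block-a", "block-b"]:
    for node in nodes:
        node.add(element)

# One new element costs one evaluation per condition; prior matches
# stay cached in each node's memory.
nodes[1].add("block-c")
print(sorted(nodes[1].memory))  # ['block-a', 'block-b', 'block-c']
```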

Collaboration


Dive into Jun Miyazaki's collaborations.

Top Co-Authors
