
Publication


Featured research published by Leo Miyashita.


International Conference on Computer Graphics and Interactive Techniques | 2016

ZoeMatrope: a system for physical material design

Leo Miyashita; Kota Ishihara; Yoshihiro Watanabe; Masatoshi Ishikawa

Reality is the most realistic representation. We introduce a material display called ZoeMatrope that can reproduce a variety of materials with high resolution, high dynamic range, and light field reproducibility by using the compositing and animation principles of a zoetrope and a thaumatrope. With ZoeMatrope, the quality of the displayed material is equivalent to that of real objects, and the range of expressible materials is broadened by overlaying a set of base materials in a linear combination. ZoeMatrope can also express spatially varying materials, and even augmented materials such as materials with an alpha channel. In this paper, we propose a method for selecting the optimal material set and determining the weights of the linear combination so that a wide range of target materials is reproduced properly. We also demonstrate the effectiveness of this approach with the developed system and show results for various materials.
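
The abstract describes reproducing a target material as a linear combination of base materials with optimized weights. The paper's own selection and weighting method is not reproduced here; the following is a minimal illustrative sketch of one way to fit non-negative mixing weights (each base material can only occupy a non-negative share of the display cycle), assuming each material is summarized by a measured appearance feature vector. All names and shapes are placeholders.

```python
# Illustrative sketch (not the paper's implementation): fit non-negative weights
# for a linear combination of base materials approximating a target appearance.
import numpy as np
from scipy.optimize import nnls

def mix_weights(base_features: np.ndarray, target: np.ndarray) -> np.ndarray:
    """base_features: (d, k) matrix, one column per base material.
    target: (d,) feature vector of the material to reproduce.
    Returns non-negative weights, rescaled so they sum to at most 1
    (each base material gets a fraction of the display cycle)."""
    w, _ = nnls(base_features, target)   # non-negative least squares
    s = w.sum()
    return w / s if s > 1.0 else w

# Toy usage with synthetic data.
rng = np.random.default_rng(0)
B = rng.random((16, 4))                  # 4 base materials, 16-dim appearance samples
t = B @ np.array([0.5, 0.3, 0.0, 0.2])   # target built from known weights
print(mix_weights(B, t))
```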


International Conference on Computer Graphics and Interactive Techniques | 2015

3D motion sensing of any object without prior knowledge

Leo Miyashita; Ryota Yonezawa; Yoshihiro Watanabe; Masatoshi Ishikawa

We propose a novel three-dimensional motion sensing method using lasers. Object motion information is increasingly used in various applications, and the types of targets that must be sensed continue to diversify. Nevertheless, conventional motion sensing systems have limited applicability because they require devices such as accelerometers and gyro sensors to be mounted on the target, or because they are camera-based, which restricts the types of targets that can be detected. Our method solves this problem and enables noncontact, high-speed, deterministic measurement of the velocity of a moving target without any prior knowledge of the target's shape or texture, and it can be applied to any unconstrained, unspecified target. These distinctive features are achieved by a system consisting of a laser range finder, a laser Doppler velocimeter, and a beam controller, combined with a robust 3D motion calculation method. The motion of the target is recovered from fragmentary physical information, such as the distance and speed of the target at the laser irradiation points. From the acquired laser information, our method provides a numerically stable solution based on generalized weighted Tikhonov regularization. Using this technique and a prototype system that we developed, we also demonstrate a number of applications, including motion capture, video game control, and 3D shape integration with everyday objects.
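
The abstract names generalized weighted Tikhonov regularization as the numerically stable solver. Below is a minimal sketch of that generic solve in closed form; how the measurement matrix A and observation vector b are assembled from the laser range and Doppler readings is specific to the paper and is not reproduced here.

```python
# Minimal sketch of a generalized weighted Tikhonov solve, the kind of
# regularized least squares the abstract refers to. A, b, W, L, x0 are
# placeholders; their construction from the laser measurements is paper-specific.
import numpy as np

def generalized_tikhonov(A, b, W=None, L=None, x0=None):
    """Solve min_x (Ax - b)^T W (Ax - b) + ||L (x - x0)||^2 in closed form:
    x = (A^T W A + L^T L)^{-1} (A^T W b + L^T L x0)."""
    m, n = A.shape
    W = np.eye(m) if W is None else W      # measurement weights
    L = np.eye(n) if L is None else L      # regularization operator
    x0 = np.zeros(n) if x0 is None else x0 # prior estimate
    lhs = A.T @ W @ A + L.T @ L
    rhs = A.T @ W @ b + L.T @ L @ x0
    return np.linalg.solve(lhs, rhs)
```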


International Solid-State Circuits Conference | 2017

4.9 A 1ms high-speed vision chip with 3D-stacked 140GOPS column-parallel PEs for spatio-temporal image processing

Tomohiro Yamazaki; Hironobu Katayama; Shuji Uehara; Atsushi Nose; Masatsugu Kobayashi; Sayaka Shida; Masaki Odahara; Kenichi Takamiya; Yasuaki Hisamatsu; Shizunori Matsumoto; Leo Miyashita; Yoshihiro Watanabe; Takashi Izawa; Yoshinori Muramatsu; Masatoshi Ishikawa

High-speed vision systems that combine high-frame-rate imaging with highly parallel signal processing enable instantaneous visual feedback for controlling machines at speeds beyond human visual recognition. Such systems also enable a reduction in circuit scale by using a fast, simple algorithm optimized for high-frame-rate processing [1]. Previous vision systems and chips [1–4] have suffered either from low imaging performance caused by large matrix-based processing element (PE) parallelization [1–3] or from the low functionality of limited-purpose column-parallel PE architectures [4], constraining vision-chip applications.


Optics Express | 2017

Robust 6-DOF motion sensing for an arbitrary rigid body by multi-view laser Doppler measurements

Yunpu Hu; Leo Miyashita; Yoshihiro Watanabe; Masatoshi Ishikawa

We propose a novel method for robust, non-contact, six-degrees-of-freedom (6-DOF) motion sensing of an arbitrary rigid body using multi-view laser Doppler measurements. The proposed method reconstructs the 6-DOF motion from fragmentary velocities on the surface of the target. Unlike conventional contactless motion sensing methods, it is robust to featureless objects and environments. By analyzing the formulation of motion reconstruction from fragmentary velocities, we show that at least three viewpoints are essential for 6-DOF motion reconstruction. Furthermore, we show that the condition number of the measurement matrix can serve as a measure of system accuracy, and we perform numerical simulations to find an appropriate system configuration. The proposed method was implemented using a laser Doppler velocimeter, a galvanometer scanner, and mirrors. We describe the calibration, coordinate system selection, and calculation pipeline, all of which contribute to the accuracy of the proposed system. For evaluation, the proposed system is compared with an off-line chessboard-tracking scheme using a 500 fps camera. Experiments measuring six different motion patterns demonstrate the robustness of the proposed method against different kinds of motion. We also evaluate the method at different distances and velocities. The mean error is less than 1.3 deg/s in rotation and 3.2 mm/s in translation, and the accuracy is robust against changes in distance and velocity. In terms of speed, the throughput of the proposed method is approximately 250 Hz and the latency is approximately 20 ms.
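
For a rigid body, each laser Doppler reading constrains the surface velocity along the beam direction, which is linear in the 6-DOF velocity (v, ω): the measured line-of-sight speed at point p with unit beam direction u is u · (v + ω × p) = [u, p × u] · [v; ω]. The sketch below shows only this standard relation, a least-squares solve, and the condition number the abstract mentions as an accuracy measure; the paper's calibration pipeline and beam geometry are not shown, and the variable names are placeholders.

```python
# Illustrative sketch: recovering 6-DOF rigid-body velocity (v, omega) from
# laser Doppler readings via the standard relation
#   d_i = u_i . (v + omega x p_i) = [u_i, p_i x u_i] . [v; omega].
import numpy as np

def solve_rigid_motion(points, directions, doppler):
    """points: (n, 3) laser spots on the surface, directions: (n, 3) unit beam
    directions, doppler: (n,) measured line-of-sight speeds.
    Returns (v, omega, condition number of the measurement matrix)."""
    A = np.hstack([directions, np.cross(points, directions)])   # (n, 6)
    x, *_ = np.linalg.lstsq(A, doppler, rcond=None)
    return x[:3], x[3:], np.linalg.cond(A)

# At least three well-separated viewpoints are needed for A to be well conditioned,
# consistent with the minimum-viewpoint result stated in the abstract.
```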


Human Factors in Computing Systems | 2018

GLATUI: Non-intrusive Augmentation of Motion-based Interactions Using a GLDV

Yunpu Hu; Leo Miyashita; Yoshihiro Watanabe; Masatoshi Ishikawa

We present GLATUI, a system that non-intrusively augments physical interaction between humans and everyday objects with rich expressions extracted directly from the motion. The proposed system measures velocities on the human skin or on the surface of any object and turns them into a user interface without any pre-instrumentation or assumptions about appearance. Specifically, the acquisition of three basic motion elements, i.e., rigid-body motion, deformation, and vibration, was investigated. These elements enable natural, high-degree-of-freedom interactions. In the proposed system, the three motion elements are simultaneously estimated from on-surface velocities, which are measured using a galvano-scanning laser Doppler vibrometer (GLDV). A method for measuring rigid-body motion, deformation, and vibration is proposed. In addition, we demonstrate several applications of the proposed system, including an on-hand UI with hand gestures, a vibration-based object recognition application, and a conceptual tangible UI. The recognition rate and context awareness of the proposed method in these applications were also evaluated.
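
One of the listed applications is vibration-based object recognition from the measured on-surface velocities. The paper's actual feature extraction and classifier are not described here; the following is a generic, illustrative spectral feature one could compute from a laser-Doppler velocity trace, with the sample rate, window, and bin count all assumed.

```python
# Illustrative only: a generic spectral feature for vibration-based recognition
# from an on-surface velocity trace. This is not the paper's pipeline.
import numpy as np

def vibration_feature(velocity: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Return a log-magnitude spectrum of the velocity signal, pooled into
    n_bins frequency bands, usable as input to any standard classifier."""
    windowed = velocity * np.hanning(len(velocity))      # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))             # magnitude spectrum
    bands = np.array_split(spectrum, n_bins)             # coarse frequency pooling
    return np.log1p(np.array([b.mean() for b in bands]))
```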


Sensors | 2018

Design and Performance of a 1 ms High-Speed Vision Chip with 3D-Stacked 140 GOPS Column-Parallel PEs

Atsushi Nose; Tomohiro Yamazaki; Hironobu Katayama; Shuji Uehara; Masatsugu Kobayashi; Sayaka Shida; Masaki Odahara; Kenichi Takamiya; Shizunori Matsumoto; Leo Miyashita; Yoshihiro Watanabe; Takashi Izawa; Yoshinori Muramatsu; Yoshikazu Nitta; Masatoshi Ishikawa

We have developed a high-speed vision chip using 3D stacking technology to address the increasing demand for high-speed vision chips in diverse applications. The chip is a 1/3.2-inch, 1.27 Mpixel, 500 fps (0.31 Mpixel, 1000 fps with 2 × 2 binning) vision sensor with 3D-stacked column-parallel Analog-to-Digital Converters (ADCs) and 140 Giga Operations per Second (GOPS) programmable Single Instruction Multiple Data (SIMD) column-parallel PEs for new sensing applications. The 3D-stacked structure and column-parallel processing architecture achieve high sensitivity, high resolution, and high-accuracy object positioning.
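
The headline figures imply a per-pixel compute budget that can be checked with simple arithmetic (our back-of-the-envelope estimate, not a figure from the paper): 140 GOPS spread over the stated resolution and frame rates.

```python
# Back-of-the-envelope check of the headline figures (our arithmetic, not from
# the paper): SIMD operations available per pixel per frame.
gops = 140e9
full   = gops / (1.27e6 * 500)    # ~220 ops/pixel/frame at 1.27 Mpixel, 500 fps
binned = gops / (0.31e6 * 1000)   # ~450 ops/pixel/frame at 0.31 Mpixel, 1000 fps
print(round(full), round(binned))
```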


International Conference on Computer Graphics and Interactive Techniques | 2016

ZoeMatrope for realistic and augmented materials

Leo Miyashita; Kota Ishihara; Yoshihiro Watanabe; Masatoshi Ishikawa

Reality is the most realistic representation. We introduce a material display called ZoeMatrope that can reproduce a variety of materials with high resolution, high dynamic range, and high light field fidelity by using real objects and characteristics of the human visual system. ZoeMatrope can also create augmented materials such as a mixture of wood and clear glass, a material with an alpha channel, and a material that looks red when illuminated with light source A but blue when illuminated with light source B. In this paper, we outline the ZoeMatrope system and show results for various materials.


Intelligent Robots and Systems | 2015

High-speed image rotator for blur-canceling roll camera

Leo Miyashita; Yoshihiro Watanabe; Masatoshi Ishikawa

We developed an optical high-speed image rotation controller and realized a high-speed roll camera that can cancel the rotational motion blur of a rotating target. The system is composed of a hollow motor, a Dove prism, and a high-speed camera, and it controls the optical image rotation to follow the target's rotation by using high-speed image processing. The so-called optical lever formed by the Dove prism also worked effectively for a high-speed rotating target, and our prototype system suppressed rotational motion blur to 0.125° at 1420 r/min.
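
The "optical lever" comes from a basic property of the Dove prism: the transmitted image rotates by twice the prism's own rotation angle, so the prism only needs to spin at half the target's roll rate to keep the image stationary. A minimal sketch of that relation follows, assuming the target's roll rate is available from the high-speed image processing; the motor interface and any feedback gains are placeholders.

```python
# Minimal sketch of the Dove-prism "optical lever": the image rotates at twice
# the prism angle, so commanding the prism at half the target roll rate cancels
# the rotational blur. Controller details and motor interface are not shown.
def prism_command(target_roll_rate_dps: float) -> float:
    """Angular velocity (deg/s) to command to the hollow motor holding the prism."""
    return 0.5 * target_roll_rate_dps

# e.g. a target spinning at 1420 r/min = 8520 deg/s needs the prism at 4260 deg/s.
print(prism_command(1420 * 360 / 60))
```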


International Conference on 3D Vision | 2014

Rapid SVBRDF Measurement by Algebraic Solution Based on Adaptive Illumination

Leo Miyashita; Yoshihiro Watanabe; Masatoshi Ishikawa

In this paper, we propose an algebraic solution for rapid SVBRDF measurement. The algebraic approach requires only a few reflectance samples to obtain the parameters of the physically based Cook-Torrance model. This solution, however, also imposes constraints on the light and normal directions during acquisition. To meet these constraints, we developed a system that changes the illumination according to the target's 3D shape at high speed. As a result, the proposed method provides BRDF parameters at each texel without optimization or over-sampling. We demonstrated rapid measurement with real objects that do not have uniform reflectance and confirmed the validity of the approach by comparison with conventional methods.
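
For reference, the forward model whose per-texel parameters the paper recovers is the standard Cook-Torrance BRDF. The sketch below evaluates a common form of it (Beckmann distribution, Schlick Fresnel, V-cavity geometry term); the paper's closed-form inversion from a few samples is not shown, and the specific term choices here are assumptions rather than the paper's exact formulation.

```python
# Sketch of a standard Cook-Torrance evaluation (diffuse + microfacet specular).
# Parameters per texel: diffuse albedo kd, specular scale ks, roughness m, Fresnel F0.
import numpy as np

def cook_torrance(n, l, v, kd, ks, m, f0):
    """n, l, v: unit normal, light, and view vectors; returns the BRDF value."""
    h = (l + v) / np.linalg.norm(l + v)          # half vector
    nl, nv, nh, vh = n @ l, n @ v, n @ h, v @ h
    if nl <= 0 or nv <= 0:
        return 0.0
    # Beckmann microfacet distribution
    cos2 = nh * nh
    d = np.exp((cos2 - 1.0) / (cos2 * m * m)) / (np.pi * m * m * cos2 * cos2)
    # Schlick Fresnel approximation
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5
    # Geometric attenuation (V-cavity model)
    g = min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)
    return kd / np.pi + ks * d * f * g / (4.0 * nl * nv)

# Example: viewing straight down a surface lit from 45 degrees.
n = np.array([0.0, 0.0, 1.0])
l = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
v = np.array([0.0, 0.0, 1.0])
print(cook_torrance(n, l, v, kd=0.6, ks=0.4, m=0.3, f0=0.04))
```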


International Conference on Multimedia and Expo | 2018

Portable Lumipen: Dynamic SAR in Your Hand

Leo Miyashita; Tomohiro Yamazaki; Kenji Uehara; Yoshihiro Watanabe; Masatoshi Ishikawa
