Kenichiro Tanaka
Osaka University
Publications
Featured research published by Kenichiro Tanaka.
Applied Surface Science | 1990
Koji Sumitomo; Kenichiro Tanaka; Yosuke Izawa; Itsuo Katayama; Fumiya Shoji; Kenjiro Oura; Teruo Hanawa
Using the technique of time-of-flight-mode impact-collision ion-scattering spectroscopy (TOF-ICISS), we have studied the structure of two kinds of Ag thin films deposited onto Si(111)7×7 substrates. We show that (1) the RT-deposited Ag thin film of 10 ML thickness consists of type-A and type-B domains of Ag(111), with type-B rotated 180° about the surface normal; (2) the experimental ICISS angle scans for the Si(111)√3×√3R30°-Ag surface cannot be explained by the embedded-Ag models proposed so far; (3) an Ag honeycomb structure residing on top of Si is unlikely for the √3×√3-Ag surface; and (4) both the Ag-trimer and the HCT models, with Ag residing on top of Si, are more likely, though a clear preference between these two models has not yet been obtained.
International Conference on Computational Photography | 2013
Kenichiro Tanaka; Yasuhiro Mukaigawa; Yasuyuki Matsushita; Yasushi Yagi
The inner structures of an object can be measured by capturing transmissive images. However, the recorded images of a translucent object tend to be unclear due to strong scattering of light inside the object. In this paper, we propose a descattering approach based on Parallel High-frequency Illumination. We show that the original high-frequency illumination method and its various extensions can be uniformly defined as a separation of overlapped and non-overlapped light rays. We also show that transmissive light rays can be kept from overlapping one another by constructing a parallel projection/measurement system for both illumination and observation. We have developed a measurement system consisting of a camera and a projector with telecentric lenses, and have evaluated the descattering effect by extracting transmissive light rays.
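The overlapped/non-overlapped separation this paper generalizes reduces, per pixel, to the classical high-frequency illumination rule: the non-overlapped (direct or transmissive) component responds to the pattern while the overlapped (scattered) component stays nearly constant. A minimal sketch under a 50% duty-cycle assumption, with invented numbers (not the authors' code):

```python
def separate(lit, unlit):
    """Per-pixel separation from max/min observations under shifted
    high-frequency patterns (50% duty cycle assumed).
    lit   -- observation when the pixel's source ray is on  (max image)
    unlit -- observation when it is off                     (min image)
    """
    direct = lit - unlit   # non-overlapped component: follows the pattern
    global_ = 2.0 * unlit  # overlapped (scattered) component: half in each image
    return direct, global_

# Toy example: a pixel receives 8 units directly and 4 units of scatter;
# under 50% duty illumination the scatter contributes half in each image.
lit, unlit = 8.0 + 2.0, 2.0
d, g = separate(lit, unlit)
print(d, g)  # 8.0 4.0
```

The parallel (telecentric) optics in the paper serve to make the transmissive rays satisfy the non-overlap assumption this rule relies on.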
International Conference on Computational Photography | 2016
Hajime Mihara; Takuya Funatomi; Kenichiro Tanaka; Hiroyuki Kubo; Yasuhiro Mukaigawa; Hajime Nagahara
In this paper, we describe a supervised four-dimensional (4D) light field segmentation method that uses a graph-cut algorithm. Since 4D light field data has implicit depth information and contains redundancy, it differs from a simple 4D hyper-volume. To preserve this redundancy, we define two types of neighboring rays (spatial and angular) in the light field data. To obtain higher segmentation accuracy, we also design a learning-based likelihood, called objectness, that utilizes appearance and disparity cues. We show the effectiveness of our method via numerical evaluation and several light field editing applications using both synthetic and real-world light fields.
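The two neighbor types can be sketched on a light field L(u, v, s, t): spatial neighbors stay within one sub-aperture view, while angular neighbors connect corresponding rays across views via the point's disparity. Function names and the integer-disparity simplification are illustrative, not from the paper:

```python
def spatial_neighbors(u, v, s, t):
    """4-connected neighbors within the same sub-aperture view (u, v)."""
    return [(u, v, s + ds, t + dt)
            for ds, dt in [(-1, 0), (1, 0), (0, -1), (0, 1)]]

def angular_neighbors(u, v, s, t, disparity):
    """Rays in adjacent views that image the same scene point, shifted
    by that point's disparity (integer here for simplicity)."""
    return [(u + du, v + dv, s + du * disparity, t + dv * disparity)
            for du, dv in [(-1, 0), (1, 0), (0, -1), (0, 1)]]

# A graph-cut segmenter would place smoothness edges along both neighbor
# sets and unary "objectness" terms from appearance and disparity cues.
print(angular_neighbors(0, 0, 10, 10, 2))
```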
Computer Vision and Pattern Recognition | 2015
Kenichiro Tanaka; Yasuhiro Mukaigawa; Hiroyuki Kubo; Yasuyuki Matsushita; Yasushi Yagi
This paper describes a method for recovering the appearance of inner slices of translucent objects. The outer appearance of a translucent object is a summation of the appearance of slices at all depths, where each slice is blurred by a depth-dependent point spread function (PSF). By exploiting the difference in low-pass characteristics of the depth-dependent PSFs, we develop a multi-frequency illumination method for obtaining the appearance of individual inner slices using a coaxial projector-camera setup. Specifically, by measuring the target object while varying the spatial frequency of checker patterns emitted from a projector, our method recovers inner slices via a simple linear solution method. We quantitatively evaluate the accuracy of the proposed method by simulations and show qualitative recovery results on real-world scenes.
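The linear solution step can be sketched per pixel: at each projected frequency the camera sees a weighted sum of the slice appearances, with weights given by each depth-dependent PSF's response to that frequency; with as many frequencies as slices, the slices follow from one linear solve. The response matrix below is invented for illustration:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Two slices, two frequencies: the deeper slice blurs more, so its
# response falls off faster at the high frequency (illustrative numbers).
A = [[1.0, 1.0],   # low frequency: both slices pass
     [0.9, 0.3]]   # high frequency: deep slice strongly attenuated
slices = [5.0, 2.0]
obs = [sum(a * s for a, s in zip(row, slices)) for row in A]
print(solve(A, obs))  # recovers ~[5.0, 2.0]
```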
Computer Vision and Pattern Recognition | 2016
Kenichiro Tanaka; Yasuhiro Mukaigawa; Hiroyuki Kubo; Yasuyuki Matsushita; Yasushi Yagi
This paper presents a method for recovering the shape and surface normals of a transparent object from a single viewpoint using a Time-of-Flight (ToF) camera. Our method is built upon the fact that the speed of light varies with the refractive index of the medium, and therefore the depth measurement of a transparent object with a ToF camera may be distorted. We show that, from this ToF distortion, the refractive light path can be uniquely determined by estimating a single parameter. We estimate this parameter by enforcing consistency between the surface normal determined by a light-path candidate and that computed from the corresponding shape. The proposed method is evaluated in both simulation and real-world experiments and shows faithful transparent shape recovery.
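The distortion the method exploits is easy to quantify: a ToF camera measures optical path length, and light slows to c/n inside a medium of refractive index n, so a ray crossing thickness t of the medium reports an extra (n - 1)·t of depth. The numbers below are illustrative, not from the paper:

```python
def measured_depth(depth_air, thickness, n):
    """Apparent ToF depth when the ray crosses `thickness` of a medium with
    refractive index n to reach a surface at geometric depth
    depth_air + thickness (single straight-path simplification)."""
    return depth_air + n * thickness

true_depth = 1.0 + 0.05                     # 5 cm of glass at 1 m
tof_depth = measured_depth(1.0, 0.05, 1.5)  # n = 1.5 for glass
print(tof_depth - true_depth)               # ~0.025 m of distortion
```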
Surface Science | 1991
Koji Sumitomo; Kenichiro Tanaka; Itsuo Katayama; Fumiya Shoji; Kenjiro Oura
The surface damage formed on a clean Si(100)-2×1 surface by low-energy Ar ion bombardment has been studied in situ using impact-collision ion scattering spectroscopy (ICISS) with the time-of-flight (TOF) technique. The Ar ion energy was 1 keV and the ion doses were in the range of 10¹⁴–10¹⁵ ions/cm². Channeling and focusing effects were observed very clearly on a clean and well-ordered surface. As the Ar ion dose increases, these channeling and focusing effects gradually disappear. The annealing process of the surface damage was also investigated.
Computer Vision and Pattern Recognition | 2017
Kenichiro Tanaka; Yasuhiro Mukaigawa; Takuya Funatomi; Hiroyuki Kubo; Yasuyuki Matsushita; Yasushi Yagi
This paper presents a material classification method using an off-the-shelf Time-of-Flight (ToF) camera. Our key observation is that the depth measurement of a ToF camera is distorted for objects of certain materials, especially translucent ones. We show that this distortion is caused by the variation of time-domain impulse responses across materials and by the measurement mechanism of existing ToF cameras. Specifically, we reveal that the amount of distortion varies with the modulation frequency of the ToF camera, the material of the object, and the distance between the camera and the object. Our method uses the depth distortion of ToF measurements as features and achieves material classification of a scene. The effectiveness of the proposed method is demonstrated by numerical evaluation and real-world experiments, showing its capability of classifying even visually similar objects.
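The feature idea can be sketched as follows: measure depth at several modulation frequencies, subtract the reference depth, and treat the per-frequency distortion vector as a material signature; here a simple nearest-neighbor rule stands in for the paper's classifier, and all signature values are invented:

```python
def classify(distortions, signatures):
    """Nearest-neighbor material classification on depth-distortion features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(signatures, key=lambda m: dist(distortions, signatures[m]))

signatures = {                     # distortion (m) at 3 modulation frequencies
    "opaque": [0.000, 0.000, 0.001],
    "wax":    [0.020, 0.012, 0.006],
    "marble": [0.008, 0.005, 0.003],
}
print(classify([0.019, 0.011, 0.007], signatures))  # wax
```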
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2017
Kenichiro Tanaka; Yasuhiro Mukaigawa; Hiroyuki Kubo; Yasuyuki Matsushita; Yasushi Yagi
This paper describes a method for recovering the appearance of inner slices of translucent objects. The appearance of a layered translucent object is the summed appearance of all layers, where each layer is blurred by a depth-dependent point spread function (PSF). By exploiting the difference in low-pass characteristics of the depth-dependent PSFs, we develop a multi-frequency illumination method for obtaining the appearance of individual inner slices. Specifically, by observing the target object while varying the spatial frequency of checker-pattern illumination, our method recovers the appearance of inner slices via computation. We study the effect of non-uniform transmission due to inhomogeneity of translucent objects and develop a method for recovering clear inner slices based on pixel-wise PSF estimates, under the assumption that inner slice appearances are spatially smooth. We quantitatively evaluate the accuracy of the proposed method by simulations and qualitatively show faithful recovery using real-world scenes.
IPSJ Transactions on Computer Vision and Applications | 2018
Tsuyoshi Takatani; Koki Fujita; Kenichiro Tanaka; Takuya Funatomi; Yasuhiro Mukaigawa
A UV printer, a digital fabrication tool, can print 2D patterns on 3D objects made of various materials, using UV inks that dry immediately when exposed to ultraviolet light. In common practice, the translucency of the materials is removed by printing a matte white layer. We instead propose a method to control the translucency of a printed object by exploiting the translucency of both the materials and the inks, without printing the white layer. The key is to fuse two approaches: example-based and physics-based. We apply Kubelka's layer model with a few measurements to render translucency, and then build a lookup table between combinations of print factors and the fabricated translucency. The lookup table is used to find the combination that best represents an input translucency query. The rendering method is quantitatively evaluated, and experiments show that the proposed system can control translucency by replicating appearance.
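The lookup-table step can be sketched as follows: each printable combination of factors maps to a rendered translucency value, and a query is answered by the closest entry. The forward model and parameter names below are hypothetical stand-ins for the paper's Kubelka layer-model rendering:

```python
def build_lut(render, combinations):
    """Map each print combination to its predicted translucency."""
    return {combo: render(combo) for combo in combinations}

def best_combination(lut, target):
    """Pick the combination whose predicted translucency is closest to target."""
    return min(lut, key=lambda c: abs(lut[c] - target))

# Toy forward model standing in for the layer-model rendering:
# more white-ink layers reduce the translucency of the printed object.
render = lambda combo: 0.8 * (0.5 ** combo[1])   # combo = (material, layers)
combos = [("resin", n) for n in range(4)]
lut = build_lut(render, combos)
print(best_combination(lut, 0.2))  # ('resin', 2)
```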
IPSJ Transactions on Computer Vision and Applications | 2017
Kazuya Kitano; Takanori Okamoto; Kenichiro Tanaka; Takahito Aoto; Hiroyuki Kubo; Takuya Funatomi; Yasuhiro Mukaigawa
Recovering temporal point spread functions (PSFs) is important for various applications, especially for analyzing light transport. Several methods using amplitude-modulated continuous-wave time-of-flight (ToF) cameras have been proposed to recover temporal PSFs, but their resolution is limited to several nanoseconds. In contrast, we show in this paper that sub-nanosecond resolution can be achieved using a pulsed ToF camera and an additional circuit. The circuit is inserted before the illumination so that the emission delay can be controlled in sub-nanosecond steps. From observations at various delay settings, we recover temporal PSFs at sub-nanosecond resolution. We confirm the effectiveness of our method via real-world experiments.
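The delay-sweep idea can be sketched in discrete time: shifting the emission by one bin per step slides a fixed camera gate across the temporal PSF, so summing the gated signal at each delay traces the PSF at the step resolution. The PSF and gate values below are invented for illustration:

```python
def sweep(psf, gate_start, gate_len, n_delays):
    """For each emission delay d (in bins), the fixed gate effectively covers
    PSF bins [gate_start - d, gate_start - d + gate_len); summing over the
    gate at every delay samples the PSF at one-bin resolution."""
    return [sum(psf[gate_start - d : gate_start - d + gate_len])
            for d in range(n_delays)]

# A toy temporal PSF on 0.1 ns bins: a direct return at 1.0 ns (bin 10)
# and a weak scattered tail from 2.0-2.9 ns (bins 20-29).
psf = [0.0] * 40
psf[10] = 1.0
for i in range(20, 30):
    psf[i] = 0.05

# Gate of 3 bins (0.3 ns) starting at bin 30, swept by delaying the emission.
samples = sweep(psf, 30, 3, 30)
print(samples.index(max(samples)))  # delay at which the direct return appears: 20
```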