Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Naoki Asada is active.

Publication


Featured research published by Naoki Asada.


international conference on computer vision | 2009

Dense 3D reconstruction method using a single pattern for fast moving object

Ryusuke Sagawa; Yuichi Ota; Yasushi Yagi; Ryo Furukawa; Naoki Asada; Hiroshi Kawasaki

Dense 3D reconstruction of extremely fast moving objects could contribute to applications such as body structure analysis and accident avoidance. The scanning scenarios we assume include, for example, acquiring sequential shape at the moment an object explodes, or observing fast-rotating turbine blades. In this paper, we propose such a technique based on a one-shot scanning method that reconstructs 3D shape from a single image onto which a dense, simple pattern is projected. To realize dense 3D reconstruction from a single image, several issues must be solved, e.g., the instability that comes from using multiple colors and the difficulty of detecting a dense pattern under the influence of object color and texture compression. This paper describes solutions to these issues that combine two methods: (1) an efficient line-detection technique based on de Bruijn sequences and belief propagation, and (2) an extension of the shape-from-intersections-of-lines method. As a result, a scanning system that can capture an object in fast motion was developed using a high-speed camera. In the experiments, the proposed method successfully captured sequences of dense shapes of an exploding balloon and a breaking ceramic dish at 300–1000 fps.
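The pattern coding rests on a de Bruijn sequence: in such a sequence every window of a fixed length occurs exactly once, so a short run of detected line colors locates its position in the projected pattern. As a rough illustration (a standard generation algorithm, not the authors' implementation):

```python
def de_bruijn(k, n):
    """Generate a de Bruijn sequence B(k, n): a cyclic sequence over an
    alphabet of size k in which every length-n word appears exactly once.
    Uses the classic recursive (FKM / Lyndon word) construction."""
    a = [0] * (k * n)
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

# With 2 colors and window length 3, every cyclic 3-line color window
# is unique, so a detected window identifies its line in the pattern.
seq = de_bruijn(2, 3)  # length 2**3 = 8
```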


International Journal of Computer Vision | 1998

Edge and Depth from Focus

Naoki Asada; Hisanaga Fujiwara; Takashi Matsuyama

This paper proposes a novel method to obtain reliable edge and depth information by integrating a set of multi-focus images, i.e., a sequence of images taken by systematically varying a camera parameter, focus. In previous work on depth measurement using focusing or defocusing, the accuracy depends on the size and location of the local windows in which the amount of blur is measured. In contrast, no windowing is needed in our method; the blur is evaluated from the intensity change along corresponding pixels in the multi-focus images. Such a blur analysis enables us not only to detect edge points without spatial differentiation but also to estimate depth with high accuracy. In addition, the analysis is stable because the proposed method involves integral computations such as summation and least-squares model fitting. This paper first discusses the fundamental properties of multi-focus images based on a step-edge model. Then, two algorithms are presented: edge detection using an accumulated defocus image, which represents the spatial distribution of blur, and depth estimation using a spatio-focal image, which represents the intensity distribution along the focus axis. The experimental results demonstrate that highly precise measurement is achieved: 0.5 pixel position fluctuation in edge detection and 0.2% error at 2.4 m in depth estimation.
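For context on the window dependence the paper removes: conventional depth from focus picks, for each pixel, the frame of the focus stack where a local sharpness measure peaks. A minimal sketch of that baseline (hypothetical code, not the authors' window-free method):

```python
import numpy as np

def depth_from_focus(stack):
    """Baseline depth from focus. stack: (F, H, W) array of images taken
    at F focus settings. Returns, per pixel, the index of the frame with
    maximal local sharpness (a proxy for depth). Sharpness here is the
    sum of absolute second differences, a window-based measure; its
    localization is exactly what the paper's blur analysis avoids."""
    F, H, W = stack.shape
    sharp = np.zeros_like(stack, dtype=float)
    for f in range(F):
        img = stack[f]
        d2x = np.abs(np.diff(img, 2, axis=1))  # second diff along x
        d2y = np.abs(np.diff(img, 2, axis=0))  # second diff along y
        sharp[f, :, 1:-1] += d2x
        sharp[f, 1:-1, :] += d2y
    return sharp.argmax(axis=0)
```

A synthetic stack with a sharp edge in only one frame yields that frame's index at pixels near the edge.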


international conference on pattern recognition | 1996

Photometric calibration of zoom lens systems

Naoki Asada; Akira Amano; Masashi Baba

This paper describes a calibration method for zoom lens systems that eliminates the photometric distortion due to vignetting. A novel camera model including a variable cone is proposed to account for the marginal intensity reduction observed in images. Using the theoretical model, we first show that serious distortion due to vignetting appears when a zoom lens is used with an open aperture and a long focal length. We then present a calibration method that estimates the intensity reduction from the zoom and iris lens parameters. The experimental results demonstrate the validity of our theoretical model and the effectiveness of the photometric calibration of zoom lens systems.
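For illustration only: the cos-to-the-fourth law is the standard baseline for marginal intensity falloff toward the image periphery. The paper's variable-cone model goes beyond this, capturing the zoom- and iris-dependent vignetting component; this hypothetical sketch covers only the natural falloff:

```python
import math

def cos4_falloff(r, focal_length):
    """Relative illumination of an image point at radial distance r from
    the optical axis (same units as focal_length), under the classic
    cos^4 approximation. Note: this models only natural falloff, not
    the aperture/zoom-dependent vignetting of the paper's model."""
    theta = math.atan(r / focal_length)  # off-axis angle of the ray
    return math.cos(theta) ** 4

# On-axis there is no falloff; at r == focal_length (45 degrees off
# axis) illumination drops to one quarter.
```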


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1998

Analysis of photometric properties of occluding edges by the reversed projection blurring model

Naoki Asada; Hisanaga Fujiwara; Takashi Matsuyama

This paper analyzes the photometric properties of occluding edges and proves that an object surface behind a nearer object is partially observable beyond the occluding edges. We first discuss a limitation of the convolution-based image blurring model, and then present an optical-flux-based blurring model named the reversed projection blurring (RPB) model. Unlike the multicomponent blurring model proposed by Nguyen et al., the RPB model enables us to explore the optical phenomena caused by the shift-variant point spread function that appears at a depth discontinuity. Using the RPB model, a theoretical analysis of occluding-edge properties is given and two characteristic phenomena are shown: (1) when the occluded surface radiance is uniform, a blurred occluding edge produces the same brightness profiles as would be predicted for a surface edge on the occluding object, and (2) when the occluded object has a surface edge, a nonmonotonic brightness transition is observed in blurred occluding-edge profiles. Experimental results using real images have demonstrated the validity of the RPB model as well as the observability of the characteristic phenomena of blurred occluding edges.


international conference on computer graphics and interactive techniques | 2003

Shadow removal from a real picture

Masashi Baba; Naoki Asada

Since the shadows in a real picture imply geometric constraints among the light source, objects, and viewpoint, augmented reality built on that picture should include shadows consistent with the real situation. This paper proposes a method to remove shadows from a real picture based on an analysis of the RGB color space, and shows experimental results of actual shadow removal and virtual shadow addition.
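As a rough illustration of the idea of color-space-based shadow removal (a hypothetical per-channel rescaling, far simpler than the paper's analysis; the masks and helper are assumptions for the sketch):

```python
import numpy as np

def relight_shadow(img, shadow_mask, lit_mask):
    """Toy shadow removal: rescale each RGB channel of the shadow
    pixels so their mean matches that of comparable lit pixels.
    img: (H, W, 3) float array; masks: (H, W) boolean arrays marking
    shadowed and lit regions of the same surface."""
    out = img.astype(float).copy()
    for c in range(3):
        lit_mean = img[..., c][lit_mask].mean()
        shadow_mean = img[..., c][shadow_mask].mean()
        out[..., c][shadow_mask] *= lit_mean / shadow_mean
    return out
```

On a uniform surface half-covered by a shadow, this restores the shadowed half to the lit half's brightness; real scenes require the kind of per-pixel color-space analysis the paper develops.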


international conference on computer graphics and interactive techniques | 2004

Shadow removal from a real image based on shadow density

Masashi Baba; Masayuki Mukunoki; Naoki Asada

Shadows are physical phenomena observed in most natural scenes. Since shadows and shades enhance the realism of images, many studies on shadowing and shading have been conducted for realistic image generation. Shadows, however, often pose difficulties when real images are used in image synthesis, because shadows imply a geometric relationship between objects, light source, and viewpoint. This means that real images including shadows can be used for image synthesis only in the limited situation where the lighting condition is consistent with that of the real images [Sato et al. 1999].


international conference on document analysis and recognition | 2003

Graph grammar based analysis system of complex table form document

Akira Amano; Naoki Asada

Structure analysis of table form documents is important because printed documents, and electronic documents as well, explicitly provide only geometrical layout and lexical information. To handle these documents automatically, logical structure information is necessary. In this paper, we first propose a general representation of table form documents based on XML, which contains both structure and layout information. Next, we present a structure analysis system based on a graph grammar that represents document structure knowledge. As the relations between adjacent fields in table form documents are two-dimensional, a two-dimensional notation is necessary to denote structural knowledge; therefore, we adopt a two-dimensional graph grammar. By using a grammar notation, we can easily modify the rules and keep them consistent, as the rules are relatively simple. Another advantage of a grammar notation is that it can be used to generate documents from the logical structure alone. Experimental results have shown that the system successfully analyzed several kinds of table forms.


Radiation Medicine | 2007

Optimal dose and injection duration (injection rate) of contrast material for depiction of hypervascular hepatocellular carcinomas by multidetector CT

Yumi Yanaga; Kazuo Awai; Yoshiharu Nakayama; Takeshi Nakaura; Yoshitaka Tamura; Yoshinori Funama; Masahito Aoyama; Naoki Asada; Yasuyuki Yamashita

Purpose: The aim of this study was to investigate the optimal dose and injection duration of contrast material (CM) for depicting hypervascular hepatocellular carcinomas (HCCs) during the hepatic arterial phase with multidetector row computed tomography (CT).

Materials and methods: The study population consisted of 71 patients with hypervascular HCCs. After unenhanced scans, the first (early arterial phase, EAP), second (late arterial phase, LAP), and third (equilibrium phase) scans were started at 30, 43, and 180 s after injection of CM. During a 33-s period, patients with a body weight ≤50 kg received 100 ml of non-ionic CM with an iodine concentration of 300 mg I/ml; patients whose body weight was >50 kg received 100 ml of CM with an iodine concentration of 370 mg I/ml. First, we measured enhancement in the abdominal aorta and tumor-to-liver contrast (TLC) during the EAP and LAP. Next, to investigate the relation between aortic enhancement and TLC during the LAP, two radiologists visually assessed the conspicuity of hypervascular HCCs during the LAP using a 3-point scale: grade 1, poor; grade 2, fair; grade 3, excellent. Finally, to examine the effect of the CM dose and injection duration on aortic enhancement during the EAP, we simulated aortic enhancement curves using test bolus data obtained for 10 HCC patients and the method of Fleischmann and Hittmair.

Results: A relatively strong correlation was observed between aortic enhancement during the EAP and TLC during the LAP (r = 0.75, P < 0.001). The 95% confidence intervals for the population mean aortic enhancement during the EAP in patients with tumor conspicuity grades 1, 2, and 3 were 188.5–222.4, 228.8–259.3, and 280.2–322.5 HU (Hounsfield units), respectively. Thus, we considered 280 HU the lower limit of aortic enhancement for excellent depiction of HCCs during the EAP. In the simulations, to achieve an aortic enhancement value of >280 HU during the EAP, the injection duration should be <25 s for patients receiving a CM dose of 1.7 ml/kg with 300 mg I/ml iodine and <30 s for those receiving 2.0 ml/kg.

Conclusions: For excellent depiction of hypervascular HCCs during the hepatic arterial phase, the injection duration should be <25 s in patients receiving a CM dose of 1.7 ml/kg with 300 mg I/ml iodine and <30 s in patients receiving 2.0 ml/kg.
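The dose and duration limits translate directly into a minimum injection rate: for example, a 60 kg patient receiving 1.7 ml/kg within 25 s needs 102 ml / 25 s ≈ 4.1 ml/s. A trivial calculator for that arithmetic (a hypothetical helper, not from the paper):

```python
def min_injection_rate(body_weight_kg, dose_ml_per_kg, max_duration_s):
    """Minimum injection rate (ml/s) implied by delivering the full
    weight-based CM dose within the recommended duration limit."""
    total_ml = body_weight_kg * dose_ml_per_kg
    return total_ml / max_duration_s

# 60 kg, 1.7 ml/kg within 25 s -> about 4.1 ml/s
# 60 kg, 2.0 ml/kg within 30 s -> 4.0 ml/s
```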


pacific-rim symposium on image and video technology | 2010

One-shot Entire Shape Acquisition Method Using Multiple Projectors and Cameras

Ryo Furukawa; Ryusuke Sagawa; Hiroshi Kawasaki; Kazuhiro Sakashita; Yasushi Yagi; Naoki Asada

In this paper, we propose an active scanning system that uses multiple projectors and cameras to acquire a dense shape of the entire object with a single scan (a.k.a. one-shot scan). One potential application of the system is capturing a moving object at a high frame rate. Since the pattern used for one-shot scanning is usually complicated, and such patterns interfere with each other if they are projected onto the same object, it is difficult to use multiple sets of them for entire-shape acquisition. In addition, at the end of the closed loop, the errors of each scan accumulate, resulting in large gaps between shapes. To solve the problem, we propose a one-shot shape reconstruction method in which each projector projects a static pattern of parallel lines in one or two colors. Since each projector projects only parallel lines in a small number of colors, the patterns are easily decomposed and detected even if they are projected multiple times onto the same object. We also propose a multi-view reconstruction algorithm for the proposed projector-camera system. In the experiment, we built a system consisting of six projectors and six cameras, and dense shapes of entire objects were successfully reconstructed.


Journal of Magnetic Resonance Imaging | 2005

A semiautomated technique for evaluation of uterine peristalsis.

Aki Kido; Masahide Nishiura; Kaori Togashi; Asako Nakai; Toshitaka Fujiwara; Milliam L. Kataoka; Takashi Koyama; Shingo Fujii; Naoki Asada

To evaluate the capability of a newly developed semiautomated analysis technique for assessing uterine peristalsis, in comparison with visual assessment.

Collaboration


Dive into Naoki Asada's collaborations.

Top Co-Authors

Masashi Baba, Hiroshima City University
Ryo Furukawa, Hiroshima City University
Shinsaku Hiura, Hiroshima City University
Akira Amano, Ritsumeikan University
Masahito Aoyama, Hiroshima City University
Ryusuke Sagawa, National Institute of Advanced Industrial Science and Technology
Ismael Daribo, National Institute of Informatics