Yuji Nakazawa
Kanagawa University
Publications
Featured research published by Yuji Nakazawa.
international conference on image processing | 1994
Yuji Nakazawa; Takashi Komatsu; T. Sekimori; Kiyoharu Aizawa
Towards the development of a super high definition image acquisition system, we have proposed an image-processing-based approach, i.e. the introduction of image processing techniques into the imaging process. Imaging methods based on this approach fall into two main categories: spatial integration imaging and temporal integration imaging. With regard to spatial integration imaging, we have previously presented a method for acquiring an improved-resolution image by integrating multiple images taken simultaneously with multiple cameras having different pixel apertures. In addition to the spatial integration imaging method, and aimed at a particular surveillance application, we construct a temporal integration imaging method. Experimental simulations demonstrate that temporal integration imaging is as promising as spatial integration imaging for high-resolution imaging.
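For illustration, a minimal shift-and-add sketch of the temporal integration idea follows; it assumes that sub-pixel translational shifts between frames are already known, and the function name temporal_integration and its interface are hypothetical rather than the authors' implementation.

```python
# Minimal sketch of temporal-integration (shift-and-add) imaging.
# Assumes known sub-pixel translational shifts; the paper's registration
# and reconstruction stages are more elaborate than this illustration.
import numpy as np

def temporal_integration(frames, shifts, scale=2):
    """Accumulate low-resolution frames onto a finer grid.

    frames : list of 2-D arrays of identical shape
    shifts : list of (dy, dx) sub-pixel displacements of each frame
             relative to the reference frame, in low-resolution pixels
    scale  : magnification factor of the reconstructed grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each observed pixel to its nearest high-resolution grid cell.
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(cnt, (hy, hx), 1)
    return acc / np.maximum(cnt, 1)  # simple averaging; unvisited cells stay zero
```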
international conference on image processing | 1996
Yuji Nakazawa; Takashi Komatsu; Takahiro Saito
By restricting the class of tracked objects, we form a robust object-specified active contour model that adapts itself to the properties of the specified object, namely a smoothly deformable line feature. The proposed model allows us to track the deformable line feature automatically throughout an entire observed noisy moving image sequence. Furthermore, we apply it to the surveillance task of automatically tracking suspended power-transmission wires swinging in strong winds in an observed outdoor noisy moving image sequence. The experimental results demonstrate that the proposed method is considerably robust to noise.
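The energy-minimisation idea behind such line-feature tracking can be sketched with a simple greedy snake. The code below, with the hypothetical helper track_line, balances an image darkness term against a smoothness term within a small vertical search window; it is only an illustration, not the paper's object-specified model.

```python
# Greedy snake sketch for a roughly horizontal dark line feature (e.g. a wire):
# each contour point moves inside a small vertical window to trade off image
# darkness against smoothness relative to its neighbours.
import numpy as np

def track_line(image, xs, ys, search=3, alpha=0.5, iters=20):
    """Refine contour points (xs[i], ys[i]) on a 2-D grayscale image."""
    ys = ys.astype(float).copy()
    for _ in range(iters):
        for i, x in enumerate(xs):
            candidates = ys[i] + np.arange(-search, search + 1)
            candidates = np.clip(candidates, 0, image.shape[0] - 1)
            # Image term: prefer dark pixels (the wire).
            img_cost = image[candidates.astype(int), x]
            # Smoothness term: stay near the mean height of neighbouring points.
            nbr = 0.5 * (ys[max(i - 1, 0)] + ys[min(i + 1, len(xs) - 1)])
            smooth_cost = (candidates - nbr) ** 2
            ys[i] = candidates[np.argmin(img_cost + alpha * smooth_cost)]
    return ys
```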
international conference on image processing | 1995
Yuji Nakazawa; Takashi Komatsu; Takahiro Saito
As an approach to improving the spatial resolution of image acquisition, we have previously presented a prototype temporal integration imaging method tuned to a particular type of application in which a user indicates a region of interest (ROI) on an observed image in advance, so that no global image segmentation is involved. The method uses a sub-pixel registration algorithm that describes image motion within the ROI with sub-pixel accuracy as the deformation of quadrilateral patches covering the ROI, and then performs sub-pixel registration by warping an observed image with the warping functions recovered from the deformed quadrilateral patches. However, the proper size of the quadrilateral patches depends on the curvature of the object's surface and is therefore extremely difficult to determine. To solve this problem, we introduce a hierarchical patch-splitting algorithm for controlling the spatial fineness of the quadrilateral patches. Experimental simulations demonstrate that the temporal integration imaging method, equipped with the capability of automatically controlling the spatial fineness of the quadrilateral patches, works well for a smoothly curved surface whose local geometrical structure can be well approximated as a rigid two-dimensional plane.
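The hierarchical patch-splitting idea can be sketched as a quadtree-style recursion that splits a patch whenever a residual criterion is not met. The function below is a hypothetical illustration in which residual_fn and threshold stand in for whatever splitting criterion the method actually uses.

```python
# Sketch of hierarchical patch splitting: an (here axis-aligned) patch is
# split into four children whenever a single planar warp cannot explain the
# registration residual well enough. residual_fn and threshold are placeholders.
def split_patches(patch, residual_fn, threshold, min_size=8):
    """patch = (x0, y0, x1, y1); returns a list of leaf patches."""
    x0, y0, x1, y1 = patch
    small = (x1 - x0) <= min_size or (y1 - y0) <= min_size
    if small or residual_fn(patch) <= threshold:
        return [patch]
    xm, ym = (x0 + x1) // 2, (y0 + y1) // 2
    children = [(x0, y0, xm, ym), (xm, y0, x1, ym),
                (x0, ym, xm, y1), (xm, ym, x1, y1)]
    return [leaf for c in children
            for leaf in split_patches(c, residual_fn, threshold, min_size)]
```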
international conference on image processing | 1994
Yuji Nakazawa; Takahiro Saito
Towards comprehensive analysis of data obtained by various CTs, we develop a region extraction method for recognizing brain anatomies directly from a given set of multislice MRI brain images of an individual, utilizing a standard brain atlas as high-level topological information. The method uses a standard anatomical contour model of the intended brain anatomy, derived from the standard brain atlas, as an initial active contour model, and matches the active contour model to an input MRI brain image by energy minimization, where the energy functional is defined in terms of the image intensity function, a smoothness constraint on the active contour model, and the standard anatomical contour models. We apply the proposed region extraction method to recognition of the ventricle, and the results demonstrate that the active contour model is attracted to the desired local-minimum contour and hence that the method achieves satisfactory recognition.
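A generic form of such an energy functional, consistent with the description above but not necessarily identical to the one used in the paper, is the following, where v(s) is the contour, I the image, and v_atlas(s) the atlas-derived standard contour:

```latex
% Generic snake energy with an atlas prior; weights and exact terms are
% illustrative, not the paper's definition.
E(v) = \int_0^1 \Big( \alpha \,\lvert v'(s)\rvert^2 + \beta \,\lvert v''(s)\rvert^2 \Big)\,ds
     \;-\; \gamma \int_0^1 \lvert \nabla I(v(s)) \rvert^2 \,ds
     \;+\; \lambda \int_0^1 \lVert v(s) - v_{\mathrm{atlas}}(s) \rVert^2 \,ds
```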
international conference on image analysis and processing | 1995
Yuji Nakazawa; Takashi Komatsu; Takahiro Saito
Towards super high resolution image acquisition, we present a temporal integration imaging method. The image processing algorithm for a generic temporal integration imaging method consists of three stages: segmentation, sub-pixel registration, and reconstruction. The segmentation and sub-pixel registration stages are interdependent and extremely difficult to construct completely. Instead, aiming at a particular type of application in which a user indicates a region of interest (ROI) on an observed image, we construct a prototypal temporal integration imaging method that does not involve the segmentation stage at all. Moreover, we develop a new quadrilateral-based sub-pixel registration algorithm whose key idea is to cover the ROI with deformable quadrilateral patches whose spatial fineness is automatically adapted to the curvature of the object's surface, to describe the image warp between two image frames as the deformation of the quadrilateral patches, and finally to perform sub-pixel registration by warping an observed image onto a temporally integrated image with the recovered warping function.
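The patch-warping step can be sketched as bilinear interpolation of the four corner displacements followed by sub-pixel sampling of the observed frame. The function below is an assumed, simplified illustration (a bilinear rather than projective warp) and not the authors' algorithm.

```python
# Sketch of quadrilateral-patch warping: per-pixel displacements inside a
# patch are bilinearly interpolated from the four corner displacements, and
# the observed frame is sampled at the displaced sub-pixel positions.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_patch(frame, patch, corner_disp):
    """frame: 2-D array; patch = (y0, x0, y1, x1);
    corner_disp: (4, 2) displacements (dy, dx) at the corners, ordered
    top-left, top-right, bottom-left, bottom-right."""
    y0, x0, y1, x1 = patch
    ys, xs = np.mgrid[y0:y1, x0:x1].astype(float)
    u = (xs - x0) / max(x1 - x0 - 1, 1)   # horizontal weight in [0, 1]
    v = (ys - y0) / max(y1 - y0 - 1, 1)   # vertical weight in [0, 1]
    d = corner_disp
    dy = (1 - v) * ((1 - u) * d[0, 0] + u * d[1, 0]) + v * ((1 - u) * d[2, 0] + u * d[3, 0])
    dx = (1 - v) * ((1 - u) * d[0, 1] + u * d[1, 1]) + v * ((1 - u) * d[2, 1] + u * d[3, 1])
    # Sample the observed frame at sub-pixel positions (the registration step).
    return map_coordinates(frame, [ys + dy, xs + dx], order=1, mode='nearest')
```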
Time-Varying Image Processing and Moving Object Recognition, 4: Proceedings of the 5th International Workshop, Florence, Italy, September 5–6, 1996 | 1997
Yuji Nakazawa; Takashi Komatsu; Takahiro Saito
One of the keys to new-generation digital image production, applicable even to domestic uses, is to construct simple methods for estimating the camera's motion, position, and orientation from a moving image sequence observed with a single domestic video camera. This chapter discusses a method for camera calibration, along with accurate estimation of the camera's focal length, using four definite coplanar points as a cue. The practical computational algorithms for the cue-based camera calibration method are composed of simple linear algebraic and arithmetic operations, and hence they work well enough to provide accurate and stable estimates of the camera's motion, position, and orientation. Experimental simulations demonstrate that the cue-based camera calibration method works well for the digital moving image mixing task. The key to accurate cue-based camera calibration is to accurately detect the feature points used as a cue in an input image; sub-pixel accuracy may be required for this detection task. To this end, one should enhance the spatial resolution of the image region containing the feature points in advance.
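As a hedged illustration of how four coplanar points can yield the focal length and pose, the sketch below decomposes a plane-induced homography under the common assumptions of square pixels and a known principal point; it is a standard construction, not necessarily the method used in the chapter, and degenerate geometries are not handled.

```python
# Sketch: focal length and pose from a plane-induced homography H, where H
# maps plane coordinates (X, Y, 1) to image coordinates (x, y, 1) up to scale
# and the principal point has already been subtracted from x, y.
import numpy as np

def focal_and_pose_from_homography(H):
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    # Orthogonality and equal-norm constraints on the first two rotation
    # columns each give an estimate of f^2; average the positive ones.
    f2_a = -(h1[0] * h2[0] + h1[1] * h2[1]) / (h1[2] * h2[2])
    f2_b = (h1[0]**2 + h1[1]**2 - h2[0]**2 - h2[1]**2) / (h2[2]**2 - h1[2]**2)
    f = np.sqrt(np.mean([v for v in (f2_a, f2_b) if v > 0]))
    Kinv = np.diag([1.0 / f, 1.0 / f, 1.0])
    a, b = Kinv @ h1, Kinv @ h2
    scale = 1.0 / np.linalg.norm(a)
    r1, r2 = a * scale, b / np.linalg.norm(b)
    # R is only approximately orthonormal here; in practice it would be
    # re-orthonormalized (e.g. via SVD).
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    t = Kinv @ h3 * scale
    return f, R, t
```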
international conference on image processing | 1996
Yuji Nakazawa; Takashi Komatsu; Takahiro Saito
We present image processing algorithms for the generic temporal-integration video-enhancement approach based on a global segmentation representation of motion, and demonstrate their usefulness by experimental simulations. As a specific case, we take up the interlaced-to-progressive scan conversion problem, formulate the interlaced-to-progressive transform along the lines of temporal integration, and then evaluate it experimentally. The experimental simulations demonstrate that the temporal-integration approach is very promising as a basic means of video enhancement.
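A minimal de-interlacing sketch in the spirit of temporal integration follows; the blending rule, threshold, and field convention are illustrative assumptions, not the algorithm evaluated in the paper.

```python
# Sketch: fill the missing odd lines of the current frame with the co-located
# lines of the previous frame's opposite field when they agree with a spatial
# interpolation, otherwise fall back to the spatial estimate.
import numpy as np

def deinterlace(curr_field, prev_field, thresh=12.0):
    """curr_field: (H/2, W) even lines of the current frame;
    prev_field: (H/2, W) odd lines of the previous frame."""
    h2, w = curr_field.shape
    frame = np.empty((2 * h2, w), dtype=float)
    frame[0::2] = curr_field
    # Spatial estimate of the missing odd lines: average of neighbouring even lines.
    lower = np.vstack([curr_field[1:], curr_field[-1:]])
    spatial = 0.5 * (curr_field + lower)
    # Temporal estimate: co-located odd lines of the previous frame.
    temporal = prev_field.astype(float)
    # Crude agreement test standing in for motion segmentation.
    frame[1::2] = np.where(np.abs(temporal - spatial) < thresh, temporal, spatial)
    return frame
```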
digital processing applications | 1996
Yuji Nakazawa; Takashi Komatsu; Takahiro Saito
We present image processing algorithms for the generic temporal-integration video-enhancement approach based on a global segmentation representation of motion, and demonstrate their usefulness by experimental simulations. As a specific case, we take up the interlaced-to-progressive scan conversion problem, formulate the interlaced-to-progressive transform along the lines of temporal integration, and then evaluate it experimentally. The experimental simulations demonstrate that the temporal-integration approach is very promising as a basic means of video enhancement.
Proceedings IWISP '96: 4–7 November 1996, Manchester, United Kingdom | 1996
Yuji Nakazawa; Takashi Komatsu; Takahiro Saito
Recently, some research institutes have started studying the digital production of a panoramic image sequence from an observed moving image sequence, the construction of a virtual studio with 3-D CG technology, and so on, with the intent of establishing the concept and schema of a new-generation digital image production technology. One of the keys to new-generation digital image production is to construct simple methods for estimating the camera's motion, position, and orientation from a moving image sequence observed with a single TV camera. For that purpose, a method is presented for camera calibration and estimation of the focal length. The method utilizes four definite coplanar points (for example, the four vertices of an A4-size sheet of paper) as a cue. Moreover, the cue-based method is applied to the digital image production task of compositing a synthetic 3-D CG image sequence with a real moving image sequence taken with a TV camera. The cue-based method works well for this task.
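The four-point cue can be turned into a plane-to-image homography with the standard direct linear transform. The short sketch below is a generic illustration of that step, with hypothetical function naming, rather than the paper's exact formulation.

```python
# DLT sketch: homography from four marked coplanar points (e.g. the corners
# of an A4 sheet) to their image projections.
import numpy as np

def homography_from_4_points(plane_pts, image_pts):
    """plane_pts, image_pts: (4, 2) arrays of corresponding points."""
    A = []
    for (X, Y), (x, y) in zip(plane_pts, image_pts):
        A.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y, -x])
        A.append([0, 0, 0, X, Y, 1, -y * X, -y * Y, -y])
    # The homography is the right singular vector of A with the smallest
    # singular value, reshaped to 3x3 and normalized.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```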
international conference on image processing | 1996
Yuji Nakazawa; Takashi Komatsu; Takahiro Saito