
Publication


Featured research published by Ko Nishino.


Computer Vision and Pattern Recognition | 2009

Anomaly detection in extremely crowded scenes using spatio-temporal motion pattern models

Louis Kratz; Ko Nishino

Extremely crowded scenes present unique challenges to video analysis that cannot be addressed with conventional approaches. We present a novel statistical framework for modeling the local spatio-temporal motion pattern behavior of extremely crowded scenes. Our key insight is to exploit the dense activity of the crowded scene by modeling the rich motion patterns in local areas, effectively capturing the underlying intrinsic structure they form in the video. In other words, we model the motion variation of local space-time volumes and their spatio-temporal statistical behaviors to characterize the overall behavior of the scene. We demonstrate that by capturing the steady-state motion behavior with these spatio-temporal motion pattern models, we can naturally detect unusual activity as statistical deviations. Our experiments show that local spatio-temporal motion pattern modeling offers promising results in real-world scenes with complex activities that are hard for even human observers to analyze.
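
As a rough illustration of the statistical-deviation idea (not the paper's actual motion-pattern models, which are richer distributions over local spatio-temporal gradients), here is a minimal Python sketch: local space-time cuboids are summarized by crude frame-difference statistics, a per-location Gaussian is fit on normal footage, and test cuboids that deviate beyond a z-score threshold are flagged. The feature choice and all names are illustrative.

import numpy as np

def cuboid_features(video, t=5, s=16):
    # Split a (T, H, W) grayscale video into local space-time cuboids and
    # summarize each by frame-difference statistics (a crude stand-in for
    # the paper's richer local motion-pattern models).
    diffs = np.abs(np.diff(video.astype(float), axis=0))
    T, H, W = diffs.shape
    feats = {}
    for ti in range(0, T - t + 1, t):
        for yi in range(0, H - s + 1, s):
            for xi in range(0, W - s + 1, s):
                block = diffs[ti:ti + t, yi:yi + s, xi:xi + s]
                feats.setdefault((yi, xi), []).append([block.mean(), block.std()])
    return {loc: np.array(v) for loc, v in feats.items()}

def fit_models(training_videos):
    # Fit a per-location Gaussian over cuboid features from normal footage,
    # capturing the steady-state motion behavior at each spatial location.
    pooled = {}
    for vid in training_videos:
        for loc, f in cuboid_features(vid).items():
            pooled.setdefault(loc, []).append(f)
    return {loc: (np.vstack(fs).mean(0), np.vstack(fs).std(0) + 1e-6)
            for loc, fs in pooled.items()}

def detect_anomalies(video, models, z=3.0):
    # Flag cuboids whose features deviate from the learned statistics.
    flagged = []
    for loc, f in cuboid_features(video).items():
        mu, sd = models.get(loc, (None, None))
        if mu is not None and (np.abs((f - mu) / sd) > z).any():
            flagged.append(loc)
    return flagged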


International Journal of Computer Vision | 2012

Bayesian Defogging

Ko Nishino; Louis Kratz; Stephen Lombardi

Atmospheric conditions induced by suspended particles, such as fog and haze, severely alter the scene appearance. Restoring the true scene appearance from a single observation made in such bad weather conditions remains a challenging task due to the inherent ambiguity that arises in the image formation process. In this paper, we introduce a novel Bayesian probabilistic method that jointly estimates the scene albedo and depth from a single foggy image by fully leveraging their latent statistical structures. Our key idea is to model the image with a factorial Markov random field in which the scene albedo and depth are two statistically independent latent layers and to jointly estimate them. We show that we may exploit natural image and depth statistics as priors on these hidden layers and estimate the scene albedo and depth with a canonical expectation maximization algorithm with alternating minimization. We experimentally evaluate the effectiveness of our method on a number of synthetic and real foggy images. The results demonstrate that the method achieves accurate factorization even on scenes that are challenging for past methods, which only constrain and estimate one of the latent variables.
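
For reference, the single-scattering formation model this line of work builds on (Koschmieder's model; the notation here is ours, not necessarily the paper's) writes the observed foggy image as

\[ I(x) = \rho(x)\, e^{-\beta d(x)} + L_\infty \left( 1 - e^{-\beta d(x)} \right), \]

where \rho(x) is the scene albedo, d(x) the depth, \beta the scattering coefficient, and L_\infty the airlight. The inherent ambiguity is visible directly: a brighter albedo at greater depth can produce the same observation as a darker albedo nearby, which is why priors on both latent layers are needed to make the joint estimate well posed.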


International Conference on Computer Vision | 2009

Factorizing Scene Albedo and Depth from a Single Foggy Image

Louis Kratz; Ko Nishino

Atmospheric conditions induced by suspended particles, such as fog and haze, severely degrade image quality. Restoring the true scene colors (clear day image) from a single image of a weather-degraded scene remains a challenging task due to the inherent ambiguity between scene albedo and depth. In this paper, we introduce a novel probabilistic method that fully leverages natural statistics of both the albedo and depth of the scene to resolve this ambiguity. Our key idea is to model the image with a factorial Markov random field in which the scene albedo and depth are two statistically independent latent layers. We show that we may exploit natural image and depth statistics as priors on these hidden layers and factorize a single foggy image via a canonical Expectation Maximization algorithm with alternating minimization. Experimental results show that the proposed method achieves more accurate restoration compared to state-of-the-art methods that focus on only recovering scene albedo or depth individually.
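
A toy sketch of the alternating-minimization idea in Python, using the formation model above: simple gradient steps on albedo and depth, with a quadratic smoothness term standing in for the paper's MRF priors. The constants, step sizes, and prior are illustrative, not the paper's.

import numpy as np

def defog_alternating(I, A=1.0, beta=1.0, iters=200, lam=0.1, lr=0.5):
    # Toy coordinate descent for I = rho*t + A*(1 - t), t = exp(-beta*d),
    # on a grayscale image I in [0, 1]. A quadratic smoothness term on d
    # stands in for the paper's natural image and depth priors.
    rho = np.clip(I, 0.0, 1.0)           # init albedo with the observation
    d = np.ones_like(I)                  # init depth
    for _ in range(iters):
        t = np.exp(-beta * d)
        r = rho * t + A * (1.0 - t) - I  # residual of the formation model
        rho = np.clip(rho - lr * r * t, 0.0, 1.0)      # gradient step on albedo
        lap = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
               np.roll(d, 1, 1) + np.roll(d, -1, 1) - 4.0 * d)
        d = np.clip(d - lr * (r * (rho - A) * (-beta * t) - lam * lap),
                    1e-3, None)          # gradient step on depth, smoothed
    return rho, d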


Asian Conference on Computer Vision | 2008

Robust Simultaneous Registration of Multiple Range Images

Ko Nishino; Katsushi Ikeuchi

The registration problem of multiple range images is fundamental for many applications that rely on precise geometric models. We propose a robust registration method that can align multiple range images comprised of a large number of data points. The proposed method minimizes an error function that is constructed to be global across all range images, providing the ability to diffusively distribute errors instead of accumulating them. The minimization strategy is designed to be efficient and robust against outliers by using a conjugate gradient search utilizing an M-estimator. Also, for “better” point correspondence search, the laser reflectance strength is used as an additional attribute of each 3D data point. For robustness against data noise, the framework is designed not to use secondary information, i.e., surface normals, in its error metric. We describe the details of the proposed method and present experimental results applying it to real data.
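
To make the M-estimator idea concrete, here is a hedged Python sketch of a single robust pairwise alignment step: Huber weights downweight outlier correspondences, and a weighted Kabsch solve gives the rigid motion. The paper itself minimizes one global error over all scans simultaneously with a conjugate gradient search, and uses laser reflectance for correspondences; none of that is reproduced here.

import numpy as np

def huber_weights(r, k):
    # Huber M-estimator weights: 1 inside k, k/|r| outside (downweights outliers).
    w = np.ones_like(r)
    mask = r > k
    w[mask] = k / r[mask]
    return w

def robust_icp_step(P, Q, k=0.05):
    # One robust alignment step of point set P (n, 3) onto Q (m, 3):
    # brute-force nearest neighbors, Huber weights, weighted Kabsch solve.
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(1)
    Qm = Q[nn]
    w = huber_weights(np.sqrt(d2.min(1)), k)
    cp = (w[:, None] * P).sum(0) / w.sum()
    cq = (w[:, None] * Qm).sum(0) / w.sum()
    H = ((P - cp) * w[:, None]).T @ (Qm - cq)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = cq - R @ cp
    return R, t  # apply as P @ R.T + t, then iterate to convergence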


International Conference on Computer Graphics and Interactive Techniques | 2004

Eyes for relighting

Ko Nishino; Shree K. Nayar

The combination of the cornea of an eye and a camera viewing the eye forms a catadioptric (mirror + lens) imaging system with a very wide field of view. We present a detailed analysis of the characteristics of this corneal imaging system. Anatomical studies have shown that the shape of a normal cornea (without major defects) can be approximated with an ellipsoid of fixed eccentricity and size. Using this shape model, we can determine the geometric parameters of the corneal imaging system from the image. Then, an environment map of the scene with a large field of view can be computed from the image. The environment map represents the illumination of the scene with respect to the eye. This use of an eye as a natural light probe is advantageous in many relighting scenarios. For instance, it enables us to insert virtual objects into an image such that they appear consistent with the illumination of the scene. The eye is a particularly useful probe when relighting faces. It allows us to reconstruct the geometry of a face by simply waving a light source in front of the face. Finally, in the case of an already captured image, eyes could be the only direct means for obtaining illumination information. We show how illumination computed from eyes can be used to replace a face in an image with another one. We believe that the eye not only serves as a useful tool for relighting but also makes relighting possible in situations where current approaches are hard to use.
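
Once the ellipsoidal cornea model fixes a surface normal per corneal pixel, the geometry reduces to mirror reflection. A minimal Python sketch (the semi-axes a and b are placeholders for the fixed anatomical eccentricity and size, not values from the paper):

import numpy as np

def ellipsoid_normal(p, a, b):
    # Outward normal of the ellipsoid x^2/a^2 + y^2/a^2 + z^2/b^2 = 1 at
    # surface point p; a and b are placeholder semi-axes standing in for
    # the anatomical shape parameters of the cornea.
    n = np.array([p[0] / a**2, p[1] / a**2, p[2] / b**2])
    return n / np.linalg.norm(n)

def reflect(view_dir, normal):
    # Mirror a viewing ray about the surface normal: r = v - 2 (v.n) n.
    # The reflected rays from all corneal pixels index an environment map
    # of the illumination around the eye.
    v = view_dir / np.linalg.norm(view_dir)
    return v - 2.0 * np.dot(v, normal) * normal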


European Conference on Computer Vision | 2008

Scale-Dependent/Invariant Local 3D Shape Descriptors for Fully Automatic Registration of Multiple Sets of Range Images

John Novatnack; Ko Nishino

Despite the ubiquitous use of range images in various computer vision applications, little attention has been paid to the size variation of the local geometric structures they capture. In this paper, we show that, through canonical geometric scale-space analysis, this geometric scale-variability embedded in a range image can be exploited as a rich source of discriminative information regarding the captured geometry. We extend previous work on geometric scale-space analysis of 3D models to analyze the scale-variability of a range image and to detect scale-dependent 3D features, that is, geometric features with their inherent scales. We derive novel local 3D shape descriptors that encode the local shape information within the inherent support region of each feature. We show that the resulting set of scale-dependent local shape descriptors can be used in an efficient hierarchical registration algorithm for aligning range images with the same global scale. We also show that local 3D shape descriptors invariant to the scale variation can be derived and used to align range images with significantly different global scales. Finally, we demonstrate that the scale-dependent/invariant local 3D shape descriptors can even be used to fully automatically register multiple sets of range images with varying global scales corresponding to multiple objects.
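
One way to picture the hierarchical use of scale, sketched in Python: each feature carries its intrinsic scale, and matching proceeds from the coarsest scale band downward, so coarse structure is paired before fine detail enters. The data layout and threshold are assumptions for illustration, not the paper's algorithm.

import numpy as np

def match_by_scale(feats_a, feats_b, max_dist=0.3):
    # feats_* are lists of (scale, descriptor) pairs. Matching pairs
    # descriptors only within the same scale band, starting from the
    # largest scales; thresholds and layout are illustrative.
    matches = []
    for s in sorted({s for s, _ in feats_a}, reverse=True):
        A = [(i, d) for i, (si, d) in enumerate(feats_a) if si == s]
        B = [(j, d) for j, (sj, d) in enumerate(feats_b) if sj == s]
        for i, da in A:
            if not B:
                break
            dists = [np.linalg.norm(da - db) for _, db in B]
            j = int(np.argmin(dists))
            if dists[j] < max_dist:
                matches.append((i, B[j][0]))
    return matches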


International Conference on Computer Vision | 2001

Determining reflectance parameters and illumination distribution from a sparse set of images for view-dependent image synthesis

Ko Nishino; Zhengyou Zhang; Katsushi Ikeuchi

A framework for photorealistic view-dependent image synthesis of a shiny object from a sparse set of images and a geometric model is proposed. Each image is aligned with the 3D model and decomposed into two reflectance-component images based on the intensity variation of object surface points. The view-independent surface reflection (diffuse reflection) is stored as one texture map. The view-dependent reflection (specular reflection) images are used to recover an initial approximation of the illumination distribution, and then a two-step numerical minimization algorithm utilizing a simplified Torrance-Sparrow reflection model is used to estimate the reflectance parameters and refine the illumination distribution. This provides a very compact representation of the data necessary to render synthetic images from arbitrary viewpoints. We have conducted experiments with real objects to synthesize photorealistic view-dependent images within the proposed framework.
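
For context, the simplified Torrance-Sparrow model referenced here is commonly written (in our notation, which may differ from the paper's) as

\[ I = k_d\, (\mathbf{n} \cdot \mathbf{l}) + \frac{k_s}{\mathbf{n} \cdot \mathbf{v}} \exp\!\left( -\frac{\alpha^2}{2\sigma^2} \right), \]

where k_d and k_s are the diffuse and specular coefficients, \sigma the surface roughness, and \alpha the angle between the surface normal and the half vector of the lighting and viewing directions. The two-step minimization fits these parameters against the separated reflection components.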


International Conference on Computer Vision | 2007

Scale-Dependent 3D Geometric Features

John Novatnack; Ko Nishino

Three-dimensional geometric data play fundamental roles in many computer vision applications. However, their scale-dependent nature, i.e. the relative variation in the spatial extents of local geometric structures, is often overlooked. In this paper we present a comprehensive framework for exploiting this 3D geometric scale variability. Specifically, we focus on detecting scale-dependent geometric features on triangular mesh models of arbitrary topology. The key idea of our approach is to analyze the geometric scale variability of a given 3D model in the scale-space of a dense and regular 2D representation of its surface geometry encoded by the surface normals. We derive novel corner and edge detectors, as well as an automatic scale selection method, all acting upon this representation to detect salient geometric features and determine their intrinsic scales. We evaluate the effectiveness and robustness of our method on a number of models of different topology. The results show that the resulting scale-dependent geometric feature set provides a reliable basis for constructing a rich but concise representation of the geometric structure at hand.
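
A compressed Python sketch of the scale-space construction: the surface geometry is assumed already unwrapped into a dense 2D normal map, which is Gaussian-smoothed at increasing scales, with a corner-like response read off the normal variation. The response measure is a simplified stand-in, not the paper's detector.

import numpy as np
from scipy.ndimage import gaussian_filter

def normal_map_scale_space(normals, sigmas=(1, 2, 4, 8)):
    # normals: (H, W, 3) unit surface normals unwrapped into 2D. Each scale
    # level smooths and renormalizes the map; the response below is a
    # corner-like measure of normal variation.
    responses = []
    for s in sigmas:
        sm = np.stack([gaussian_filter(normals[..., c], s) for c in range(3)], -1)
        sm /= np.linalg.norm(sm, axis=-1, keepdims=True) + 1e-12
        gy = np.diff(sm, axis=0, append=sm[-1:, :, :])
        gx = np.diff(sm, axis=1, append=sm[:, -1:, :])
        responses.append((gx ** 2).sum(-1) * (gy ** 2).sum(-1))
    return responses  # per-pixel argmax over levels suggests an intrinsic scale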


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2005

Light source position and reflectance estimation from a single view without the distant illumination assumption

Kenji Hara; Ko Nishino; Katsushi Ikeuchi

Several techniques have been developed for recovering reflectance properties of real surfaces under unknown illumination. However, in most cases, those techniques assume that the light sources are located at infinity, which cannot be applied safely to, for example, reflectance modeling of indoor environments. In this paper, we propose two methods to estimate the surface reflectance property of an object, as well as the position of a light source, from a single view without the distant illumination assumption, thus relaxing the conditions assumed by previous methods. Given a real image and a 3D geometric model of an object with specular reflection as inputs, the first method estimates the light source position by fitting to the Lambertian diffuse component while separating the specular and diffuse components using an iterative relaxation scheme. Our second method extends the first by taking as input a specular component image, acquired by analyzing multiple polarization images taken from a single view, thus removing the constraints on the diffuse reflectance property. This method simultaneously recovers the reflectance properties and the light source positions by optimizing the linearity of a log-transformed Torrance-Sparrow model. By estimating the object's reflectance property and the light source position, we can freely generate synthetic images of the target object under arbitrary lighting conditions, modifying not only the source direction but also the source-surface distance. Experimental results show the accuracy of our estimation framework.
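
The linearity exploited by the second method can be seen by log-transforming the specular term of the simplified Torrance-Sparrow model (notation as in the earlier entry; this is our paraphrase of the idea, not the paper's exact formulation):

\[ \log \left( I_s\, (\mathbf{n} \cdot \mathbf{v}) \right) = \log k_s - \frac{\alpha^2}{2\sigma^2}, \]

which is linear in \alpha^2. A hypothesized light source position determines \alpha at every surface point, so the position (and the roughness \sigma, from the slope) can be recovered by searching for the hypothesis under which the log-transformed observations are most linear.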


Computer Vision and Pattern Recognition | 1999

Eigen-texture method: Appearance compression based on 3D model

Ko Nishino; Yoichi Sato; Katsushi Ikeuchi

Image-based and model-based methods are two representative rendering methods for generating virtual images of objects from their real images. Extensive research on these two methods has been conducted in the computer vision and computer graphics communities. However, both methods still have several drawbacks when it comes to applying them to mixed reality, where we integrate such virtual images with real background images. To overcome these difficulties, we propose a new method which we refer to as the Eigen-Texture method. The proposed method samples appearances of a real object under various illumination and viewing conditions, and compresses them in the 2D coordinate system defined on the 3D model surface. The 3D model is generated from a sequence of range images. The Eigen-Texture method is practical because it does not require any detailed reflectance analysis of the object surface, and it benefits greatly from accurate 3D geometric models. This paper describes the method and reports on its implementation.
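
The compression itself amounts to a PCA over appearance samples taken in the surface's 2D coordinate system. A minimal Python sketch (array shapes and the number of components k are illustrative assumptions):

import numpy as np

def eigen_textures(textures, k=8):
    # textures: list of (H, W) appearance samples of one surface patch in
    # the 2D coordinate system defined on the 3D model surface, one per
    # illumination/viewing condition. PCA keeps the top-k eigen-textures.
    X = np.stack([t.ravel() for t in textures]).astype(float)
    mean = X.mean(0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                  # (k, H*W) eigen-textures
    coeffs = (X - mean) @ basis.T   # (n_samples, k) per-sample codes
    return mean, basis, coeffs

def reconstruct(mean, basis, coeffs_row, shape):
    # Rebuild one appearance sample from its k coefficients.
    return (mean + coeffs_row @ basis).reshape(shape)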

Collaboration


Dive into Ko Nishino's collaborations.

Top Co-Authors

Ryusuke Sagawa

National Institute of Advanced Industrial Science and Technology

Jun Takamatsu

Nara Institute of Science and Technology
