Publication


Featured research published by Hajime Nagahara.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Flexible Depth of Field Photography

Sujit Kuthirummal; Hajime Nagahara; Changyin Zhou; Shree K. Nayar

The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between depth of field and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which reduces the DOF. Also, today's cameras have DOFs that correspond to a single slab perpendicular to the optical axis. In this paper, we present an imaging system that enables one to control the DOF in new and powerful ways. Our approach is to vary the position and/or orientation of the image detector during the integration time of a single photograph. Even when the detector motion is very small (tens of microns), a large range of scene depths (several meters) is captured, both in and out of focus. Our prototype camera uses a micro-actuator to translate the detector along the optical axis during image integration. Using this device, we demonstrate four applications of flexible DOF. First, we describe extended DOF, where a large depth range is captured with a very wide aperture (low noise) but with nearly depth-independent defocus blur. Deconvolving a captured image with a single blur kernel gives an image with extended DOF and high SNR. Next, we show the capture of images with discontinuous DOFs. For instance, near and far objects can be imaged sharply, while objects in between are severely blurred. Third, we show that our camera can capture images with tilted DOFs (Scheimpflug imaging) without tilting the image detector. Finally, we demonstrate how our camera can be used to realize nonplanar DOFs. We believe flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics.
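The extended-DOF pipeline above hinges on the blur being nearly depth-independent, so restoration reduces to one deconvolution with a single kernel. Below is a minimal sketch of that step using standard Wiener deconvolution; the Gaussian stand-in for the integrated point-spread function and the SNR constant are illustrative assumptions, not the kernel derived in the paper.

```python
import numpy as np


def wiener_deconvolve(captured, kernel, snr=100.0):
    """Restore an image blurred by a known, depth-invariant kernel."""
    # Zero-pad the kernel to the image size and center it at the origin
    # so the FFT-based convolution model lines up with the image.
    pad = np.zeros_like(captured, dtype=np.float64)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    K = np.fft.fft2(pad)
    B = np.fft.fft2(captured.astype(np.float64))
    # Wiener filter: invert the kernel while damping weak frequencies.
    W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * B))


# Illustrative stand-in for the integrated focus-sweep PSF (assumption).
x, y = np.meshgrid(np.arange(-7, 8), np.arange(-7, 8))
psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * 2.0 ** 2))
psf /= psf.sum()
```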


Pattern Recognition | 2014

The largest inertial sensor-based gait database and performance evaluation of gait-based personal authentication

Thanh Trung Ngo; Yasushi Makihara; Hajime Nagahara; Yasuhiro Mukaigawa; Yasushi Yagi

This paper presents the largest inertial sensor-based gait database in the world, which is made open to the research community, and its application to a statistically reliable performance evaluation for gait-based personal authentication. We construct several datasets from both the accelerometers and gyroscopes of three inertial measurement units and a smartphone placed around the waist of each subject, which include up to 744 subjects (389 males and 355 females) with ages ranging from 2 to 78 years. The database has several advantages: a large number of subjects with a balanced gender ratio, and variations in sensor type, sensor location, and ground slope condition. Therefore, we can reliably analyze the dependence of gait authentication performance on a number of factors, such as gender, age group, sensor type, ground condition, and sensor location. The results with the latest existing authentication methods provide several insights into these factors.

Highlights:
- We present the world's largest inertial sensor-based gait database to the community.
- Based on the database, females show better recognition performance than males.
- People show the best recognition performance in their twenties.
- An accelerometer yields better recognition performance than a gyroscope.
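Authentication performance in studies like this is commonly summarized by an equal error rate (EER), the operating point where false accepts equal false rejects. As a hedged sketch of that evaluation step, the snippet below computes an EER from synthetic genuine and impostor match scores; the paper's own protocol and metrics may differ.

```python
import numpy as np


def equal_error_rate(genuine, impostor):
    """EER given match scores, where higher means more similar."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor >= t)  # false accept rate
        frr = np.mean(genuine < t)    # false reject rate
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer


# Synthetic placeholder scores; real scores would come from matching
# gait signals of the same subject (genuine) and different subjects.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)
impostor = rng.normal(0.5, 0.1, 1000)
print(f"EER = {equal_error_rate(genuine, impostor):.3f}")
```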


European Conference on Computer Vision | 2010

Programmable aperture camera using LCoS

Hajime Nagahara; Changyin Zhou; Takuya Watanabe; Hiroshi Ishiguro; Shree K. Nayar

Since the 1960s, aperture patterns have been studied extensively, and a variety of coded apertures have been proposed for various applications, including extended depth of field, defocus deblurring, depth from defocus, and light field acquisition. Research has shown that the optimal aperture pattern can differ greatly depending on the application, imaging conditions, or scene content. In addition, many coded aperture techniques require the aperture pattern to be changed over time during capture. As a result, it is often necessary to have a programmable aperture camera whose pattern can be dynamically changed as needed in order to capture more useful information. In this paper, we propose a programmable aperture camera using a Liquid Crystal on Silicon (LCoS) device. This design affords a high-contrast, high-resolution aperture with relatively low light loss, and enables one to change the pattern at a reasonably high frame rate. We build a prototype camera and comprehensively evaluate its capabilities and limitations through experiments. We also demonstrate two coded aperture applications: light field acquisition and defocus deblurring.
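The value of a programmable aperture is easiest to see in simulation: the defocus point-spread function takes the shape of the aperture code, so changing the pattern changes which spatial frequencies a defocused capture preserves. The sketch below simulates such a capture under an arbitrary 3x3 binary code; the pattern and blur model are illustrative assumptions, not the optimized codes used with the LCoS prototype.

```python
import numpy as np
from numpy.fft import fft2, ifft2


def defocus_psf(pattern, blur_size):
    """Scale a binary aperture code up to the defocus blur diameter."""
    reps = max(1, blur_size // pattern.shape[0])
    psf = np.kron(pattern.astype(np.float64), np.ones((reps, reps)))
    return psf / psf.sum()


def capture(scene, psf):
    """Simulate a coded-aperture capture: convolve the scene with the PSF."""
    pad = np.zeros_like(scene, dtype=np.float64)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), (0, 1))
    return np.real(ifft2(fft2(scene) * fft2(pad)))


# An arbitrary 3x3 binary code for illustration; the LCoS device in the
# paper can display much finer patterns and switch them between frames.
pattern = np.array([[1, 0, 1],
                    [0, 1, 0],
                    [1, 1, 1]])
```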


Computer Vision and Pattern Recognition | 2012

Evaluation report of integrated background modeling based on spatio-temporal features

Yosuke Nonaka; Atsushi Shimada; Hajime Nagahara; Rin-ichiro Taniguchi

We report evaluation results for an integrated background modeling method based on spatio-temporal features. The method consists of three complementary approaches: pixel-level, region-level, and frame-level background modeling. The pixel-level model approximates the background with a probability density function (PDF), estimated non-parametrically using Parzen density estimation. The region-level model is based on the evaluation of the local texture around each pixel while reducing the effects of variations in lighting. The frame-level model detects sudden, global changes in image brightness and estimates the current background image from the input image by referring to a background model image. Objects are then extracted by background subtraction. Fusing these approaches realizes robust object detection under varying illumination.
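As a rough illustration of the pixel-level component, the sketch below estimates the background intensity distribution at each pixel non-parametrically with Gaussian Parzen kernels over a window of recent frames, and flags low-likelihood pixels as foreground. The window length, bandwidth, and threshold are illustrative assumptions rather than the paper's settings.

```python
import numpy as np


def background_probability(history, frame, bandwidth=10.0):
    """Parzen estimate of p(intensity | background) at every pixel.

    history: (N, H, W) stack of recent frames kept per pixel.
    frame:   (H, W) current frame.
    """
    diff = frame[None, :, :].astype(np.float64) - history
    kernels = np.exp(-0.5 * (diff / bandwidth) ** 2)
    return kernels.mean(axis=0) / (bandwidth * np.sqrt(2.0 * np.pi))


def foreground_mask(history, frame, threshold=1e-3):
    """Pixels whose background likelihood is low are foreground."""
    return background_probability(history, frame) < threshold
```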


Pattern Recognition | 2015

Similar gait action recognition using an inertial sensor

Trung Thanh Ngo; Yasushi Makihara; Hajime Nagahara; Yasuhiro Mukaigawa; Yasushi Yagi

This paper tackles the challenging problem of inertial sensor-based recognition of similar gait action classes (such as walking on flat ground, up/down stairs, and up/down a slope). We address three drawbacks of existing methods in the case of gait actions: action signal segmentation, sensor orientation inconsistency, and the recognition of similar action classes. First, to robustly segment the walking action under drastic changes in factors such as speed, intensity, style, and sensor orientation across participants, we rely on the likelihood of a heel strike computed using a scale-space technique. Second, to solve the problem of 3D sensor orientation inconsistency when matching signals captured at different sensor orientations, we correct the sensor's tilt before applying an orientation-compensative matching algorithm to resolve the remaining angle. Third, to accurately classify similar actions, we incorporate the interclass relationship into the feature vector used for recognition. In experiments, the proposed algorithms were positively validated with 460 participants (the largest number in the research field) and five similar gait action classes (namely walking on flat ground, up/down stairs, and up/down a slope) captured by three inertial sensors at different positions (center, left, and right) and orientations on the participant's waist.

Highlights:
- An action recognition algorithm for similar gait actions using an inertial sensor.
- A robust period segmentation based on the likelihood of a heel strike is presented.
- The proposed method is robust to variations in sensor orientation.
- The interclass relationship improves recognition accuracy significantly.
- The accuracy is more than 91% on a very large database of 460 subjects.
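One way to read the orientation-compensation step is that, once the sensor's tilt has been corrected, only a rotation about the vertical (gravity) axis remains unknown, and matching can search over that angle. The sketch below implements that reading for 3-axis signals; it is an illustrative simplification, not the paper's exact matching algorithm.

```python
import numpy as np


def rotate_about_z(signal, theta):
    """Rotate a (T, 3) inertial signal about the vertical axis."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return signal @ R.T


def orientation_compensated_distance(probe, gallery, steps=360):
    """Minimum L2 distance over candidate yaw angles (tilt is assumed
    already corrected, so z is aligned with gravity)."""
    angles = np.linspace(0.0, 2.0 * np.pi, steps, endpoint=False)
    return min(np.linalg.norm(rotate_about_z(probe, a) - gallery)
               for a in angles)
```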


Computer Vision and Pattern Recognition | 2013

Light Field Distortion Feature for Transparent Object Recognition

Kazuki Maeno; Hajime Nagahara; Atsushi Shimada; Rin-ichiro Taniguchi

Current object-recognition algorithms use local features, such as the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), to visually learn to recognize objects. These approaches, however, cannot be applied to transparent objects made of glass or plastic, because such objects take on the visual features of the background, and their appearance varies dramatically with changes in the scene behind them. In transmitting light, transparent objects have the unique characteristic of distorting the background by refraction. In this paper, we use a single-shot light field image as input and model the distortion of the light field caused by the refractive properties of a transparent object. We propose a new feature, called the light field distortion (LFD) feature, for identifying transparent objects, and incorporate it into a bag-of-features approach for recognition. We evaluated its performance in both laboratory and real settings.
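The pooling stage of the bag-of-features approach is generic, even though the LFD descriptors themselves are specific to the paper. Below is a hedged sketch of that stage: descriptors (assumed given) are quantized against an offline-learned codebook and pooled into a normalized histogram; all names and shapes here are illustrative.

```python
import numpy as np


def bag_of_features(descriptors, codebook):
    """Pool LFD descriptors into a normalized codeword histogram.

    descriptors: (N, D) distortion features from one light field image
                 (assumed given; their extraction is the paper's core).
    codebook:    (K, D) cluster centers learned offline, e.g. by k-means.
    """
    # Assign each descriptor to its nearest codeword.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(np.float64)
    return hist / max(hist.sum(), 1.0)  # L1-normalized object signature
```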


Computer Vision and Pattern Recognition | 2003

Wide Field of View Head Mounted Display for Tele-presence with an Omnidirectional Image Sensor

Hajime Nagahara; Yasushi Yagi; Masahiko Yachida

Recently, omnidirectional image sensors have been applied to tele-presence systems, because they can capture images with a large field of view at video rate. Meanwhile, head-mounted displays (HMDs) have generally been used as personal displays for virtual reality applications such as tele-presence. However, almost all HMDs suffer from a field of view (FOV) of about 60 degrees horizontally, far narrower than that of human vision. This limitation reduces the sense of reality and immersion in such applications. In this paper, we propose a highly immersive visualization system that can display a 180-degree horizontal view using a new catadioptric HMD and an omnidirectional image sensor. The HMD consists of ellipsoidal and hyperboloidal curved mirrors and can display a 180-degree horizontal view.


International Joint Conference on Biometrics | 2011

Phase registration in a gallery improving gait authentication

Ngo Thanh Trung; Yasushi Makihara; Hajime Nagahara; Ryusuke Sagawa; Yasuhiro Mukaigawa; Yasushi Yagi

In this paper, we propose a method for inertial sensor-based gait authentication using inter-period phase registration of an owner's gallery. Although constructing a gallery of phase-registered gait patterns is important for gait authentication, previous implementations relied on simple period-detection methods based on heuristic cues such as local peaks/valleys or the local auto-correlation of the gait signals. We therefore propose to improve the gait gallery by incorporating a phase registration technique that globally optimizes inter-period phase consistency in an energy minimization framework. However, the previous phase registration technique suffers from phase distortion due to ambiguities in the combination of a periodic signal function and a phase evolution function. We introduce a linear phase-evolution prior to construct an undistorted gait signal for better matching performance. Experiments using real gait signals from 32 subjects show that the proposed methods outperform the latest methods in the field.
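For contrast with the global energy-minimization approach, the sketch below shows the kind of simple auto-correlation-based period detection that the paper identifies as a weakness of earlier work. The lag bounds are illustrative assumptions for a waist-mounted signal.

```python
import numpy as np


def gait_period_by_autocorrelation(signal, min_lag=30, max_lag=200):
    """Estimate the gait period (in samples) of a 1D sensor signal."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..T-1
    ac /= ac[0]  # normalize so that lag 0 has correlation 1
    # The strongest peak inside the plausible lag range is taken as
    # the period; real gait signals make this heuristic fragile.
    return min_lag + int(np.argmax(ac[min_lag:max_lag]))
```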


International Conference on Industrial Electronics, Control and Instrumentation | 2000

Super-resolution from an omnidirectional image sequence

Hajime Nagahara; Yasushi Yagi; Masahiko Yachida

The authors describe an omnidirectional image sensor, called HyperOmniVision, that can observe a 360-degree field of view and can easily transform an input image into a common camera image or a panoramic image. However, it has an intrinsic problem: its angular resolution is lower than that of a conventional video camera. The authors propose a super-resolution method for HyperOmniVision, which fuses consecutive images obtained by the turning motion of HyperOmniVision. They discuss the optimization of both the optics and the processing using simulation results.
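The general fusion idea, aligning sub-pixel-shifted low-resolution frames on a finer grid and averaging, can be sketched as classic shift-and-add super-resolution. The version below assumes known integer shifts on the high-resolution grid for brevity; the actual method must estimate motion under the mirror's image geometry.

```python
import numpy as np


def shift_and_add(frames, shifts, scale=2):
    """Fuse low-res frames with known shifts onto a finer grid.

    frames: list of (H, W) low-resolution images.
    shifts: list of integer (dy, dx) offsets in high-res pixels,
            each in the range [0, scale).
    """
    H, W = frames[0].shape
    acc = np.zeros((H * scale, W * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = np.arange(H) * scale + dy
        xs = np.arange(W) * scale + dx
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1  # leave never-observed grid cells at zero
    return acc / cnt
```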


Computer Vision and Image Understanding | 2014

Object detection based on spatiotemporal background models

Satoshi Yoshinaga; Atsushi Shimada; Hajime Nagahara; Rin-ichiro Taniguchi

We present a robust background model for object detection and its performance evaluation using the database of the Background Models Challenge (BMC). Background models should detect foreground objects robustly against background changes, such as “illumination changes” and “dynamic changes”. In this paper, we propose two types of spatiotemporal background modeling frameworks that can adapt to illumination and dynamic changes in the background. Spatial information can be used to absorb the effects of illumination changes, because these changes affect not only a target pixel but also its neighboring pixels. Temporal information, in turn, is useful for handling dynamic changes, which are observed repeatedly. To establish the spatiotemporal background model, our frameworks respectively model an illumination-invariant feature and the similarity of intensity changes among a set of pixels using statistical models. Experimental results on the BMC database show that our models detect foreground objects robustly against background changes.
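One simple way spatial information can absorb illumination changes, in the spirit of the illumination-invariant feature above, is to encode each pixel by the ordering of its intensity relative to its neighbors: a global brightness change rescales all intensities together and leaves the ordering intact. The sketch below illustrates this principle with an LBP-like code; it is not the paper's exact statistical model.

```python
import numpy as np


def sign_texture_feature(image,
                         offsets=((0, 1), (1, 0), (1, 1), (-1, 1))):
    """Per-pixel binary code from intensity comparisons with neighbors."""
    img = image.astype(np.float64)
    code = np.zeros(image.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        shifted = np.roll(img, (-dy, -dx), axis=(0, 1))
        code |= (shifted > img).astype(np.uint8) << bit
    # Comparing codes across frames flags pixels whose local structure,
    # not just brightness, has changed, i.e., real foreground.
    return code
```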

Collaboration


Dive into Hajime Nagahara's collaborations.

Top Co-Authors

Yasuhiro Mukaigawa

Nara Institute of Science and Technology

Ryusuke Sagawa

Systems Research Institute
