Publication


Featured research published by James Wilson Nash.


Proceedings of SPIE | 2012

Unassisted 3D camera calibration

Kalin Mitkov Atanassov; Vikas Ramachandra; James Wilson Nash; Sergio Goma

With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted calibration, i.e. calibration on arbitrary scenes. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm's performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
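The core of the approach is ordinary keypoint matching between the left and right views followed by robust estimation of their geometric differences. The sketch below shows one way this could look with OpenCV; the ORB detector, the ratio-test threshold, the MIN_MATCHES cutoff, and the restriction to roll and scale (the paper also estimates pitch and yaw) are assumptions for illustration, not the authors' implementation.

```python
import cv2
import numpy as np

MIN_MATCHES = 40  # assumed threshold for a "sufficiently rich" keypoint constellation

def estimate_roll_scale_and_residual(left_gray, right_gray):
    """Return (roll_deg, scale, residual_vertical_disparity) or None to discard the frame."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)
    if des_l is None or des_r is None:
        return None  # discard frame: no usable keypoints

    # Ratio-test matching to suppress erroneous correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des_l, des_r, k=2)
    good = [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
    if len(good) < MIN_MATCHES:
        return None  # discard frame: constellation not rich enough

    pts_l = np.float32([kp_l[m.queryIdx].pt for m in good])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in good])

    # Rotation (roll) + scale between the views, with RANSAC outlier rejection.
    M, _ = cv2.estimateAffinePartial2D(pts_r, pts_l, method=cv2.RANSAC)
    if M is None:
        return None
    scale = float(np.hypot(M[0, 0], M[1, 0]))
    roll_deg = float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))

    # Residual vertical disparity after applying the estimated correction.
    pts_r_corr = cv2.transform(pts_r.reshape(-1, 1, 2), M).reshape(-1, 2)
    residual_vd = float(np.median(np.abs(pts_l[:, 1] - pts_r_corr[:, 1])))
    return roll_deg, scale, residual_vd
```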


Proceedings of SPIE | 2014

Structured light 3D depth map enhancement and gesture recognition using image content adaptive filtering

Vikas Ramachandra; James Wilson Nash; Kalin Mitkov Atanassov; Sergio Goma

A structured-light system for depth estimation is a type of 3D active sensor that consists of a structured-light projector, which projects an illumination pattern on the scene (e.g. a mask with vertical stripes), and a camera which captures the illuminated scene. Based on the received patterns, depths of different regions in the scene can be inferred. In this paper, we use side information in the form of image structure to enhance the depth map. This side information is obtained from the received light pattern image reflected by the scene itself. The processing steps run in real time. This post-processing stage, in the form of depth map enhancement, can be used for better hand gesture recognition, as is illustrated in this paper.
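Using image structure as side information is close in spirit to joint (cross) bilateral filtering, where the captured pattern image guides the smoothing of the depth map so that depth edges stay aligned with image edges. The sketch below uses OpenCV's joint bilateral filter (from opencv-contrib-python) as a stand-in for the authors' content-adaptive filter; the parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def enhance_depth(depth_map, pattern_image, diameter=9, sigma_color=25.0, sigma_space=7.0):
    """depth_map: float32 depth image; pattern_image: uint8 grayscale capture of the scene."""
    depth32 = depth_map.astype(np.float32)
    guide = pattern_image.astype(np.uint8)
    # Each depth pixel is averaged only with neighbours that look similar in the
    # guide image, so smoothing does not bleed across object boundaries
    # (e.g. the silhouette of a hand in a gesture-recognition pipeline).
    return cv2.ximgproc.jointBilateralFilter(guide, depth32, diameter, sigma_color, sigma_space)
```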


Proceedings of SPIE | 2013

Digital ruler: real-time object tracking and dimension measurement using stereo cameras

James Wilson Nash; Kalin Mitkov Atanassov; Sergio Goma; Vikas Ramachandra; Hasib Ahmed Siddiqui

Stereo metrology involves obtaining spatial estimates of an object's length or perimeter using the disparity between boundary points. True 3D scene information is required to extract length measurements of an object's projection onto the 2D image plane. In stereo vision, the disparity measurement is highly sensitive to object distance, baseline distance, calibration errors, and relative movement of the left and right demarcation points between successive frames. Therefore a tracking filter is necessary to reduce position error and improve the accuracy of the length measurement to a useful level. A Cartesian-coordinate extended Kalman filter (EKF) is designed based on the canonical equations of stereo vision. This filter represents a simple reference design that has not seen much exposure in the literature. A second filter, formulated in a modified sensor-disparity (DS) coordinate system, is also presented and shown to exhibit lower errors in a simulated experiment.
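The "canonical equations of stereo vision" referenced above are the standard pinhole back-projection relations: depth Z = f·B/d for focal length f, baseline B, and disparity d, with X and Y recovered by similar triangles. The sketch below shows a length measurement from two boundary points under those equations; the Kalman-type tracking of the points, which is the paper's actual contribution, is omitted, and all parameter values in the example are made up.

```python
import numpy as np

def backproject(u, v, disparity, f, baseline, cx, cy):
    """Pinhole stereo: depth Z = f * B / d, then X, Y from similar triangles."""
    Z = f * baseline / disparity
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

def stereo_length(p_left_1, d1, p_left_2, d2, f, baseline, cx, cy):
    """Distance between two boundary points given their left-image pixel
    coordinates (u, v) and disparities d1, d2 (pixels); result is in baseline units."""
    P1 = backproject(*p_left_1, d1, f, baseline, cx, cy)
    P2 = backproject(*p_left_2, d2, f, baseline, cx, cy)
    return float(np.linalg.norm(P1 - P2))

# Illustrative numbers: f = 1400 px, baseline = 60 mm, principal point at image centre.
length_mm = stereo_length((812, 540), 35.0, (1120, 548), 34.0,
                          f=1400.0, baseline=60.0, cx=960.0, cy=540.0)
```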


Proceedings of SPIE | 2013

Temporal image stacking for noise reduction and dynamic range improvement

Kalin Mitkov Atanassov; James Wilson Nash; Sergio Goma; Vikas Ramachandra; Hasib Ahmed Siddiqui

The dynamic range of an imager is determined by the ratio of the pixel well capacity to the noise floor. As the scene dynamic range becomes larger than the imager dynamic range, the choices are to saturate some parts of the scene or “bury” others in noise. In this paper we propose an algorithm that produces high dynamic range images by “stacking” sequentially captured frames which reduces the noise and creates additional bits. The frame stacking is done by frame alignment subject to a projective transform and temporal anisotropic diffusion. The noise sources contributing to the noise floor are the sensor heat noise, the quantization noise, and the sensor fixed pattern noise. We demonstrate that by stacking images the quantization and heat noise are reduced and the decrease is limited only by the fixed pattern noise. As the noise is reduced, the resulting cleaner image enables the use of adaptive tone mapping algorithms which render HDR images in an 8-bit container without significant noise increase.
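Conceptually, stacking works because registered frames share the same signal while heat and quantization noise are largely uncorrelated, so an N-frame average lowers that noise by roughly √N and recovers sub-LSB precision. The sketch below shows a simple version of the pipeline: homography-based (projective) alignment to a reference frame followed by floating-point accumulation. The ORB/RANSAC alignment and the plain mean are assumptions for illustration; the paper uses temporal anisotropic diffusion rather than a simple average.

```python
import cv2
import numpy as np

def stack_frames(frames):
    """frames: list of same-sized uint8 grayscale images; returns a float64 average."""
    reference = frames[0]
    orb = cv2.ORB_create(nfeatures=3000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)

    accumulator = reference.astype(np.float64)
    count = 1
    for frame in frames[1:]:
        kp, des = orb.detectAndCompute(frame, None)
        if des is None:
            continue
        matches = matcher.match(des, des_ref)
        if len(matches) < 20:
            continue  # skip frames that cannot be aligned reliably
        src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        if H is None:
            continue
        warped = cv2.warpPerspective(frame, H, reference.shape[::-1])
        accumulator += warped
        count += 1

    # Returned as float to keep the gained precision for a later tone-mapping
    # stage, instead of re-quantizing straight back to 8 bits.
    return accumulator / count
```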


Proceedings of SPIE | 2013

Self-calibration of depth sensing systems based on structured-light 3D

Vikas Ramachandra; James Wilson Nash; Kalin Mitkov Atanassov; Sergio Goma

A structured-light system for depth estimation is a type of 3D active sensor that consists of a structured-light projector, which projects a light pattern on the scene (e.g. a mask with vertical stripes), and a camera which captures the illuminated scene. Based on the received patterns, depths of different regions in the scene can be inferred. For this setup to work optimally, the camera and projector must be aligned such that the projection image plane and the image capture plane are parallel, i.e. free of any relative rotations (yaw, pitch and roll). In reality, due to mechanical placement inaccuracy, the projector-camera pair will not be aligned. In this paper we present a calibration process which measures the misalignment. We also estimate a scale factor to account for differences in the focal lengths of the projector and the camera. The three angles of rotation can be found by introducing a plane in the field of view of the camera and illuminating it with the projected light patterns. An image of this plane is captured and processed to obtain the relative pitch, yaw and roll angles, as well as the scale, through an iterative process. This algorithm leverages the effects of the misalignment/rotation angles on the depth map of the plane image.
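A coarse way to see how plane observations expose misalignment: if the projector-camera pair were perfectly aligned, the recovered depth of a fronto-parallel calibration plane would be flat, so residual slopes in the depth map hint at relative pitch and yaw. The least-squares plane fit below is only a simplified stand-in for the paper's iterative estimation of pitch, yaw, roll, and scale; the angle interpretation assumes consistent pixel and depth units.

```python
import numpy as np

def estimate_tilt_from_plane_depth(depth_map):
    """depth_map: HxW array of depths measured on the calibration plane."""
    h, w = depth_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    valid = np.isfinite(depth_map) & (depth_map > 0)

    # Least-squares fit of z = a*x + b*y + c over valid pixels.
    A = np.column_stack([xs[valid], ys[valid], np.ones(valid.sum())])
    coeffs, *_ = np.linalg.lstsq(A, depth_map[valid], rcond=None)
    a, b, c = coeffs

    # Slopes along the image axes indicate how the recovered plane is tilted,
    # i.e. residual yaw/pitch-like misalignment; a and b are in depth units per
    # pixel, so true angles would also require the pixel pitch and focal length.
    yaw_like = float(np.degrees(np.arctan(a)))
    pitch_like = float(np.degrees(np.arctan(b)))
    return yaw_like, pitch_like, float(c)
```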


Archive | 2013

Design of code in affine-invariant spatial mask

Kalin Mitkov Atanassov; James Wilson Nash; Vikas Ramachandra; Sergiu Radu Goma


Archive | 2013

Transmission of affine-invariant spatial mask for active depth sensing

Kalin Mitkov Atanassov; James Wilson Nash; Vikas Ramachandra; Sergiu Radu Goma


Archive | 2014

Local adaptive histogram equalization

Kalin Mitkov Atanassov; James Wilson Nash; Stephen Michael Verrall; Hasib Ahmed Siddiqui


Archive | 2013

Systems and methods for multiview metrology

Kalin Mitkov Atanassov; Vikas Ramachandra; James Wilson Nash; Sergiu Radu Goma


Archive | 2013

Reception of Affine-Invariant Spatial Mask for Active Depth Sensing

Kalin Mitkov Atanassov; James Wilson Nash; Vikas Ramachandra; Sergiu Radu Goma
