
Publications


Featured research published by Neal Wadhwa.


International Conference on Computer Graphics and Interactive Techniques | 2013

Phase-based video motion processing

Neal Wadhwa; Michael Rubinstein; William T. Freeman

We introduce a technique to manipulate small movements in videos based on an analysis of motion in complex-valued image pyramids. Phase variations of the coefficients of a complex-valued steerable pyramid over time correspond to motion, and can be temporally processed and amplified to reveal imperceptible motions, or attenuated to remove distracting changes. This processing does not involve the computation of optical flow, and in comparison to the previous Eulerian Video Magnification method it supports larger amplification factors and is significantly less sensitive to noise. These improved capabilities broaden the set of applications for motion processing in videos. We demonstrate the advantages of this approach on synthetic and natural video sequences, and explore applications in scientific analysis, visualization and video enhancement.
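
As a rough illustration of the idea in this abstract, the sketch below band-passes and amplifies the phase of a single complex sub-band over time. It is not the authors' code: the paper uses a full complex steerable pyramid over all scales and orientations, while here one frequency-domain Gabor filter (the hypothetical gabor_response helper) stands in for a single pyramid level, and the result is one magnified sub-band rather than a reconstructed video.

```python
# Minimal, single-sub-band sketch of phase-based motion magnification.
# Assumption: one complex Gabor filter stands in for one level/orientation
# of the complex steerable pyramid used in the paper.
import numpy as np
from scipy.signal import butter, filtfilt

def gabor_response(frames, freq=0.1, theta=0.0):
    """Complex sub-band response of frames (T, H, W) via a frequency-domain Gabor filter."""
    T, H, W = frames.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    fc = (freq * np.cos(theta), freq * np.sin(theta))   # center spatial frequency
    bump = np.exp(-((fx - fc[0]) ** 2 + (fy - fc[1]) ** 2) / (2 * (freq / 2) ** 2))
    return np.fft.ifft2(np.fft.fft2(frames, axes=(1, 2)) * bump, axes=(1, 2))

def magnify_subband(frames, fps, f_lo, f_hi, alpha):
    sub = gabor_response(frames.astype(np.float64))
    phase = np.unwrap(np.angle(sub), axis=0)            # smooth phase trajectories over time
    b, a = butter(2, [f_lo / (fps / 2), f_hi / (fps / 2)], btype="band")
    band = filtfilt(b, a, phase, axis=0)                # temporal band-pass of local phase
    # amplify only the band-passed phase component; amplitude is left untouched
    return np.abs(sub) * np.exp(1j * (phase + alpha * band))
```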


International Conference on Computer Graphics and Interactive Techniques | 2014

The visual microphone: passive recovery of sound from video

Abe Davis; Michael Rubinstein; Neal Wadhwa; Gautham J. Mysore; William T. Freeman

When sound hits an object, it causes small vibrations of the object's surface. We show how, using only high-speed video of the object, we can extract those minute vibrations and partially recover the sound that produced them, allowing us to turn everyday objects---a glass of water, a potted plant, a box of tissues, or a bag of chips---into visual microphones. We recover sounds from high-speed footage of a variety of objects with different properties, and use both real and simulated data to examine some of the factors that affect our ability to visually recover sound. We evaluate the quality of recovered sounds using intelligibility and SNR metrics and provide input and recovered audio samples for direct comparison. We also explore how to leverage the rolling shutter in regular consumer cameras to recover audio from standard frame-rate videos, and use the spatial resolution of our method to visualize how sound-related vibrations vary over an object's surface, which we can use to recover the vibration modes of an object.
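
A hedged sketch of the recovery step this abstract describes: given complex sub-band responses for each frame (for example, from a filter like the Gabor sketch above), spatially average the local phase deviations, weighting by squared amplitude so well-textured regions dominate, and high-pass the result to suppress slow camera drift. The weighting scheme and cutoff are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: one motion sample per frame from spatially averaged phase shifts.
import numpy as np
from scipy.signal import butter, filtfilt

def recover_sound(subband, fps, cutoff_hz=20.0):
    """subband: complex (T, H, W) responses; returns a 1-D signal sampled at fps Hz."""
    phase = np.unwrap(np.angle(subband), axis=0)
    weights = np.abs(subband) ** 2                     # trust strongly textured pixels
    num = (weights * (phase - phase[0])).sum(axis=(1, 2))
    sig = num / weights.sum(axis=(1, 2))               # amplitude-weighted average per frame
    b, a = butter(2, cutoff_hz / (fps / 2), btype="high")
    return filtfilt(b, a, sig)                         # drop slow drift, keep the audio band
```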


International Conference on Computational Photography | 2014

Riesz pyramids for fast phase-based video magnification

Neal Wadhwa; Michael Rubinstein; Frederic Durand; William T. Freeman

We present a new compact image pyramid representation, the Riesz pyramid, that can be used for real-time phase-based motion magnification. Our new representation is less overcomplete than even the smallest two-orientation, octave-bandwidth complex steerable pyramid, and can be implemented using compact, efficient linear filters in the spatial domain. Motion-magnified videos produced with this new representation are of comparable quality to those produced with the complex steerable pyramid. When used with phase-based video magnification, the Riesz pyramid phase-shifts image features along only their dominant orientation, rather than in every orientation as the complex steerable pyramid does.
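
The Riesz transform itself is simple to state in the frequency domain, which is part of what makes the pyramid compact. Below is a small numpy sketch, under the assumption of a single band-passed grayscale image: the two Riesz components, together with the input, yield the local amplitude, quaternionic phase, and dominant orientation that let the method phase-shift features along that orientation only.

```python
# Sketch of the Riesz transform and the local amplitude/phase/orientation
# it induces, for one band-passed real-valued 2-D image I.
import numpy as np

def riesz_transform(I):
    H, W = I.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    r = np.sqrt(fx ** 2 + fy ** 2)
    r[0, 0] = 1.0                                     # avoid division by zero at DC
    F = np.fft.fft2(I)
    R1 = np.real(np.fft.ifft2(-1j * fx / r * F))      # x component
    R2 = np.real(np.fft.ifft2(-1j * fy / r * F))      # y component
    return R1, R2

def local_structure(I, R1, R2):
    amplitude = np.sqrt(I ** 2 + R1 ** 2 + R2 ** 2)
    phase = np.arctan2(np.sqrt(R1 ** 2 + R2 ** 2), I) # quaternionic local phase
    orientation = np.arctan2(R2, R1)                  # dominant local orientation
    return amplitude, phase, orientation
```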


European Conference on Computer Vision | 2014

Refraction Wiggles for Measuring Fluid Depth and Velocity from Video

Tianfan Xue; Michael Rubinstein; Neal Wadhwa; Anat Levin; William T. Freeman

We present principled algorithms for measuring the velocity and 3D location of refractive fluids, such as hot air or gas, from natural videos with textured backgrounds. Our main observation is that intensity variations related to movements of refractive fluid elements, as observed by one or more video cameras, are consistent over small space-time volumes. We call these intensity variations “refraction wiggles”, and use them as features for tracking and stereo fusion to recover the fluid motion and depth from video sequences. We give algorithms for 1) measuring the (2D, projected) motion of refractive fluids in monocular videos, and 2) recovering the 3D position of points on the fluid from stereo cameras. Unlike pixel intensities, wiggles can be extremely subtle and cannot be known with the same level of confidence for all pixels, depending on factors such as background texture and physical properties of the fluid. We thus carefully model uncertainty in our algorithms for robust estimation of fluid motion and depth. We show results on controlled sequences, synthetic simulations, and natural videos. Different from previous approaches for measuring refractive flow, our methods operate directly on videos captured with ordinary cameras, do not require auxiliary sensors, light sources or designed backgrounds, and can correctly detect the motion and location of refractive fluids even when they are invisible to the naked eye.
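
To make the uncertainty modeling concrete, here is a toy flow estimator in the paper's spirit, though not its actual formulation: windowed Lucas-Kanade flow between two frames, returning the smallest eigenvalue of the local structure tensor as a per-pixel confidence, so textureless regions where wiggles cannot be measured reliably receive low weight. The window size and the confidence heuristic are assumptions.

```python
# Sketch: Lucas-Kanade flow plus a per-pixel confidence from the structure tensor.
import numpy as np
from scipy.ndimage import uniform_filter

def lk_flow_with_confidence(f0, f1, win=7):
    Iy, Ix = np.gradient(f0.astype(np.float64))        # spatial gradients of frame 0
    It = f1.astype(np.float64) - f0                    # temporal difference
    Axx = uniform_filter(Ix * Ix, win); Axy = uniform_filter(Ix * Iy, win)
    Ayy = uniform_filter(Iy * Iy, win)
    bx = -uniform_filter(Ix * It, win); by = -uniform_filter(Iy * It, win)
    det = Axx * Ayy - Axy ** 2
    safe = np.where(np.abs(det) < 1e-9, 1e-9, det)     # guard singular windows
    u = (Ayy * bx - Axy * by) / safe                   # solve the 2x2 system per pixel
    v = (Axx * by - Axy * bx) / safe
    trace = Axx + Ayy
    lam_min = 0.5 * (trace - np.sqrt(np.maximum(trace ** 2 - 4 * det, 0.0)))
    return u, v, lam_min                               # flow and confidence map
```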


Archive | 2014

Structural Modal Identification Through High Speed Camera Video: Motion Magnification

Justin G. Chen; Neal Wadhwa; Young-Jin Cha; William T. Freeman; Oral Buyukozturk

Video cameras offer the unique capability of collecting high-density spatial data from a distant scene of interest. They could be employed as remote monitoring or inspection sensors because of their commonplace use, simplicity, and relatively low cost. The difficulty lies in converting the video data into a usable format familiar to engineers, such as displacement. A methodology called motion magnification, developed for visualizing exaggerated versions of small displacements, is extended to modal identification in structures. Experiments in a laboratory setting on a cantilever beam were performed to verify the method against accelerometer and laser vibrometer measurements. Motion magnification is used for modal analysis of cantilever beams to visualize mode shapes and calculate mode shape curvature as a basis for damage detection. Suggestions for applications of this methodology and challenges in real-world implementations are given.
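
The downstream identification step is straightforward once displacement time series are in hand. A minimal sketch, assuming a displacement signal has already been extracted from the video: estimate its power spectrum and pick the strongest peaks as candidate natural frequencies to compare against the accelerometer and vibrometer readings. The peak-picking thresholds are assumptions.

```python
# Sketch: candidate natural frequencies from one displacement time series.
import numpy as np
from scipy.signal import welch, find_peaks

def natural_frequencies(displacement, fps, n_modes=3):
    freqs, psd = welch(displacement, fs=fps, nperseg=min(1024, len(displacement)))
    peaks, props = find_peaks(psd, height=psd.max() * 0.01)   # ignore the noise floor
    order = np.argsort(props["peak_heights"])[::-1]           # strongest peaks first
    return np.sort(freqs[peaks[order][:n_modes]])             # first few modes, in Hz
```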


Journal of Infrastructure Systems | 2017

Video Camera–Based Vibration Measurement for Civil Infrastructure Applications

Justin G. Chen; Abe Davis; Neal Wadhwa; William T. Freeman; Oral Buyukozturk

Visual testing, as one of the oldest methods for nondestructive testing (NDT), plays a large role in the inspection of civil infrastructure. As NDT has evolved, more quantitative techniques...


Communications of the ACM | 2016

Eulerian video magnification and analysis

Neal Wadhwa; Hao-Yu Wu; Abe Davis; Michael Rubinstein; Eugene Shih; Gautham J. Mysore; Justin G. Chen; Oral Buyukozturk; John V. Guttag; William T. Freeman

The world is filled with important, but visually subtle signals. A person's pulse, the breathing of an infant, the sag and sway of a bridge---these all create visual patterns which are too difficult to see with the naked eye. We present Eulerian Video Magnification, a computational technique for visualizing subtle color and motion variations in ordinary videos by making the variations larger. It is a microscope for small changes that are hard or impossible for us to see by ourselves. In addition, these small changes can be quantitatively analyzed and used to recover sounds from vibrations in distant objects, characterize material properties, and remotely measure a person's pulse.
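
The linear Eulerian pipeline summarized here fits in a few lines, as in the sketch below, where a spatial Gaussian blur stands in for the paper's Gaussian pyramid level and the band edges and gain are user choices: blur each frame, band-pass every pixel's time series around the frequency of interest, scale the result, and add it back. For pulse visualization, for instance, the band might bracket roughly 0.8 to 3 Hz.

```python
# Sketch of linear Eulerian video magnification on grayscale frames (T, H, W).
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import butter, filtfilt

def eulerian_magnify(frames, fps, f_lo, f_hi, alpha, sigma=5):
    x = frames.astype(np.float64)
    blurred = gaussian_filter(x, sigma=(0, sigma, sigma))  # spatial pooling only
    b, a = butter(2, [f_lo / (fps / 2), f_hi / (fps / 2)], btype="band")
    band = filtfilt(b, a, blurred, axis=0)                 # temporal band-pass per pixel
    return np.clip(x + alpha * band, 0, 255)               # amplified variations added back
```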


Archive | 2015

Developments with Motion Magnification for Structural Modal Identification Through Camera Video

Justin G. Chen; Neal Wadhwa; William T. Freeman; Oral Buyukozturk

Non-contact measurement of the response of vibrating structures may be achieved using several different methods, including video cameras, which offer flexibility in use and an advantage in cost. Videos can provide valuable qualitative information to an informed person, but quantitative measurements obtained using computer vision techniques are essential for structural assessment. Motion magnification in videos refers to a collection of techniques that amplify small motions in specified frequency bands for visualization; it can also be used to determine displacements of distinct edges of the structures being measured. We present recent developments in motion magnification for the modal identification of structures. A new algorithm based on the Riesz transform allows real-time application of motion magnification to normal-speed videos, with quality similar to the previous, computationally intensive phase-based algorithm. Displacement signals are extracted from strong edges in the video as the data necessary for modal identification. Methodologies for output-only modal analysis, applicable to the resulting large numbers of short signals, are demonstrated on example videos of vibrating structures.
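
For the output-only modal analysis mentioned at the end, one standard choice (an assumption here; the abstract does not name the algorithm) is frequency-domain decomposition: build the cross-spectral matrix of all edge-displacement signals and peak-pick its first singular value, which peaks at the modal frequencies.

```python
# Sketch: frequency-domain decomposition over many short displacement signals.
import numpy as np
from scipy.signal import csd

def fdd_first_singular_value(signals, fps, nperseg=256):
    """signals: (n_points, T) array of displacement series from edges in the video."""
    n = signals.shape[0]
    freqs, _ = csd(signals[0], signals[0], fs=fps, nperseg=nperseg)
    G = np.zeros((len(freqs), n, n), dtype=complex)    # cross-spectral matrix per frequency
    for i in range(n):
        for j in range(n):
            _, G[:, i, j] = csd(signals[i], signals[j], fs=fps, nperseg=nperseg)
    s1 = np.linalg.svd(G, compute_uv=False)[:, 0]      # first singular value per frequency
    return freqs, s1                                   # peaks of s1 mark modal frequencies
```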


International Conference on Computer Graphics and Interactive Techniques | 2015

Deviation magnification: revealing departures from ideal geometries

Neal Wadhwa; Tali Dekel; Donglai Wei; Frederic Durand; William T. Freeman

Structures and objects are often supposed to have idealized geometries such as straight lines or circles. Although not always visible to the naked eye, in reality, these objects deviate from their idealized models. Our goal is to reveal and visualize such subtle geometric deviations, which can contain useful, surprising information about our world. Our framework, termed Deviation Magnification, takes a still image as input, fits parametric models to objects of interest, computes the geometric deviations, and renders an output image in which the departures from ideal geometries are exaggerated. We demonstrate the correctness and usefulness of our method through quantitative evaluation on a synthetic dataset and by application to challenging natural images.
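
For a straight-line model, the fit-and-exaggerate step reduces to a few lines. The sketch below is a toy version of the idea (the paper also handles other parametric shapes such as circles and renders a full output image): fit a line to sampled edge points by PCA, then scale each point's perpendicular deviation from it.

```python
# Sketch: exaggerate deviations of edge samples from their best-fit line.
import numpy as np

def magnify_deviations(points, alpha=20.0):
    """points: (N, 2) samples along a nominally straight edge."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)        # PCA of the point cloud
    direction, normal = Vt[0], Vt[1]                   # line direction and its normal
    along = (points - centroid) @ direction
    deviation = (points - centroid) @ normal           # signed distance to the line
    return (centroid
            + np.outer(along, direction)
            + np.outer(alpha * deviation, normal))     # deviations scaled by alpha
```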


International Conference on Computer Graphics and Interactive Techniques | 2016

Bilateral guided upsampling

Jiawen Chen; Andrew Adams; Neal Wadhwa; Samuel W. Hasinoff

We present an algorithm to accelerate a large class of image processing operators. Given a low-resolution reference input and output pair, we model the operator by fitting local curves that map the input to the output. We can then produce a full-resolution output by evaluating these low-resolution curves on the full-resolution input. We demonstrate that this faithfully models state-of-the-art operators for tone mapping, style transfer, and recoloring. The curves are computed by lifting the input into a bilateral grid and then solving for the 3D array of affine matrices that best maps input color to output color per x, y, intensity bin. We enforce a smoothness term on the matrices which prevents false edges and noise amplification. We can either globally optimize this energy, or quickly approximate a solution by locally fitting matrices and then enforcing smoothness by blurring in grid space. This latter option reduces to joint bilateral upsampling [Kopf et al. 2007] or the guided filter [He et al. 2013], depending on the choice of parameters. The cost of running the algorithm is reduced to the cost of running the original algorithm at greatly reduced resolution, as fitting the curves takes about 10 ms on mobile devices, and 1--2 ms on desktop CPUs, and evaluating the curves can be done with a simple GPU shader.
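
Below is a toy grayscale reduction of the "fit local curves, blur, slice" pattern described above, using a scale/offset per bilateral-grid cell in place of the paper's affine matrices, with the blur-based smoothing corresponding to the fast approximation the abstract mentions. Grid resolution and the scalar model are simplifying assumptions; this illustrates the structure, not the paper's implementation.

```python
# Sketch: simplified bilateral guided upsampling for grayscale images in [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def bgu_grayscale(lo_in, lo_out, hi_in, gs=16, gr=8):
    """lo_in, lo_out: low-res input/output of the operator; hi_in: full-res input."""
    H, W = lo_in.shape
    yy, xx = np.mgrid[0:H, 0:W]
    # bilateral-grid cell of each low-res pixel: (y, x, intensity)
    cy = np.round(yy * (gs - 1) / (H - 1)).astype(int)
    cx = np.round(xx * (gs - 1) / (W - 1)).astype(int)
    cz = np.round(np.clip(lo_in, 0, 1) * (gr - 1)).astype(int)
    flat = np.ravel_multi_index((cy, cx, cz), (gs, gs, gr)).ravel()
    size = gs * gs * gr
    # per-cell least-squares fit of out ~= a * in + b
    n   = np.bincount(flat, minlength=size)
    sx  = np.bincount(flat, weights=lo_in.ravel(), minlength=size)
    sy  = np.bincount(flat, weights=lo_out.ravel(), minlength=size)
    sxx = np.bincount(flat, weights=(lo_in ** 2).ravel(), minlength=size)
    sxy = np.bincount(flat, weights=(lo_in * lo_out).ravel(), minlength=size)
    var = n * sxx - sx ** 2
    a = np.where(var > 1e-6, (n * sxy - sx * sy) / np.where(var > 1e-6, var, 1), 1.0)
    b = np.where(n > 0, (sy - a * sx) / np.maximum(n, 1), 0.0)
    a = gaussian_filter(a.reshape(gs, gs, gr), 1.0)    # smoothness via grid-space blur
    b = gaussian_filter(b.reshape(gs, gs, gr), 1.0)
    # slice: trilinearly sample the smoothed curves at every full-res pixel
    Hh, Wh = hi_in.shape
    yh, xh = np.mgrid[0:Hh, 0:Wh]
    coords = [yh * (gs - 1) / (Hh - 1),
              xh * (gs - 1) / (Wh - 1),
              np.clip(hi_in, 0, 1) * (gr - 1)]
    return map_coordinates(a, coords, order=1) * hi_in + map_coordinates(b, coords, order=1)
```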

Collaboration


Dive into Neal Wadhwa's collaborations.

Top Co-Authors

Michael Rubinstein (Massachusetts Institute of Technology)
Frederic Durand (Massachusetts Institute of Technology)
Justin G. Chen (Massachusetts Institute of Technology)
Oral Buyukozturk (Massachusetts Institute of Technology)
Abe Davis (Massachusetts Institute of Technology)
Hao-Yu Wu (Massachusetts Institute of Technology)
Tianfan Xue (Massachusetts Institute of Technology)
John V. Guttag (Massachusetts Institute of Technology)
Donglai Wei (Massachusetts Institute of Technology)