
Publications


Featured research published by Justin G. Chen.


Computer Vision and Pattern Recognition | 2015

Visual vibrometry: Estimating material properties from small motions in video

Abe Davis; Katherine L. Bouman; Justin G. Chen; Michael Rubinstein; William T. Freeman

The estimation of material properties is important for scene understanding, with many applications in vision, robotics, and structural engineering. This paper connects fundamentals of vibration mechanics with computer vision techniques in order to infer material properties from small, often imperceptible motion in video. Objects tend to vibrate in a set of preferred modes. The shapes and frequencies of these modes depend on the structure and material properties of an object. Focusing on the case where geometry is known or fixed, we show how information about an object's modes of vibration can be extracted from video and used to make inferences about that object's material properties. We demonstrate our approach by estimating material properties for a variety of rods and fabrics by passively observing their motion in high-speed and regular frame-rate video.
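
As a rough, hypothetical illustration of this pipeline (not the authors' implementation): given a grayscale video as a NumPy array `frames` of shape (T, H, W) and its frame rate, per-pixel temporal spectra can be averaged and their strongest peaks read off as candidate mode frequencies.

```python
import numpy as np

def vibration_spectrum(frames, fps):
    """Average temporal power spectrum of per-pixel intensity
    variations over a (T, H, W) grayscale video."""
    T = frames.shape[0]
    x = frames.reshape(T, -1).astype(np.float64)
    x -= x.mean(axis=0)                      # drop DC so it can't dominate
    power = np.abs(np.fft.rfft(x, axis=0)) ** 2
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    return freqs, power.mean(axis=1)

def dominant_modes(freqs, power, n=3):
    """Strongest local spectral peaks as candidate resonant-mode
    frequencies (a real pipeline would also estimate mode shapes)."""
    interior = (power[1:-1] > power[:-2]) & (power[1:-1] > power[2:])
    peaks = np.flatnonzero(np.r_[False, interior, False])
    return freqs[peaks[np.argsort(power[peaks])[::-1][:n]]]
```

In the paper these spectral features, together with mode shapes, feed the inference of material properties; the sketch stops at the frequencies.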


Archive | 2014

Structural Modal Identification Through High Speed Camera Video: Motion Magnification

Justin G. Chen; Neal Wadhwa; Young-Jin Cha; William T. Freeman; Oral Buyukozturk

Video cameras offer the unique capability of collecting high-density spatial data from a distant scene of interest. They could be employed as remote monitoring or inspection sensors because of their commonplace use, simplicity, and relatively low cost. The difficulty lies in converting the video data into a format familiar to engineers, such as displacement. A methodology called motion magnification, developed for visualizing exaggerated versions of small displacements, is extended to modal identification in structures. Experiments in a laboratory setting on a cantilever beam were performed to verify the method against accelerometer and laser vibrometer measurements. Motion magnification is used for modal analysis of cantilever beams to visualize mode shapes and calculate mode shape curvature as a basis for damage detection. Suggestions for applications of this methodology and challenges in real-world implementations are given.
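
A minimal sketch of the displacement-extraction and peak-picking steps, assuming a `frames` array and hand-picked patch coordinates around a strong edge; it substitutes scikit-image's phase correlation for the paper's motion-magnification-based extraction:

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def edge_displacement(frames, r0, r1, c0, c1, upsample=100):
    """Subpixel (dy, dx) motion of a patch containing a strong edge,
    measured against the first frame by phase correlation."""
    ref = frames[0, r0:r1, c0:c1]
    shifts = [phase_cross_correlation(ref, f[r0:r1, c0:c1],
                                      upsample_factor=upsample)[0]
              for f in frames]
    return np.array(shifts)                  # shape (T, 2), in pixels

def natural_frequencies(disp, fps, n=3):
    """Peak frequencies of a 1-D displacement signal's spectrum, as
    estimates of the beam's natural frequencies; in practice adjacent
    bins of one peak may need to be merged."""
    spec = np.abs(np.fft.rfft(disp - disp.mean()))
    freqs = np.fft.rfftfreq(len(disp), 1.0 / fps)
    return freqs[np.argsort(spec)[::-1][:n]]

# e.g. natural_frequencies(d[:, 1], fps) on the horizontal component
```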


Journal of Infrastructure Systems | 2017

Video Camera–Based Vibration Measurement for Civil Infrastructure Applications

Justin G. Chen; Abe Davis; Neal Wadhwa; William T. Freeman; Oral Buyukozturk

Visual testing, as one of the oldest methods for nondestructive testing (NDT), plays a large role in the inspection of civil infrastructure. As NDT has evolved, more quantitative techniques...


Communications of the ACM | 2016

Eulerian video magnification and analysis

Neal Wadhwa; Hao-Yu Wu; Abe Davis; Michael Rubinstein; Eugene Shih; Gautham J. Mysore; Justin G. Chen; Oral Buyukozturk; John V. Guttag; William T. Freeman

The world is filled with important, but visually subtle signals. A person's pulse, the breathing of an infant, the sag and sway of a bridge---these all create visual patterns, which are too difficult to see with the naked eye. We present Eulerian Video Magnification, a computational technique for visualizing subtle color and motion variations in ordinary videos by making the variations larger. It is a microscope for small changes that are hard or impossible for us to see by ourselves. In addition, these small changes can be quantitatively analyzed and used to recover sounds from vibrations in distant objects, characterize material properties, and remotely measure a person's pulse.
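
A heavily condensed sketch of the linear, intensity-based variant of the idea, assuming a (T, H, W) video normalized to [0, 1]; the published method operates on a multi-scale (Laplacian or complex steerable) pyramid rather than a single Gaussian blur:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import butter, filtfilt

def eulerian_magnify(frames, fps, f_lo, f_hi, alpha, sigma=5):
    """Linear Eulerian magnification of a (T, H, W) video in [0, 1]:
    spatially smooth, temporally bandpass, amplify, add back."""
    # Spatial lowpass stands in for the paper's pyramid decomposition.
    low = np.stack([gaussian_filter(f, sigma) for f in frames])
    b, a = butter(2, [f_lo, f_hi], btype="bandpass", fs=fps)
    band = filtfilt(b, a, low, axis=0)       # per-pixel temporal filter
    return np.clip(frames + alpha * band, 0.0, 1.0)

# e.g. amplify motions between 0.5 and 3 Hz by 20x in a 30 fps clip:
# out = eulerian_magnify(frames, fps=30, f_lo=0.5, f_hi=3.0, alpha=20)
```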


International Conference on Computer Graphics and Interactive Techniques | 2015

Image-space modal bases for plausible manipulation of objects in video

Abe Davis; Justin G. Chen

We present algorithms for extracting an image-space representation of object structure from video and using it to synthesize physically plausible animations of objects responding to new, previously unseen forces. Our representation of structure is derived from an image-space analysis of modal object deformation: projections of an object's resonant modes are recovered from the temporal spectra of optical flow in a video, and used as a basis for the image-space simulation of object dynamics. We describe how to extract this basis from video, and show that it can be used to create physically plausible animations of objects without any knowledge of scene geometry or material properties.
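
To make the synthesis step concrete, a hedged sketch of the modal simulation: each recovered mode is time-stepped as a damped harmonic oscillator driven by a user-supplied force. The `force` callback, mode frequencies, and damping ratios below are all assumed inputs layered on top of the video analysis.

```python
import numpy as np

def modal_response(freqs_hz, zetas, force, fps, duration):
    """Time-step each recovered image-space mode as a damped harmonic
    oscillator driven by the modal projection force(i, t) of a
    user-applied force."""
    t = np.arange(0.0, duration, 1.0 / fps)
    q = np.zeros((len(t), len(freqs_hz)))
    for i, (f, zeta) in enumerate(zip(freqs_hz, zetas)):
        wn, u, v = 2 * np.pi * f, 0.0, 0.0
        for k, tk in enumerate(t):
            a = force(i, tk) - 2 * zeta * wn * v - wn ** 2 * u
            v += a / fps                     # semi-implicit Euler step
            u += v / fps
            q[k, i] = u
    return q  # displacing pixel p by sum_i q[t, i] * shape_i[p] animates it

# Example: a brief impulse applied to the first of two recovered modes.
q = modal_response([2.0, 5.3], [0.02, 0.03],
                   lambda i, t: 1.0 if (i == 0 and t < 0.02) else 0.0,
                   fps=60, duration=5.0)
```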


Cell Reports | 2014

The Extreme Anterior Domain Is an Essential Craniofacial Organizer Acting through Kinin-Kallikrein Signaling

Laura Jacox; Radek Sindelka; Justin G. Chen; Alyssa Rothman; Amanda J.G. Dickinson; Hazel Sive

The extreme anterior domain (EAD) is a conserved embryonic region that includes the presumptive mouth. We show that the Kinin-Kallikrein pathway is active in the EAD and necessary for craniofacial development in Xenopus and zebrafish. The mouth failed to form and neural crest (NC) development and migration were abnormal after loss of function (LOF) in the pathway genes kng, encoding Bradykinin (xBdk), carboxypeptidase-N (cpn), which cleaves Bradykinin, and neuronal nitric oxide synthase (nNOS). Consistent with a role for nitric oxide (NO) in face formation, endogenous NO levels declined after LOF in pathway genes, but these were restored and a normal face formed after medial implantation of xBdk-beads into LOF embryos. Facial transplants demonstrated that Cpn function from within the EAD is necessary for the migration of first arch cranial NC into the face and for promoting mouth opening. The study identifies the EAD as an essential craniofacial organizer acting through Kinin-Kallikrein signaling.


Applied Optics | 2011

Laser vibrometry from a moving ground vehicle

Leaf A. Jiang; Marius A. Albota; Robert W. Haupt; Justin G. Chen; Richard M. Marino

We investigated the fundamental limits to the performance of a laser vibrometer that is mounted on a moving ground vehicle. The noise floor of a moving laser vibrometer consists of speckle noise, shot noise, and platform vibrations. We showed that speckle noise can be reduced by increasing the laser spot size and that the noise floor is dominated by shot noise at high frequencies (typically greater than a few kilohertz for our system). We built a five-channel, vehicle-mounted, 1.55 μm wavelength laser vibrometer to measure its noise floor at 10 m horizontal range while driving on dirt roads. The measured noise floor agreed with our theoretical estimates. We showed that, by subtracting the response of an accelerometer and an optical reference channel, we could reduce the excess noise (in units of micrometers per second per √Hz) from vehicle vibrations by a factor of up to 33, to obtain nearly speckle-and-shot-noise-limited performance from 0.3 to 47 kHz.
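
Since the independent noise sources add in quadrature, the effect of the factor-of-33 platform-noise reduction is easy to sanity-check numerically; the numbers below are illustrative stand-ins, not the paper's measured values.

```python
import numpy as np

def noise_floor(speckle, shot, platform):
    """Total noise (um/s per sqrt(Hz)) from independent sources
    combined in quadrature."""
    return np.sqrt(speckle**2 + shot**2 + platform**2)

# Hypothetical values, not the paper's: platform vibration dominates
# until it is cancelled; dividing it by 33 leaves the floor near the
# speckle-and-shot limit.
print(noise_floor(1.0, 0.5, 33.0))         # ~33.0, platform-limited
print(noise_floor(1.0, 0.5, 33.0 / 33))    # ~1.5, near speckle+shot limit
```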


Archive | 2015

Developments with Motion Magnification for Structural Modal Identification Through Camera Video

Justin G. Chen; Neal Wadhwa; William T. Freeman; Oral Buyukozturk

Non-contact measurement of the response of vibrating structures may be achieved using several different methods, including video cameras, which offer flexibility in use and an advantage in terms of cost. Videos can provide valuable qualitative information to an informed person, but quantitative measurements obtained using computer vision techniques are essential for structural assessment. Motion magnification refers to a collection of techniques that amplify small motions in video within specified frequency bands for visualization, and that can also be used to determine displacements of distinct edges of the structures being measured. We present recent developments in motion magnification for the modal identification of structures. A new algorithm based on the Riesz transform allows real-time application of motion magnification to normal-speed videos with quality similar to the previous, computationally intensive phase-based algorithm. Displacement signals are extracted from strong edges in the video as the data necessary for modal identification. Methodologies for output-only modal analysis that accommodate the large number of signals and their short lengths are demonstrated on example videos of vibrating structures.
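
For reference, the 2-D Riesz transform underlying the real-time algorithm can be written generically with FFTs, as below; the published method instead uses fast spatial-filter approximations, so this is a conceptual sketch, not the authors' implementation.

```python
import numpy as np

def riesz_transform(img):
    """2-D Riesz transform of an image via its frequency response
    (-i * w / |w|); with a bandpassed input this yields the monogenic
    signal used for local phase."""
    rows, cols = img.shape
    wy = np.fft.fftfreq(rows)[:, None]
    wx = np.fft.fftfreq(cols)[None, :]
    mag = np.sqrt(wx**2 + wy**2)
    mag[0, 0] = 1.0                      # avoid division by zero at DC
    F = np.fft.fft2(img)
    r1 = np.real(np.fft.ifft2(-1j * wx / mag * F))
    r2 = np.real(np.fft.ifft2(-1j * wy / mag * F))
    return r1, r2

def local_amplitude_phase(band):
    """Local amplitude and phase of a bandpassed frame; frame-to-frame
    phase differences are proportional to small local motions."""
    r1, r2 = riesz_transform(band)
    amp = np.sqrt(band**2 + r1**2 + r2**2)
    phase = np.arctan2(np.sqrt(r1**2 + r2**2), band)
    return amp, phase
```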


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2017

Visual Vibrometry: Estimating Material Properties from Small Motions in Video

Abe Davis; Katherine L. Bouman; Justin G. Chen; Michael Rubinstein; Oral Buyukozturk; William T. Freeman

The estimation of material properties is important for scene understanding, with many applications in vision, robotics, and structural engineering. This paper connects fundamentals of vibration mechanics with computer vision techniques in order to infer material properties from small, often imperceptible motions in video. Objects tend to vibrate in a set of preferred modes. The frequencies of these modes depend on the structure and material properties of an object. We show that by extracting these frequencies from video of a vibrating object, we can often make inferences about that object's material properties. We demonstrate our approach by estimating material properties for a variety of objects by observing their motion in high-speed and regular frame-rate video.
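
A worked example of the final inference step in the geometry-known case: Euler-Bernoulli beam theory for a clamped-free circular rod gives f1 = (lambda1^2 / 2*pi) * sqrt(E*I / (rho*A*L^4)), which can be inverted for Young's modulus once f1 is read from video. The dimensions and frequency below are illustrative, not values from the paper.

```python
import numpy as np

LAMBDA1 = 1.875104  # first clamped-free (cantilever) beam eigenvalue

def youngs_modulus(f1_hz, length, radius, density):
    """Invert f1 = (LAMBDA1^2 / 2pi) * sqrt(E*I / (rho*A*L^4)) for E,
    for a circular rod clamped at one end."""
    A = np.pi * radius**2            # cross-sectional area
    I = np.pi * radius**4 / 4.0      # second moment of area
    return (2 * np.pi * f1_hz / LAMBDA1**2) ** 2 * density * A * length**4 / I

# Illustrative: a 30 cm steel rod of 5 mm radius with a ~78 Hz
# fundamental observed in video gives E on the order of 2e11 Pa.
print(youngs_modulus(78.0, 0.30, 0.005, 7850.0))
```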


Structural Health Monitoring: An International Journal | 2015

Motion Magnification Based Damage Detection Using High Speed Video

Young-Jin Cha; Justin G. Chen; Oral Buyukozturk

Structural system identification and damage detection are important engineering challenges due to aging infrastructure in the United States. To identify a structural system or detect damage, measured structural responses are used. Typically the acceleration response is measured, although displacement responses inherently carry more information about structural dynamic behavior than acceleration or velocity. In this paper, a displacement measurement methodology using high-speed video, previously proposed using the motion magnification algorithm and optical flow, is used as the input to a damage detection algorithm based on an unscented Kalman filter. This noncontact displacement measurement methodology has advantages: it does not require a time-consuming instrumentation process and does not add any mass to the structure. However, it still needs improvement due to its higher noise level relative to traditional accelerometer and laser vibrometer measurements. To detect structural damage from displacements measured in high-speed video, an unscented Kalman filter is used to simultaneously remove noise from the displacement measurement and identify the current stiffness and damping coefficient values of the structure, assuming a known mass. To validate the damage detection method, a numerical state-space formulation is derived for the structural system. Traditional unscented-Kalman-filter-based formulations require external forcing information to predict structural parameters such as stiffness and damping; this newly derived dynamic formulation does not. Experimental tests are carried out on steel cantilever beams, with bolt loosening of the boundary-condition connection as the damage scenario. The experimental results show reasonable predictions of the stiffness and damping values compared to simple dynamic analysis calculations for the beam. doi: 10.12783/SHM2015/294
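
A self-contained sketch of the estimation machinery, not the paper's exact formulation: a minimal unscented Kalman filter with the unknown stiffness appended to the state of a single-degree-of-freedom oscillator, filtering noisy displacement measurements with no forcing input, as in the unforced formulation described above.

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    """Julier sigma points and weights for the unscented transform."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)
    pts = np.vstack([x, x + S.T, x - S.T])
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    return pts, w

def ukf_step(x, P, z, fx, hx, Q, R):
    """One predict/update cycle of a basic unscented Kalman filter."""
    pts, w = sigma_points(x, P)
    X = np.array([fx(p) for p in pts])            # propagate dynamics
    xm = w @ X
    Pm = Q + (w[:, None] * (X - xm)).T @ (X - xm)
    Z = np.array([hx(p) for p in X])              # predicted measurements
    zm = w @ Z
    Pzz = R + (w[:, None] * (Z - zm)).T @ (Z - zm)
    Pxz = (w[:, None] * (X - xm)).T @ (Z - zm)
    K = Pxz @ np.linalg.inv(Pzz)
    return xm + K @ (z - zm), Pm - K @ Pzz @ K.T

# Single-DOF oscillator with unknown stiffness k carried in the state;
# mass and damping are assumed known, and there is no forcing term.
M, C, DT = 1.0, 0.5, 1.0 / 1000.0

def fx(s):
    u, v, k = s
    return np.array([u + v * DT, v - (C * v + k * u) / M * DT, k])

def hx(s):
    return s[:1]   # only displacement (e.g. from video) is observed

# Given noisy displacements z[t] (each a length-1 array), iterate
#   x, P = ukf_step(x, P, z[t], fx, hx, Q, R)
# and the stiffness estimate x[2] converges while x[0] is denoised.
```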

Collaboration


Dive into Justin G. Chen's collaborations.

Top Co-Authors

Oral Buyukozturk (Massachusetts Institute of Technology)
Neal Wadhwa (Massachusetts Institute of Technology)
Abe Davis (Massachusetts Institute of Technology)
Robert W. Haupt (Massachusetts Institute of Technology)
Michael Rubinstein (Massachusetts Institute of Technology)
Hazel Sive (Massachusetts Institute of Technology)
Reza Mohammadi Ghazi (Massachusetts Institute of Technology)
Alyssa Rothman (Massachusetts Institute of Technology)