Publication


Featured research published by Taeg Sang Cho.


International Conference on Computer Graphics and Interactive Techniques | 2008

Motion-invariant photography

Anat Levin; Peter Sand; Taeg Sang Cho; William T. Freeman

Object motion during camera exposure often leads to noticeable blurring artifacts. Proper elimination of this blur is challenging because the blur kernel is unknown, varies over the image as a function of object velocity, and destroys high frequencies. In the case of motions along a 1D direction (e.g., horizontal) we show that these challenges can be addressed using a camera that moves during the exposure. Through the analysis of motion blur as space-time integration, we show that a parabolic integration (corresponding to constant sensor acceleration) leads to motion blur that is invariant to object velocity. Thus, a single deconvolution kernel can be used to remove blur and create sharp images of scenes with objects moving at different speeds, without requiring any segmentation and without knowledge of the object speeds. Apart from motion invariance, we prove that the derived parabolic motion preserves image frequency content nearly optimally. That is, while static objects are degraded relative to their image from a static camera, a reliable reconstruction of all moving objects within a given velocity range is made possible. We have built a prototype camera and present successful deblurring results over a wide variety of human motions.
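
As a rough illustration of the parabolic-exposure idea (a toy sketch with made-up numbers, not the authors' implementation), the following code integrates a constant-acceleration sensor trajectory against objects moving at different constant velocities and checks that the resulting 1D blur kernels agree up to a spatial shift, apart from small finite-exposure tail effects:

    import numpy as np

    # Hypothetical parameters: exposure over t in [-1, 1], sensor following a
    # parabola x_s(t) = a * t^2 (constant acceleration).
    a = 5.0
    t = np.linspace(-1.0, 1.0, 200001)
    edges = np.linspace(-8.0, 8.0, 161)        # displacement bins (width 0.1)

    def blur_kernel(v):
        """1D blur kernel seen by an object moving at constant velocity v.

        A discrete stand-in for the space-time integration in the paper: the
        histogram of the sensor-object relative displacement x_s(t) - v*t over
        the exposure is the blur the object receives.
        """
        x = a * t**2 - v * t
        hist, _ = np.histogram(x, bins=edges)
        return hist / hist.sum()

    k_static, k_moving = blur_kernel(0.0), blur_kernel(2.0)

    # Align the kernels at their peaks; the remaining difference is a small
    # tail effect of the finite exposure, not a function of object velocity.
    shift = int(np.argmax(k_static)) - int(np.argmax(k_moving))
    print("peak value:            ", k_static.max().round(3))
    print("max aligned difference:", np.abs(np.roll(k_moving, shift) - k_static).max().round(3))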


Computer Vision and Pattern Recognition | 2008

The patch transform and its applications to image editing

Taeg Sang Cho; Moshe Butman; Shai Avidan; William T. Freeman

We introduce the patch transform, where an image is broken into non-overlapping patches, and modifications or constraints are applied in the "patch domain". A modified image is then reconstructed from the patches, subject to those constraints. When no constraints are given, the reconstruction problem reduces to solving a jigsaw puzzle. Constraints the user may specify include the spatial locations of patches, the size of the output image, or the pool of patches from which an image is reconstructed. We define terms in a Markov network to specify a good image reconstruction from patches: neighboring patches must fit to form a plausible image, and each patch should be used only once. We find an approximate solution to the Markov network using loopy belief propagation, introducing an approximation to handle the combinatorially difficult patch exclusion constraint. The resulting image reconstructions show the original image, modified to respect the user's changes. We apply the patch transform to various image editing tasks and show that the algorithm performs well on real world images.
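
To make the Markov-network construction concrete, here is a minimal toy sketch (my own illustration, not the paper's code) of the pairwise compatibility term: candidate neighbors are scored by how well their abutting boundary columns match, and scores of this kind serve as the pairwise potentials that loopy belief propagation operates on; the patch size and the random "patches" are placeholders.

    import numpy as np

    def left_right_compatibility(patches):
        """Toy pairwise potential: entry (i, j) scores how plausibly patch j
        sits immediately to the right of patch i, from the sum of squared
        differences between i's rightmost column and j's leftmost column.
        """
        right_cols = patches[:, :, -1]                    # shape (N, P)
        left_cols = patches[:, :, 0]                      # shape (N, P)
        ssd = ((right_cols[:, None, :] - left_cols[None, :, :]) ** 2).sum(-1)
        sigma = np.median(ssd) + 1e-8                     # crude scale choice
        return np.exp(-ssd / sigma)                       # higher = more compatible

    # In the paper, potentials like this (for all four neighbor directions),
    # together with the patch-exclusion term that discourages reusing a patch,
    # define the Markov network solved with loopy belief propagation.
    rng = np.random.default_rng(0)
    patches = rng.random((16, 8, 8))                      # 16 random 8x8 "patches"
    compat = left_right_compatibility(patches)
    print(compat.shape, float(compat.min()), float(compat.max()))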


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Image Restoration by Matching Gradient Distributions

Taeg Sang Cho; Charles Lawrence Zitnick; Neel Joshi; Sing Bing Kang; Richard Szeliski; William T. Freeman

The restoration of a blurry or noisy image is commonly performed with a MAP estimator, which maximizes a posterior probability to reconstruct a clean image from a degraded image. A MAP estimator, when used with a sparse gradient image prior, reconstructs piecewise smooth images and typically removes textures that are important for visual realism. We present an alternative deconvolution method called iterative distribution reweighting (IDR) which imposes a global constraint on gradients so that a reconstructed image should have a gradient distribution similar to a reference distribution. In natural images, a reference distribution not only varies from one image to another, but also within an image depending on texture. We estimate a reference distribution directly from an input image for each texture segment. Our algorithm is able to restore rich mid-frequency textures. A large-scale user study supports the conclusion that our algorithm improves the visual realism of reconstructed images compared to those of MAP estimators.
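
The reweighting idea can be sketched in a few lines (a simplified toy version, not the authors' algorithm or its parameters): compare the gradient histogram of the current reconstruction with the reference histogram estimated for that texture segment, and adjust a per-bin penalty so the next reconstruction is pushed toward the reference distribution.

    import numpy as np

    def idr_weight_update(gradients, ref_hist, bin_edges, weights, step=0.5):
        """One toy reweighting step in the spirit of IDR: raise the penalty on
        gradient magnitudes that are over-represented relative to the
        reference histogram, and lower it where they are under-represented.
        """
        cur_hist, _ = np.histogram(gradients, bins=bin_edges, density=True)
        return weights + step * np.log((cur_hist + 1e-6) / (ref_hist + 1e-6))

    # Toy usage: the reference distribution is heavier-tailed than the
    # gradients of an over-smoothed MAP-style reconstruction, so the update
    # lowers the penalty on strong gradients (negative entries at the tails).
    rng = np.random.default_rng(0)
    bin_edges = np.linspace(-1.0, 1.0, 41)
    ref_hist, _ = np.histogram(rng.laplace(0, 0.20, 50000), bins=bin_edges, density=True)
    smoothed_grads = rng.laplace(0, 0.05, 50000)
    weights = idr_weight_update(smoothed_grads, ref_hist, bin_edges,
                                weights=np.zeros(len(bin_edges) - 1))
    print(weights.round(2))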


Computer Vision and Pattern Recognition | 2011

Blur kernel estimation using the Radon transform

Taeg Sang Cho; Sylvain Paris; Berthold K. P. Horn; William T. Freeman

Camera shake is a common source of degradation in photographs. Restoring blurred pictures is challenging because both the blur kernel and the sharp image are unknown, which makes this problem severely underconstrained. In this work, we estimate camera shake by analyzing edges in the image, effectively constructing the Radon transform of the kernel. Building upon this result, we describe two algorithms for estimating spatially invariant blur kernels. In the first method, we directly invert the transform, which is computationally efficient since it is not necessary to also estimate the latent sharp image. This approach is well suited for scenes with a diversity of edges, such as man-made environments. In the second method, we incorporate the Radon transform within the MAP estimation framework to jointly estimate the kernel and the image. While more expensive, this algorithm performs well on a broader variety of scenes, even when fewer edges can be observed. Our experiments show that our algorithms achieve comparable results to the state of the art in general and produce superior outputs on man-made scenes and photos degraded by a small kernel.
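
A toy sketch of the Radon-transform view (illustrative only; the paper estimates the projections from blurred edge profiles in the photograph and inverts them properly, with an optional MAP refinement): given 1D projections of a blur kernel at several angles, even an unfiltered backprojection yields a rough kernel estimate.

    import numpy as np
    from scipy.ndimage import rotate

    # Made-up "camera shake" kernel: a horizontal streak plus a small vertical hook.
    true_kernel = np.zeros((31, 31))
    true_kernel[15, 8:24] = 1.0
    true_kernel[12:19, 15] += 0.3
    true_kernel /= true_kernel.sum()

    # Radon slices of the kernel (in the paper, each blurred step edge in the
    # image contributes one such slice along its normal direction).
    angles = np.arange(0, 180, 15)
    projections = [rotate(true_kernel, ang, reshape=False, order=1).sum(axis=0)
                   for ang in angles]

    # Unfiltered backprojection: smear each slice back across the plane.
    estimate = np.zeros_like(true_kernel)
    for ang, proj in zip(angles, projections):
        smear = np.tile(proj, (true_kernel.shape[0], 1))
        estimate += rotate(smear, -ang, reshape=False, order=1)
    estimate = np.clip(estimate, 0.0, None)
    estimate /= estimate.sum()

    print("correlation with the true kernel:",
          round(float(np.corrcoef(estimate.ravel(), true_kernel.ravel())[0, 1]), 3))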


International Conference on Computational Photography | 2010

Motion blur removal with orthogonal parabolic exposures

Taeg Sang Cho; Anat Levin; William T. Freeman

Object movement during exposure generates blur. Removing blur is challenging because one has to estimate the motion blur, which can spatially vary over the image. Even if the motion is successfully identified, blur removal can be unstable because the blur kernel attenuates high frequency image contents. We address the problem of removing blur from objects moving at constant velocities in arbitrary 2D directions. Our solution captures two images of the scene with a parabolic motion in two orthogonal directions. We show that our strategy near-optimally preserves image content, and allows for stable blur inversion. Taking two images of a scene helps us estimate spatially varying object motions. We present a prototype camera and demonstrate successful motion deblurring on real motions.
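
A minimal sketch of the acquisition geometry (illustrative acceleration and timing, not the prototype's settings): two sequential exposures in which the sensor sweeps parabolic trajectories along orthogonal axes; for a given 2D object velocity, the time-at-position histogram of each exposure's relative motion path is that exposure's blur kernel, and per the paper the pair of captures together permits stable inversion.

    import numpy as np

    a = 5.0                                           # made-up sensor acceleration
    t = np.linspace(-1.0, 1.0, 2001)                  # one exposure interval
    zero = np.zeros_like(t)
    exposure_x = np.stack([a * t**2, zero], axis=1)   # parabola along x
    exposure_y = np.stack([zero, a * t**2], axis=1)   # parabola along y

    def relative_path(exposure, velocity):
        """Sensor-object relative displacement during one exposure for an
        object moving with constant 2D velocity; its time-at-position
        histogram is the blur kernel that exposure applies to the object."""
        return exposure - t[:, None] * np.asarray(velocity, dtype=float)

    for v in [(0.0, 0.0), (2.0, -1.0), (-1.5, 3.0)]:
        px = relative_path(exposure_x, v)
        py = relative_path(exposure_y, v)
        print(v, "blur extent (x-exposure):", np.ptp(px, axis=0).round(2),
              " blur extent (y-exposure):", np.ptp(py, axis=0).round(2))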


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

The Patch Transform

Taeg Sang Cho; Shai Avidan; William T. Freeman

The patch transform represents an image as a bag of overlapping patches sampled on a regular grid. This representation allows users to manipulate images in the patch domain, which then seeds the inverse patch transform to synthesize modified images. Possible modifications include the spatial locations of patches, the size of the output image, or the pool of patches from which an image is reconstructed. When no modifications are made, the inverse patch transform reduces to solving a jigsaw puzzle. The inverse patch transform is posed as a patch assignment problem on a Markov random field (MRF), where each patch should be used only once and neighboring patches should fit to form a plausible image. We find an approximate solution to the MRF using loopy belief propagation, introducing an approximation that encourages the solution to use each patch only once. The image reconstruction algorithm scales well with the total number of patches through label pruning. In addition, structural misalignment artifacts are suppressed through a patch jittering scheme that spatially jitters the assigned patches. We demonstrate the patch transform and its effectiveness on natural images.
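
As a toy illustration of the patch-jittering step mentioned above (my own sketch, not the paper's code; the shift range and patch size are arbitrary): after a patch has been assigned to a grid cell, a small horizontal shift is chosen so that its abutting column best matches the right boundary of the patch already placed to its left.

    import numpy as np

    def jitter_offset(placed_left, candidate, max_shift=2):
        """Toy patch-jittering step: try shifting `candidate` left by
        0..max_shift pixels and keep the shift whose abutting column best
        matches the right boundary of the already-placed left neighbor
        (sum of squared differences). Small shifts of this kind are what
        suppress seam misalignment between assigned patches."""
        ref = placed_left[:, -1]
        costs = [float(((candidate[:, s] - ref) ** 2).sum())
                 for s in range(max_shift + 1)]
        return int(np.argmin(costs))

    rng = np.random.default_rng(1)
    left_patch, right_patch = rng.random((8, 8)), rng.random((8, 8))
    print("chosen jitter (pixels):", jitter_offset(left_patch, right_patch))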


Custom Integrated Circuits Conference | 2007

A Low Power Carbon Nanotube Chemical Sensor System

Taeg Sang Cho; Kyeong-Jae Lee; Jing Kong; Anantha P. Chandrakasan

This paper presents a hybrid CNT/CMOS chemical sensor system that consists of a carbon nanotube sensor array and a CMOS interface chip. The full system, including the sensor, consumes 32 µW at a 1.83 kS/s readout rate, accomplished through an extensive use of CAD tools and a model-based architecture optimization. A redundant use of CNT sensors in the frontend increases the reliability of the system.


IEEE Journal of Solid-State Circuits | 2009

A 32-µW 1.83-kS/s Carbon Nanotube Chemical Sensor System

Taeg Sang Cho; Kyeong-Jae Lee; Jing Kong; Anantha P. Chandrakasan

This paper presents an energy-efficient chemical sensor system that uses carbon nanotubes (CNT) as the sensing medium. The room-temperature operation of CNT sensors eliminates the need for micro hot-plate arrays, which enables the low energy operation of the system. An array of redundant CNT sensors overcomes the reliability issues incurred by the CNT process variation. The sensor interface chip is designed to accommodate a 16-bit dynamic range by adaptively controlling an 8-bit DAC and a 10-bit ADC. A discrete optimization methodology determines the dynamic range of the DAC and the ADC to minimize the energy consumption of the system. A simple calibration technique using off-chip reference resistors reduces the DAC non-linearity. The sensor interface chip is designed in a 0.18-µm CMOS process and consumes, at maximum, 32 µW at a 1.83 kS/s conversion rate. The designed interface achieves 1.34% measurement accuracy across the 10 kΩ to 9 MΩ range. The functionality of the full system, including CNT sensors, has been successfully demonstrated.
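
As a back-of-the-envelope illustration of how a low-resolution DAC and ADC can jointly cover a 16-bit dynamic range (a hypothetical readout model with made-up full-scale values, not the chip's actual architecture): the DAC coarsely scales the excitation so the sensor's response lands inside the ADC's window, and the two code words together resolve the resistance over the full 10 kΩ to 9 MΩ span.

    # Hypothetical readout model (not the actual chip): an 8-bit DAC sets an
    # excitation current so the voltage across the CNT sensor falls inside a
    # 10-bit ADC's input window; the pair of codes then determines the
    # resistance over a range far wider than either converter alone resolves.
    V_FS = 1.0       # assumed ADC full-scale voltage (made up)
    I_MAX = 25e-6    # assumed maximum DAC output current (made up)

    def read_resistance(r_sensor):
        for dac_code in range(255, 0, -1):        # coarse step: pick excitation
            i = I_MAX * dac_code / 255
            v = r_sensor * i
            if v <= V_FS:                         # response fits the ADC window
                adc_code = round(v / V_FS * 1023) # fine step: 10-bit quantization
                return (adc_code / 1023 * V_FS) / i
        return None                               # below the measurable range

    for r in (10e3, 150e3, 9e6):
        est = read_resistance(r)
        print(f"R = {r:10.0f} ohm -> estimate {est:10.0f} ohm "
              f"({abs(est - r) / r * 100:.2f}% error)")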


International Conference on Computer Vision | 2007

A reliable skin mole localization scheme

Taeg Sang Cho; William T. Freeman; Hensin Tsao

Mole pattern changes are important cues in detecting melanoma at an early stage. As a first step to automatically register mole pattern changes from skin images, this paper presents a framework to detect and label moles on skin images in the presence of clutter, occlusions, and varying imaging conditions. The input image is processed with cascaded blocks to successively discard non-mole pixels. Our method first searches the entire input image for skin regions using a non-parametric skin detection scheme, and the detected skin regions are further processed using a difference of Gaussian (DoG) filter to find possible mole candidates of varying sizes. Mole candidates are classified as moles in the final stage using a trained support vector machine. To increase the mole classification accuracy, hair is removed if present on the skin image using steerable filters and a graphical model. The performance of the designed system is evaluated with 28 test images, and the experimental results demonstrate the effectiveness of the proposed mole localization scheme.
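
A minimal sketch of the difference-of-Gaussians candidate stage described above (illustrative sigmas and threshold, not the paper's settings; the skin detection, hair removal, and SVM stages are omitted):

    import numpy as np
    from scipy.ndimage import gaussian_filter, label

    def dog_mole_candidates(gray, sigma=3.0, k=1.6, thresh=0.02):
        """Difference-of-Gaussians blob detector used here as a toy
        mole-candidate stage: dark, roughly circular spots on lighter skin
        give strongly negative DoG responses, whose connected components are
        returned as candidate centroids (row, column)."""
        dog = gaussian_filter(gray, sigma) - gaussian_filter(gray, k * sigma)
        labels, n = label(dog < -thresh)
        return [tuple(np.argwhere(labels == i).mean(axis=0).round(1))
                for i in range(1, n + 1)]

    # Synthetic example: a light "skin" image with two darker spots.
    img = np.full((100, 100), 0.8)
    yy, xx = np.mgrid[:100, :100]
    for cy, cx in [(30, 40), (70, 65)]:
        img[(yy - cy) ** 2 + (xx - cx) ** 2 < 16] = 0.3
    print(dog_mole_candidates(img))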


Design Automation Conference | 2008

A 32-µW 1.83-kS/s Carbon Nanotube Chemical Sensor System

Taeg Sang Cho; Kyeong-Jae Lee; Jing Kong; Anantha P. Chandrakasan

This paper presents an energy-efficient chemical sensor system that uses carbon nanotubes (CNT) as the sensor. The room-temperature operation of CNT sensors eliminates the need for micro hot-plate arrays, which enables the low energy operation of the system. The sensor interface chip is designed in a 0.18-µm CMOS process and consumes, at maximum, 32 µW at a 1.83 kS/s conversion rate. The designed interface achieves 1.34% measurement accuracy over a 10 kΩ to 9 MΩ dynamic range. The functionality of the full system, including CNT sensors, has been successfully demonstrated.

Collaboration


Dive into Taeg Sang Cho's collaborations.

Top Co-Authors

Anantha P. Chandrakasan (Massachusetts Institute of Technology)

Jing Kong (Massachusetts Institute of Technology)

Kyeong-Jae Lee (Massachusetts Institute of Technology)

Anat Levin (Weizmann Institute of Science)

Peter Sand (Massachusetts Institute of Technology)