Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Soheil Darabi is active.

Publication


Featured research published by Soheil Darabi.


International Conference on Computer Graphics and Interactive Techniques | 2012

Image melding: combining inconsistent images using patch-based synthesis

Soheil Darabi; Eli Shechtman; Connelly Barnes; Dan B. Goldman; Pradeep Sen

Current methods for combining two different images produce visible artifacts when the sources have very different textures and structures. We present a new method for synthesizing a transition region between two source images, such that inconsistent color, texture, and structural properties all change gradually from one source to the other. We call this process image melding. Our method builds upon a patch-based optimization foundation with three key generalizations: First, we enrich the patch search space with additional geometric and photometric transformations. Second, we integrate image gradients into the patch representation and replace the usual color averaging with a screened Poisson equation solver. And third, we propose a new energy based on mixed L2/L0 norms for colors and gradients that produces a gradual transition between sources without sacrificing texture sharpness. Together, all three generalizations enable patch-based solutions to a broad class of image melding problems involving inconsistent sources: object cloning, stitching challenging panoramas, hole filling from multiple photos, and image harmonization. In several cases, our unified method outperforms previous state-of-the-art methods specifically designed for those applications.
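
The gradient-handling step mentioned above can be illustrated with a screened Poisson solve. The sketch below is a minimal FFT-based solver for min_f alpha*||f - d||^2 + ||grad(f) - v||^2 with periodic boundaries, assuming numpy only; it is not the authors' full patch-based optimization (no patch search, no mixed L2/L0 energy), and all function names are illustrative.

```python
import numpy as np

def grad(f):
    # forward differences, periodic boundary
    gx = np.roll(f, -1, axis=1) - f
    gy = np.roll(f, -1, axis=0) - f
    return gx, gy

def grad_adjoint(gx, gy):
    # adjoint of the forward-difference gradient operator
    return (np.roll(gx, 1, axis=1) - gx) + (np.roll(gy, 1, axis=0) - gy)

def screened_poisson(d, vx, vy, alpha=0.1):
    """Solve (alpha*I + D^T D) f = alpha*d + D^T v in the Fourier domain."""
    h, w = d.shape
    ky = np.fft.fftfreq(h)[:, None]
    kx = np.fft.fftfreq(w)[None, :]
    lam = 4 * np.sin(np.pi * ky) ** 2 + 4 * np.sin(np.pi * kx) ** 2
    rhs = alpha * d + grad_adjoint(vx, vy)
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / (alpha + lam)))

# Example: blend the gradients of two images with a soft mask, then integrate.
# gx, gy = grad(img_a); hx, hy = grad(img_b)
# vx = mask * gx + (1 - mask) * hx
# vy = mask * gy + (1 - mask) * hy
# result = screened_poisson(mask * img_a + (1 - mask) * img_b, vx, vy)
```

Solving in the frequency domain keeps the solver direct and fast; the data term (weighted by alpha) is what prevents the usual color drift of pure Poisson blending.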


International Conference on Computer Graphics and Interactive Techniques | 2012

Robust patch-based HDR reconstruction of dynamic scenes

Pradeep Sen; Nima Khademi Kalantari; Maziar Yaesoubi; Soheil Darabi; Dan B. Goldman; Eli Shechtman

High dynamic range (HDR) imaging from a set of sequential exposures is an easy way to capture high-quality images of static scenes, but suffers from artifacts for scenes with significant motion. In this paper, we propose a new approach to HDR reconstruction that draws information from all the exposures but is more robust to camera/scene motion than previous techniques. Our algorithm is based on a novel patch-based energy-minimization formulation that integrates alignment and reconstruction in a joint optimization through an equation we call the HDR image synthesis equation. This allows us to produce an HDR result that is aligned to one of the exposures yet contains information from all of them. We present results that show considerable improvement over previous approaches.
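
For context, once the exposures are aligned, merging them into a radiance map is conceptually simple. The sketch below is the standard weighted merge for already-aligned LDR frames under an assumed linear camera response with a triangle weighting; it shows only this background step, not the paper's joint patch-based alignment and reconstruction.

```python
import numpy as np

def merge_exposures(ldr_frames, exposure_times):
    """Merge aligned LDR frames (values in [0, 1]) into a linear HDR radiance map."""
    acc = np.zeros_like(ldr_frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(ldr_frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # trust well-exposed (mid-tone) pixels most
        acc += w * img / t                   # assumes a linear camera response
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```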


Computer Graphics Forum | 2009

Compressive Dual Photography

Pradeep Sen; Soheil Darabi

The accurate measurement of the light transport characteristics of a complex scene is an important goal in computer graphics and has applications in relighting and dual photography. However, since the light transport data sets are typically very large, much of the previous research has focused on adaptive algorithms that capture them efficiently. In this work, we propose a novel, non-adaptive algorithm that takes advantage of the compressibility of the light transport signal in a transform domain to capture it with fewer acquisitions than standard approaches. To do this, we leverage recent work in the area of compressed sensing, where a signal is reconstructed from a few samples assuming that it is sparse in a transform domain. We demonstrate our approach by performing dual photography and relighting using far fewer acquisitions than would normally be needed. Because our algorithm is not adaptive, it is also simpler to implement than many of the current approaches.
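
The recovery side of this kind of compressed-sensing capture can be sketched with orthogonal matching pursuit: given measurements y = A x of a vector x that is k-sparse (with any sparsifying basis folded into the columns of A), OMP recovers x from far fewer measurements than unknowns. This is a generic CS solver as a stand-in, not necessarily the exact reconstruction algorithm used in the paper; the names and shapes are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x such that y ≈ A @ x."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        corr = np.abs(A.T @ residual)
        corr[support] = 0.0                       # do not reselect chosen atoms
        support.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x
```

In a dual-photography setting, A would combine the illumination patterns with a sparsifying transform, and each slice of the light transport data would be recovered from the same small set of measurements.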


ACM Transactions on Graphics | 2012

On filtering the noise from the random parameters in Monte Carlo rendering

Pradeep Sen; Soheil Darabi

Monte Carlo (MC) rendering systems can produce spectacular images but are plagued with noise at low sampling rates. In this work, we observe that this noise occurs in regions of the image where the sample values are a direct function of the random parameters used in the Monte Carlo system. Therefore, we propose a way to identify MC noise by estimating this functional relationship from a small number of input samples. To do this, we treat the rendering system as a black box and calculate the statistical dependency between the outputs and inputs of the system. We then use this information to reduce the importance of the sample values affected by MC noise when applying an image-space, cross-bilateral filter, which removes only the noise caused by the random parameters but preserves important scene detail. The process of using the functional relationships between sample values and the random parameter inputs to filter MC noise is called Random Parameter Filtering (RPF), and we demonstrate that it can produce images in a few minutes that are comparable to those rendered with a thousand times more samples. Furthermore, our algorithm is general because we do not assign any physical meaning to the random parameters, so it works for a wide range of Monte Carlo effects, including depth of field, area light sources, motion blur, and path-tracing. We present results for still images and animated sequences at low sampling rates that have higher quality than those produced with previous approaches.
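
The filtering stage builds on a cross-bilateral (joint) filter, where range weights come from a guide signal rather than the noisy values themselves. The sketch below shows only that core for a single-channel image; RPF additionally scales the feature weights by their statistical dependence on the random parameters, which is omitted here, and the parameter values are assumptions.

```python
import numpy as np

def cross_bilateral(noisy, guide, radius=5, sigma_s=3.0, sigma_r=0.1):
    """Joint bilateral filter: spatial Gaussian weights times range weights from a guide."""
    h, w = noisy.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_n = np.pad(noisy, radius, mode='reflect')
    pad_g = np.pad(guide, radius, mode='reflect')
    for i in range(h):
        for j in range(w):
            win_n = pad_n[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            win_g = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(win_g - guide[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * win_n) / np.sum(wgt)
    return out
```

Using the guide (e.g., scene features such as normals or textures) rather than the noisy colors for the range term is what lets the filter remove MC noise while keeping scene edges.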


International Conference on Computer Graphics and Interactive Techniques | 2013

Patch-based high dynamic range video

Nima Khademi Kalantari; Eli Shechtman; Connelly Barnes; Soheil Darabi; Dan B. Goldman; Pradeep Sen

Despite significant progress in high dynamic range (HDR) imaging over the years, it is still difficult to capture high-quality HDR video with a conventional, off-the-shelf camera. The most practical way to do this is to capture alternating exposures for every LDR frame and then use an alignment method based on optical flow to register the exposures together. However, this results in objectionable artifacts whenever there is complex motion and optical flow fails. To address this problem, we propose a new approach for HDR reconstruction from alternating exposure video sequences that combines the advantages of optical flow and recently introduced patch-based synthesis for HDR images. We use patch-based synthesis to enforce similarity between adjacent frames, increasing temporal continuity. To synthesize visually plausible solutions, we enforce constraints from motion estimation coupled with a search window map that guides the patch-based synthesis. This results in a novel reconstruction algorithm that can produce high-quality HDR videos with a standard camera. Furthermore, our method is able to synthesize plausible texture and motion in fast-moving regions, where either patch-based synthesis or optical flow alone would exhibit artifacts. We present results of our reconstructed HDR video sequences that are superior to those produced by current approaches.
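
At the core of such a reconstruction is a constrained patch search: candidate matches in a neighboring frame are restricted to a window around a motion-estimated location. The sketch below is only that brute-force SSD primitive under assumed square patches; the paper's search-window maps and patch-vote synthesis are not shown.

```python
import numpy as np

def best_match(patch, target, center, window):
    """Brute-force SSD search for `patch` in `target`, restricted to a window around `center`."""
    p = patch.shape[0]
    cy, cx = center
    best_cost, best_pos = np.inf, None
    for y in range(max(0, cy - window), min(target.shape[0] - p, cy + window) + 1):
        for x in range(max(0, cx - window), min(target.shape[1] - p, cx + window) + 1):
            cost = np.sum((target[y:y + p, x:x + p].astype(float) - patch) ** 2)
            if cost < best_cost:
                best_cost, best_pos = cost, (y, x)
    return best_pos, best_cost
```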


IEEE Transactions on Visualization and Computer Graphics | 2011

Compressive Rendering: A Rendering Application of Compressed Sensing

Pradeep Sen; Soheil Darabi

Recently, there has been growing interest in compressed sensing (CS), the new theory that shows how a small set of linear measurements can be used to reconstruct a signal if it is sparse in a transform domain. Although CS has been applied to many problems in other fields, in computer graphics it has so far only been used to accelerate the acquisition of light transport. In this paper, we propose a novel application of compressed sensing by using it to accelerate ray-traced rendering in a manner that exploits the sparsity of the final image in the wavelet basis. To do this, we ray trace only a subset of the pixel samples in the spatial domain and use a simple, greedy CS-based algorithm to estimate the wavelet transform of the image during rendering. Since the energy of the image is concentrated more compactly in the wavelet domain, fewer samples are required for a result of a given quality than with conventional spatial-domain rendering. By taking the inverse wavelet transform of the result, we compute an accurate reconstruction of the desired final image. Our results show that our framework can achieve high-quality images with approximately 75 percent of the pixel samples using a nonadaptive sampling scheme. We also perform better than other algorithms that might be used to fill in the missing pixel data, such as interpolation or inpainting. Furthermore, since the algorithm works in image space, it is completely independent of scene complexity.
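
One simple way to fill in unrendered pixels under a sparsity prior is iterative thresholding with a data-consistency step. The sketch below uses a DCT in place of the paper's wavelet basis and plain hard thresholding instead of the authors' greedy algorithm, so treat it as a generic stand-in; the threshold and iteration count are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def fill_missing_pixels(samples, mask, n_iter=200, thresh=0.05):
    """Reconstruct an image from a subset of pixel samples via iterative thresholding.

    samples: image with rendered values where mask is True and zeros elsewhere.
    A DCT stands in here for the wavelet basis used in the paper.
    """
    x = samples.astype(float).copy()
    for _ in range(n_iter):
        c = dctn(x, norm='ortho')
        c[np.abs(c) < thresh] = 0.0        # keep only the large transform coefficients
        x = idctn(c, norm='ortho')
        x[mask] = samples[mask]            # re-impose the pixels we actually rendered
    return x
```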


Asilomar Conference on Signals, Systems and Computers | 2009

Compressive image super-resolution

Pradeep Sen; Soheil Darabi

This paper proposes a new algorithm to generate a super-resolution image from a single, low-resolution input without the use of a training data set. We do this by exploiting the fact that the image is highly compressible in the wavelet domain and leverage recent results of compressed sensing (CS) theory to make an accurate estimate of the original high-resolution image. Unfortunately, traditional CS approaches do not allow direct use of a wavelet compression basis because of the coherency between the point-samples from the downsampling process and the wavelet basis. To overcome this problem, we incorporate the downsampling low-pass filter into our measurement matrix, which decreases coherency between the bases. To invert the downsampling process, we use the appropriate inverse filter and solve for the high-resolution image using a greedy, matching-pursuit algorithm. The result is a simple and efficient algorithm that can generate high-quality, high-resolution images without the use of training data. We present results that show the improved performance of our method over existing super-resolution approaches.
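
The key idea of folding the anti-aliasing low-pass filter into the measurement operator can be sketched as follows: the forward model is A(x) = decimate(blur(x)), and the high-resolution image is recovered under a sparsity prior. This sketch uses ISTA with DCT sparsity instead of the paper's wavelet basis and matching-pursuit solver, and the Gaussian blur, step size, and regularization weight are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def forward(x, s):
    # measurement operator: low-pass filter, then decimate by a factor of s
    return gaussian_filter(x, sigma=s)[::s, ::s]

def adjoint(y, s, shape):
    # adjoint: zero-filled upsampling followed by the (symmetric) Gaussian filter
    up = np.zeros(shape)
    up[::s, ::s] = y
    return gaussian_filter(up, sigma=s)

def super_resolve(low, s, shape, n_iter=200, lam=0.01, step=1.0):
    """ISTA on 0.5*||A x - y||^2 + lam*||DCT(x)||_1, with A = blur + decimation."""
    x = adjoint(low, s, shape)                        # crude initial guess
    for _ in range(n_iter):
        g = adjoint(forward(x, s) - low, s, shape)    # gradient of the data term
        c = dctn(x - step * g, norm='ortho')
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)   # soft threshold
        x = idctn(c, norm='ortho')
    return x
```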


International Conference on Image Processing | 2009

A novel framework for imaging using compressed sensing

Pradeep Sen; Soheil Darabi

Recently, there has been growing interest in using compressed sensing to perform imaging. Most of these algorithms capture the image of a scene by taking projections of the imaged scene with a large set of different random patterns. Unfortunately, these methods require thousands of serial measurements in order to reconstruct a high quality image, which makes them impractical for most real-world imaging applications. In this work, we explore the idea of performing sparse image capture from a single image taken at one moment in time. Our framework measures a subset of the pixels in the photograph and uses compressed sensing algorithms to reconstruct the entire image from this data. The benefit of our approach is that we can get a high-quality image while reducing the bandwidth of the imaging device because we only read a fraction of the pixels, not the entire array. Our approach can also be used to accurately fill in the missing pixel information for sensor arrays with defective pixels. We demonstrate better reconstructions of test images using our approach than with traditional reconstruction methods.


International Conference on Image Processing | 2010

Compressed sensing for aperture synthesis imaging

Stephan Wenger; Soheil Darabi; Pradeep Sen; Karl-Heinz Glassmeier; Marcus A. Magnor

The theory of compressed sensing has a natural application in interferometric aperture synthesis. As in many real-world applications, however, the assumption of random sampling, which is elementary to many propositions of this theory, is not met. Instead, the induced sampling patterns exhibit a large degree of regularity. In this paper, we statistically quantify the effects of this kind of regularity for the problem of radio interferometry where astronomical images are sparsely sampled in the frequency domain. Based on the favorable results of our statistical evaluation, we present a practical method for interferometric image reconstruction that is evaluated on observational data from the Very Large Array (VLA) telescope.
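
In radio interferometry the measurements are samples of the image's Fourier transform (the uv plane), so a minimal compressed-sensing reconstruction alternates between enforcing the measured visibilities and applying a sparsity prior. The toy sketch below assumes the sky image itself is sparse (point sources) and uses simple iterative thresholding; it is not the method evaluated in the paper, and the threshold and iteration count are placeholders.

```python
import numpy as np

def reconstruct_sky(visibilities, uv_mask, n_iter=100, thresh=1e-3):
    """Recover an image from partial Fourier-plane samples by iterative thresholding.

    visibilities: complex array holding measured Fourier samples where uv_mask is True.
    """
    img = np.zeros(uv_mask.shape)
    for _ in range(n_iter):
        f = np.fft.fft2(img)
        f[uv_mask] = visibilities[uv_mask]     # enforce the measured visibilities
        img = np.real(np.fft.ifft2(f))
        img[np.abs(img) < thresh] = 0.0        # sparsity prior: mostly point sources
    return img
```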


Eurographics | 2010

Compressive estimation for signal integration in rendering

Pradeep Sen; Soheil Darabi

In rendering applications, we are often faced with the problem of computing the integral of an unknown function. Typical approaches used to estimate these integrals are often based on Monte Carlo methods that slowly converge to the correct answer after many point samples have been taken. In this work, we study this problem under the framework of compressed sensing and reach the conclusion that if the signal is sparse in a transform domain, we can evaluate the integral accurately using a small set of point samples without requiring the lengthy iterations of Monte Carlo approaches. We demonstrate the usefulness of our framework by proposing novel algorithms to address two problems in computer graphics: image antialiasing and motion blur. We show that we can use our framework to generate good results with fewer samples than is possible with traditional approaches.
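
The idea can be made concrete in 1D: if the integrand is sparse in a transform domain, a handful of point samples suffice to recover its coefficients, and the integral follows directly from the reconstruction. The sketch below uses a DCT basis and scikit-learn's OMP solver as stand-ins; the sample counts and sparsity level are assumptions, and the rendering-specific estimators from the paper are not shown.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

def compressive_integral(f, n=256, m=32, k=8, seed=0):
    """Estimate the integral of f over [0, 1] from m point samples, assuming DCT sparsity."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=m, replace=False)
    samples = f(idx / n)                              # f must accept a vector of positions
    Psi = idct(np.eye(n), norm='ortho', axis=0)       # signal = Psi @ coefficients
    A = Psi[idx, :]                                   # measurement rows at sampled positions
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, samples)
    recon = Psi @ omp.coef_
    return recon.mean()                               # Riemann-sum estimate of the integral
```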

Collaboration


Dive into Soheil Darabi's collaborations.

Top Co-Authors

Pradeep Sen
University of California

Maziar Yaesoubi
The Mind Research Network

Steve Bako
University of California