Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Connelly Barnes is active.

Publication


Featured research published by Connelly Barnes.


International Conference on Computer Graphics and Interactive Techniques | 2009

PatchMatch: a randomized correspondence algorithm for structural image editing

Connelly Barnes; Eli Shechtman; Adam Finkelstein; Dan B. Goldman

This paper presents interactive image editing tools using a new randomized algorithm for quickly finding approximate nearest-neighbor matches between image patches. Previous research in graphics and vision has leveraged such nearest-neighbor searches to provide a variety of high-level digital image editing tools. However, the cost of computing a field of such matches for an entire image has eluded previous efforts to provide interactive performance. Our algorithm offers substantial performance improvements over the previous state of the art (20-100x), enabling its use in interactive editing tools. The key insights driving the algorithm are that some good patch matches can be found via random sampling, and that natural coherence in the imagery allows us to propagate such matches quickly to surrounding areas. We offer theoretical analysis of the convergence properties of the algorithm, as well as empirical and practical evidence for its high quality and performance. This one simple algorithm forms the basis for a variety of tools -- image retargeting, completion and reshuffling -- that can be used together in the context of a high-level image editing application. Finally, we propose additional intuitive constraints on the synthesis process that offer the user a level of control unavailable in previous methods.
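The two insights named in the abstract — good matches can be found by random sampling, and image coherence lets them propagate to neighbors — are easy to see in miniature. Below is a toy Python/NumPy sketch of the core loop on small grayscale arrays; `patchmatch` and `patch_dist` are illustrative names, not the authors' implementation, and the published algorithm adds many refinements.

```python
import numpy as np

def patch_dist(A, B, ax, ay, bx, by, p=3):
    # Sum-of-squared-differences between p x p patches (top-left anchored).
    pa = A[ay:ay + p, ax:ax + p]
    pb = B[by:by + p, bx:bx + p]
    return float(((pa - pb) ** 2).sum())

def patchmatch(A, B, p=3, iters=6, seed=0):
    # Nearest-neighbor field: for each patch in A, coordinates of a similar patch in B.
    rng = np.random.default_rng(seed)
    H, W = A.shape[0] - p + 1, A.shape[1] - p + 1
    HB, WB = B.shape[0] - p + 1, B.shape[1] - p + 1
    # Random initialization.
    nnf = np.stack([rng.integers(0, WB, (H, W)),
                    rng.integers(0, HB, (H, W))], axis=-1)
    dist = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            dist[y, x] = patch_dist(A, B, x, y, nnf[y, x, 0], nnf[y, x, 1], p)
    for it in range(iters):
        # Alternate scan order so matches propagate in both directions.
        dx = 1 if it % 2 == 0 else -1
        xs = range(W) if dx == 1 else range(W - 1, -1, -1)
        ys = range(H) if dx == 1 else range(H - 1, -1, -1)
        for y in ys:
            for x in xs:
                # Propagation: try the already-scanned neighbors' matches, shifted.
                for ox, oy in ((-dx, 0), (0, -dx)):
                    nx, ny = x + ox, y + oy
                    if 0 <= nx < W and 0 <= ny < H:
                        bx = nnf[ny, nx, 0] - ox
                        by = nnf[ny, nx, 1] - oy
                        if 0 <= bx < WB and 0 <= by < HB:
                            d = patch_dist(A, B, x, y, bx, by, p)
                            if d < dist[y, x]:
                                nnf[y, x] = (bx, by)
                                dist[y, x] = d
                # Random search: sample around the current match at shrinking radii.
                r = max(WB, HB)
                while r >= 1:
                    bx = nnf[y, x, 0] + rng.integers(-r, r + 1)
                    by = nnf[y, x, 1] + rng.integers(-r, r + 1)
                    if 0 <= bx < WB and 0 <= by < HB:
                        d = patch_dist(A, B, x, y, bx, by, p)
                        if d < dist[y, x]:
                            nnf[y, x] = (bx, by)
                            dist[y, x] = d
                    r //= 2
    return nnf, dist
```

Matching an image against itself is a quick sanity check: distances should collapse toward zero as exact matches are found and propagated.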


European Conference on Computer Vision | 2010

The generalized PatchMatch correspondence algorithm

Connelly Barnes; Eli Shechtman; Dan B. Goldman; Adam Finkelstein

PatchMatch is a fast algorithm for computing dense approximate nearest neighbor correspondences between patches of two image regions [1]. This paper generalizes PatchMatch in three ways: (1) to find k nearest neighbors, as opposed to just one, (2) to search across scales and rotations, in addition to just translations, and (3) to match using arbitrary descriptors and distances, not just sum-of-squared-differences on patch colors. In addition, we offer new search and parallelization strategies that further accelerate the method, and we show performance improvements over standard kd-tree techniques across a variety of inputs. In contrast to many previous matching algorithms, which for efficiency reasons have restricted matching to sparse interest points, or spatially proximate matches, our algorithm can efficiently find global, dense matches, even while matching across all scales and rotations. This is especially useful for computer vision applications, where our algorithm can be used as an efficient general-purpose component. We explore a variety of vision applications: denoising, finding forgeries by detecting cloned regions, symmetry detection, and object detection.
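Generalization (1), keeping the k best matches per patch instead of a single one, is naturally maintained with a bounded max-heap so the worst retained match can be evicted in O(log k). A small sketch of that bookkeeping (the helper name `knn_update` is invented for illustration, not taken from the paper):

```python
import heapq

def knn_update(heap, k, dist, coords):
    # Maintain the k best (smallest-distance) matches for one patch.
    # heapq is a min-heap, so distances are negated: heap[0] is the
    # worst match currently kept.
    if any(c == coords for _, c in heap):
        return  # already kept; avoid duplicate correspondences
    if len(heap) < k:
        heapq.heappush(heap, (-dist, coords))
    elif dist < -heap[0][0]:
        heapq.heapreplace(heap, (-dist, coords))
```

Each propagation or random-search candidate is fed through this update instead of a single best-match comparison.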


Programming Language Design and Implementation | 2013

Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines

Jonathan Ragan-Kelley; Connelly Barnes; Andrew Adams; Sylvain Paris; Saman P. Amarasinghe

Image processing pipelines combine the challenges of stencil computations and stream programs. They are composed of large graphs of different stencil stages, as well as complex reductions, and stages with global or data-dependent access patterns. Because of their complex structure, the performance difference between a naive implementation of a pipeline and an optimized one is often an order of magnitude. Efficient implementations require optimization of both parallelism and locality, but due to the nature of stencils, there is a fundamental tension between parallelism, locality, and introducing redundant recomputation of shared values. We present a systematic model of the tradeoff space fundamental to stencil pipelines, a schedule representation which describes concrete points in this space for each stage in an image processing pipeline, and an optimizing compiler for the Halide image processing language that synthesizes high performance implementations from a Halide algorithm and a schedule. Combining this compiler with stochastic search over the space of schedules enables terse, composable programs to achieve state-of-the-art performance on a wide range of real image processing pipelines, and across different hardware architectures, including multicores with SIMD, and heterogeneous CPU+GPU execution. From simple Halide programs written in a few hours, we demonstrate performance up to 5x faster than hand-tuned C, intrinsics, and CUDA implementations optimized by experts over weeks or months, for image processing applications beyond the reach of past automatic compilers.
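The parallelism/locality/recomputation tradeoff the abstract describes can be illustrated on the classic two-stage blur. The sketch below is plain Python/NumPy, not Halide: `blur_breadth_first` mimics a compute-root-style schedule (fully materialize the first stage, maximizing reuse but with a large working set), while `blur_fused` mimics a fused schedule that recomputes each intermediate row three times in exchange for a cache-sized working set. Both produce the same pixels.

```python
import numpy as np

def blur_breadth_first(img):
    # "Compute root" analogue: materialize the entire horizontal pass,
    # then run the vertical pass over it. No redundant work, poor locality.
    bx = (img[:, :-2] + img[:, 1:-1] + img[:, 2:]) / 3.0
    return (bx[:-2, :] + bx[1:-1, :] + bx[2:, :]) / 3.0

def blur_fused(img):
    # Fused analogue: for each output row, recompute the three
    # horizontal-blur rows it needs. Each intermediate row is computed
    # three times, but the working set is a handful of rows.
    H, W = img.shape
    out = np.empty((H - 2, W - 2))
    def bx_row(y):
        row = img[y]
        return (row[:-2] + row[1:-1] + row[2:]) / 3.0
    for y in range(H - 2):
        out[y] = (bx_row(y) + bx_row(y + 1) + bx_row(y + 2)) / 3.0
    return out
```

In Halide these are two schedules for one algorithm; a compiler can move along this axis (and tile, vectorize, parallelize) without touching the algorithm definition.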


International Conference on Computer Graphics and Interactive Techniques | 2012

Image melding: combining inconsistent images using patch-based synthesis

Soheil Darabi; Eli Shechtman; Connelly Barnes; Dan B. Goldman; Pradeep Sen

Current methods for combining two different images produce visible artifacts when the sources have very different textures and structures. We present a new method for synthesizing a transition region between two source images, such that inconsistent color, texture, and structural properties all change gradually from one source to the other. We call this process image melding. Our method builds upon a patch-based optimization foundation with three key generalizations: First, we enrich the patch search space with additional geometric and photometric transformations. Second, we integrate image gradients into the patch representation and replace the usual color averaging with a screened Poisson equation solver. And third, we propose a new energy based on mixed L2/L0 norms for colors and gradients that produces a gradual transition between sources without sacrificing texture sharpness. Together, all three generalizations enable patch-based solutions to a broad class of image melding problems involving inconsistent sources: object cloning, stitching challenging panoramas, hole filling from multiple photos, and image harmonization. In several cases, our unified method outperforms previous state-of-the-art methods specifically designed for those applications.
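The screened Poisson step can be illustrated in one dimension: it balances fidelity to target colors against fidelity to target gradients, which is what lets color and texture change gradually across the transition region. A minimal NumPy sketch (a dense direct solve purely for illustration; the paper's solver works on full images, and `lam` is an illustrative screening weight):

```python
import numpy as np

def screened_poisson_1d(colors, grads, lam=0.1):
    # Solve  min_x  lam * ||x - colors||^2 + ||D x - grads||^2
    # where D is the forward-difference operator. Normal equations:
    #   (lam * I + D^T D) x = lam * colors + D^T grads
    n = len(colors)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = lam * np.eye(n) + D.T @ D
    b = lam * colors + D.T @ grads
    return np.linalg.solve(A, b)
```

When the target gradients are exactly the differences of the target colors the solve reproduces the colors; when they disagree, the screening weight controls the compromise.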


International Conference on Computer Graphics and Interactive Techniques | 2007

Digital bas-relief from 3D scenes

Tim Weyrich; Jia Deng; Connelly Barnes; Szymon Rusinkiewicz; Adam Finkelstein

We present a system for semi-automatic creation of bas-relief sculpture. As an artistic medium, relief spans the continuum between 2D drawing or painting and full 3D sculpture. Bas-relief (or low relief) presents the unique challenge of squeezing shapes into a nearly-flat surface while maintaining as much as possible the perception of the full 3D scene. Our solution to this problem adapts methods from the tone-mapping literature, which addresses the similar problem of squeezing a high dynamic range image into the (low) dynamic range available on typical display devices. However, the bas-relief medium imposes its own unique set of requirements, such as maintaining small, fixed-size depth discontinuities. Given a 3D model, camera, and a few parameters describing the relative attenuation of different frequencies in the shape, our system creates a relief that gives the illusion of the 3D shape from a given vantage point while conforming to a greatly compressed height.
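The tone-mapping analogy — attenuate large variations while preserving small ones, then reintegrate — can be shown on a 1D height profile. A toy sketch (the logarithmic compression curve and the `alpha` knee are illustrative assumptions, not the paper's frequency-based operator):

```python
import numpy as np

def bas_relief_1d(heights, alpha=0.2):
    # Tone-mapping-style depth compression for a 1D height profile:
    # squash large gradients much more than small ones, then
    # reintegrate by cumulative sum.
    g = np.diff(heights)
    # Compressive curve: near-identity for |g| << alpha, logarithmic beyond.
    g_c = np.sign(g) * alpha * np.log1p(np.abs(g) / alpha)
    return np.concatenate([[heights[0]], heights[0] + np.cumsum(g_c)])
```

A profile with one large step keeps its small surface detail while the overall depth range collapses, which is the essence of squeezing a 3D shape into a nearly flat relief.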


IEEE Transactions on Education | 2008

Enhancement of Student Learning in Experimental Design Using a Virtual Laboratory

Milo Koretsky; Danielle Amatore; Connelly Barnes; Sho Kimura

This paper describes the instructional design, implementation, and assessment of a virtual laboratory based on a numerical simulation of a chemical vapor deposition (CVD) process, the virtual CVD laboratory. The virtual CVD laboratory provides a capstone experience in which students synthesize engineering science and statistics principles and have the opportunity to apply experimental design in a context similar to that of a practicing engineer in industry, with a wider design space than is typically seen in the undergraduate laboratory. The simulation of the reactor is based on fundamental principles of mass transfer and chemical reaction, obscured by added "noise." The software application contains a 3-D student client that simulates a cleanroom environment, an instructor Web interface with integrated assessment tools, and a database server. As opposed to being constructed as a direct one-to-one replacement, this virtual laboratory is intended to complement the physical laboratories in the curriculum so that certain specific elements of student learning can be enhanced. Implementation in four classes is described. Assessment demonstrates that students use an iterative experimental design process reflective of practicing engineers, and correlates success in this project with higher-order thinking skills. Student surveys indicate that students perceived the virtual CVD laboratory as the most effective learning medium used, even above physical laboratories.


International Conference on Computer Graphics and Interactive Techniques | 2014

Style transfer for headshot portraits

YiChang Shih; Sylvain Paris; Connelly Barnes; William T. Freeman

Headshot portraits are a popular subject in photography, but achieving a compelling visual style requires advanced skills that a casual photographer will not have. Further, algorithms that automate or assist the stylization of generic photographs do not perform well on headshots due to the feature-specific, local retouching that a professional photographer typically applies to generate such portraits. We introduce a technique to transfer the style of an example headshot photo onto a new one. This can allow one to easily reproduce the look of renowned artists. At the core of our approach is a new multiscale technique to robustly transfer the local statistics of an example portrait onto a new one. This technique matches properties such as the local contrast and the overall lighting direction while being tolerant to the unavoidable differences between the faces of two different people. Additionally, because artists sometimes produce entire headshot collections in a common style, we show how to automatically find a good example to use as a reference for a given portrait, enabling style transfer without the user having to search for a suitable example for each input. We demonstrate our approach on data taken in a controlled environment as well as on a large set of photos downloaded from the Internet. We show that we can successfully handle styles by a variety of different artists.
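The core operation, transferring local statistics, reduces at a single scale to normalizing the input by its own local mean and standard deviation and re-applying the example's. A 1D NumPy sketch under that simplification (function names and the box window are illustrative; the paper works multiscale on aligned faces):

```python
import numpy as np

def local_stats(x, w):
    # Moving local mean and standard deviation with a box window (edge-padded).
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    k = np.ones(w) / w
    mu = np.convolve(xp, k, mode="valid")
    mu2 = np.convolve(xp ** 2, k, mode="valid")
    sigma = np.sqrt(np.maximum(mu2 - mu ** 2, 1e-12))
    return mu, sigma

def transfer_local_stats(inp, example, w=5):
    # Remove the input's local statistics, then impose the example's:
    # the output takes on the example's local contrast (sigma) and
    # local brightness (mu).
    mu_i, s_i = local_stats(inp, w)
    mu_e, s_e = local_stats(example, w)
    return (inp - mu_i) / s_i * s_e + mu_e
```

Repeating this per level of a Laplacian-style pyramid is what makes the transfer multiscale: coarse levels carry lighting direction, fine levels carry local contrast and texture.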


International Conference on Computer Graphics and Interactive Techniques | 2010

Video tapestries with continuous temporal zoom

Connelly Barnes; Dan B. Goldman; Eli Shechtman; Adam Finkelstein

We present a novel approach for summarizing video in the form of a multiscale image that is continuous in both the spatial domain and across the scale dimension: There are no hard borders between discrete moments in time, and a user can zoom smoothly into the image to reveal additional temporal details. We call these artifacts tapestries because their continuous nature is akin to medieval tapestries and other narrative depictions predating the advent of motion pictures. We propose a set of criteria for such a summarization, and a series of optimizations motivated by these criteria. These can be performed as an entirely offline computation to produce high quality renderings, or by adjusting some optimization parameters the later stages can be solved in real time, enabling an interactive interface for video navigation. Our video tapestries combine the best aspects of two common visualizations, providing the visual clarity of DVD chapter menus with the information density and multiple scales of a video editing timeline representation. In addition, they provide continuous transitions between zoom levels. In a user study, participants preferred both the aesthetics and efficiency of tapestries over other interfaces for visual browsing.


International Conference on Computer Graphics and Interactive Techniques | 2013

Patch-based high dynamic range video

Nima Khademi Kalantari; Eli Shechtman; Connelly Barnes; Soheil Darabi; Dan B. Goldman; Pradeep Sen

Despite significant progress in high dynamic range (HDR) imaging over the years, it is still difficult to capture high-quality HDR video with a conventional, off-the-shelf camera. The most practical way to do this is to capture alternating exposures for every LDR frame and then use an alignment method based on optical flow to register the exposures together. However, this results in objectionable artifacts whenever there is complex motion and optical flow fails. To address this problem, we propose a new approach for HDR reconstruction from alternating exposure video sequences that combines the advantages of optical flow and recently introduced patch-based synthesis for HDR images. We use patch-based synthesis to enforce similarity between adjacent frames, increasing temporal continuity. To synthesize visually plausible solutions, we enforce constraints from motion estimation coupled with a search window map that guides the patch-based synthesis. This results in a novel reconstruction algorithm that can produce high-quality HDR videos with a standard camera. Furthermore, our method is able to synthesize plausible texture and motion in fast-moving regions, where either patch-based synthesis or optical flow alone would exhibit artifacts. We present results of our reconstructed HDR video sequences that are superior to those produced by current approaches.
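A standard ingredient behind any alternating-exposure pipeline is merging differently exposed frames into radiance with per-pixel confidence weights. A generic sketch of that step for two aligned exposures (a common triangle weighting, not the paper's specific reconstruction):

```python
import numpy as np

def merge_exposures(ldr_short, ldr_long, t_short, t_long):
    # Merge two aligned, linear LDR frames (values in [0, 1]) into HDR
    # radiance. Each pixel is divided by its exposure time, then blended
    # with a "hat" weight that trusts mid-range values and distrusts
    # values near 0 (noisy) or 1 (clipped).
    def weight(v):
        return 1.0 - np.abs(2.0 * v - 1.0)  # triangle weight, peak at 0.5
    w_s, w_l = weight(ldr_short), weight(ldr_long)
    num = w_s * ldr_short / t_short + w_l * ldr_long / t_long
    den = w_s + w_l
    # Fall back to the short exposure where both weights vanish.
    return np.where(den > 1e-6, num / np.maximum(den, 1e-6), ldr_short / t_short)
```

The paper's contribution sits upstream of this merge: patch-based synthesis and constrained motion estimation produce the aligned exposures that simple weighting like this assumes.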


International Conference on Computer Graphics and Interactive Techniques | 2008

Video puppetry: a performative interface for cutout animation

Connelly Barnes; David E. Jacobs; Jason Sanders; Dan B. Goldman; Szymon Rusinkiewicz; Adam Finkelstein; Maneesh Agrawala

We present a video-based interface that allows users of all skill levels to quickly create cutout-style animations by performing the character motions. The puppeteer first creates a cast of physical puppets using paper, markers and scissors. He then physically moves these puppets to tell a story. Using an inexpensive overhead camera, our system tracks the motions of the puppets and renders them on a new background while removing the puppeteer's hands. Our system runs in real-time (at 30 fps) so that the puppeteer and the audience can immediately see the animation that is created. Our system also supports a variety of constraints and effects including articulated characters, multi-track animation, scene changes, camera controls, 2 1/2-D environments, shadows, and animation cycles. Users have evaluated our system both quantitatively and qualitatively: In tests of low-level dexterity, our system has similar accuracy to a mouse interface. For simple storytelling, users prefer our system over either a mouse interface or traditional puppetry. We demonstrate that even first-time users, including an eleven-year-old, can use our system to quickly turn an original story idea into an animation.

Collaboration


Dive into Connelly Barnes's collaborations.
