
Publication


Featured research published by Orazio Gallo.


international conference on computational photography | 2009

Artifact-free High Dynamic Range imaging

Orazio Gallo; Natasha Gelfand; Marius Tico; Kari Pulli

The contrast in real world scenes is often beyond what consumer cameras can capture. For these situations, High Dynamic Range (HDR) images can be generated by taking multiple exposures of the same scene. When fusing information from different images, however, the slightest change in the scene can generate artifacts which dramatically limit the potential of this solution. We present a technique capable of dealing with a large amount of movement in the scene: we find, in all the available exposures, patches consistent with a reference image previously selected from the stack. We generate the HDR image by averaging the radiance estimates of all such regions and we compensate for camera calibration errors by removing potential seams. We show that our method works even in cases when many moving objects cover large regions of the scene.
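The merging step described above, averaging the radiance estimates of the consistent regions, can be sketched as follows. This is a deliberately simplified illustration (a global weighted average over pre-aligned frames, assuming a linear camera response), not the paper's patch-consistency method; the function name and the hat-shaped weight are illustrative choices:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge differently exposed LDR frames into one radiance map.

    Simplified sketch: assumes a linear camera response and uses a
    hat-shaped weight so over- and under-exposed pixels contribute
    little. `images` are float arrays in [0, 1], pre-aligned to a
    reference frame.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight, peaks at mid-gray
        acc += w * (img / t)                 # per-frame radiance estimate
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```

Dividing each pixel by its exposure time converts intensities to comparable radiance estimates, so well-exposed pixels from any frame can be averaged together.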


computer vision and pattern recognition | 2013

HDR Deghosting: How to Deal with Saturation?

Jun Hu; Orazio Gallo; Kari Pulli; Xiaobai Sun

We present a novel method for aligning images in an HDR (high-dynamic-range) image stack to produce a new exposure stack where all the images are aligned and appear as if they were taken simultaneously, even in the case of highly dynamic scenes. Our method produces plausible results even where the image used as a reference is either too dark or too bright to allow for an accurate registration.
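The core difficulty the abstract points at is that frames in a stack differ in exposure, so they must be brought to a common brightness before comparison, and saturated pixels must be ignored. A toy sketch of that idea is below; the brute-force integer-shift search and all names are hypothetical simplifications, not the paper's registration method:

```python
import numpy as np

def align_to_reference(ref, img, exposure_ratio, max_shift=8, sat=0.95):
    """Estimate an integer translation aligning `img` to `ref`.

    Hypothetical sketch: `img` is first brought to the reference's
    exposure, saturated pixels are masked out, and the shift that
    minimizes the masked SSD is found by exhaustive search.
    """
    norm = np.clip(img / exposure_ratio, 0.0, 1.0)  # match ref's exposure
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(norm, (dy, dx), axis=(0, 1))
            mask = (ref < sat) & (shifted < sat)     # ignore saturation
            if mask.sum() == 0:
                continue
            err = np.mean((ref[mask] - shifted[mask]) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best
```

Real stacks need sub-pixel, non-rigid alignment and a proper camera response curve; this only illustrates why exposure normalization and saturation masking come before any matching.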


international conference on computer graphics and interactive techniques | 2014

FlexISP: a flexible camera image processing framework

Felix Heide; Markus Steinberger; Yun-Ta Tsai; Mushfiqur Rouf; Dawid Pająk; Dikpal Reddy; Orazio Gallo; Jing Liu; Wolfgang Heidrich; Karen O. Egiazarian; Jan Kautz; Kari Pulli

Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.
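The end-to-end formulation described above can be caricatured as one optimization that balances fidelity to the sensor data against a natural-image prior. The sketch below is a generic plug-and-play style loop, NOT FlexISP's actual primal-dual solver; `forward`, `adjoint`, and `denoise` are placeholders the caller supplies:

```python
import numpy as np

def joint_restore(y, forward, adjoint, denoise, iters=50, step=0.5):
    """Toy sketch of joint restoration: alternate a gradient step on
    the data term ||A x - y||^2 with a denoising step that stands in
    for a natural-image prior. `forward`/`adjoint` model the camera
    (A and its transpose); `denoise` is any off-the-shelf denoiser.
    """
    x = adjoint(y)                              # crude initialization
    for _ in range(iters):
        x = x - step * adjoint(forward(x) - y)  # data-fidelity step
        x = denoise(x)                          # prior (regularization) step
    return x
```

The point of the structure is that demosaicking, denoising, and deconvolution all become different choices of `forward`, while the prior is shared, which is what lets a single system replace a cascade of independent modules.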


IEEE Transactions on Computational Imaging | 2017

Loss Functions for Image Restoration With Neural Networks

Hang Zhao; Orazio Gallo; Iuri Frosio; Jan Kautz

Neural networks are becoming central in several areas of computer vision and image processing, and different architectures have been proposed to solve specific problems. The impact of the loss layer of neural networks, however, has not received much attention in the context of image processing: the default and virtually only choice is ℓ2. In this paper, we bring attention to alternative choices for image restoration. In particular, we show the importance of perceptually-motivated losses when the resulting image is to be evaluated by a human observer. We compare the performance of several losses, and propose a novel, differentiable error function. We show that the quality of the results improves significantly with better loss functions, even when the network architecture is left unchanged.


computer vision and pattern recognition | 2008

Robust curb and ramp detection for safe parking using the Canesta TOF camera

Orazio Gallo; Roberto Manduchi; Abbas Rafii

Range sensors for assisted backup and parking have potential for saving human lives and for facilitating parking in challenging situations. However, important features such as curbs and ramps are difficult to detect using ultrasonic or microwave sensors. TOF imaging range sensors may be used successfully for this purpose. In this paper we present a study concerning the use of the Canesta TOF camera for recognition of curbs and ramps. Our approach is based on the detection of individual planar patches using CC-RANSAC, a modified version of the classic RANSAC robust regression algorithm. Whereas RANSAC uses the whole set of inliers to evaluate the fitness of a candidate plane, CC-RANSAC only considers the largest connected components of inliers. We provide experimental evidence that CC-RANSAC provides a more accurate estimation of the dominant plane than RANSAC with a smaller number of iterations.


european conference on computer vision | 2012

Exposure stacks of live scenes with hand-held cameras

Jun Hu; Orazio Gallo; Kari Pulli

Many computational photography applications require the user to take multiple pictures of the same scene with different camera settings. While this allows capturing more information about the scene than is possible with a single image, the approach is limited by the requirement that the images be perfectly registered. In a typical scenario the camera is hand-held and is therefore prone to moving during the capture of an image burst, while the scene is likely to contain moving objects. Combining such images without careful registration introduces annoying artifacts in the final image. This paper presents a method to register exposure stacks in the presence of both camera motion and scene changes. Our approach warps and modifies the content of the images in the stack to match that of a reference image. Even in the presence of large, highly non-rigid displacements, we show that the images are correctly registered to the reference.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2011

Reading 1D Barcodes with Mobile Phones Using Deformable Templates

Orazio Gallo; Roberto Manduchi

Camera cellphones have become ubiquitous, thus opening a plethora of opportunities for mobile vision applications. For instance, they can enable users to access reviews or price comparisons for a product from a picture of its barcode while still in the store. Barcode reading needs to be robust to challenging conditions such as blur, noise, low resolution, or low-quality camera lenses, all of which are extremely common. Surprisingly, even state-of-the-art barcode reading algorithms fail when some of these factors come into play. One reason resides in the early-commitment strategy that virtually all existing algorithms adopt: the image is first binarized, and then only the binary data are processed. We propose a new approach to barcode decoding that bypasses binarization. Our technique relies on deformable templates and exploits all of the gray-level information of each pixel. Due to our parameterization of these templates, we can efficiently perform maximum likelihood estimation independently on each digit and enforce spatial coherence in a subsequent step. We show by way of experiments on challenging UPC-A barcode images from five different databases that our approach outperforms competing algorithms. Implemented on a Nokia N95 phone, our algorithm can localize and decode a barcode on a VGA image (640 × 480, JPEG compressed) in an average time of 400-500 ms.


Computer Graphics Forum | 2012

Metering for Exposure Stacks

Orazio Gallo; Marius Tico; Roberto Manduchi; Natasha Gelfand; Kari Pulli

When creating a High-Dynamic-Range (HDR) image from a sequence of differently exposed Low-Dynamic-Range (LDR) images, the set of LDR images is usually generated by sampling the space of exposure times with a geometric progression, without explicitly accounting for the distribution of irradiance values in the scene. We argue that this choice can produce sub-optimal results both in terms of the number of acquired pictures and the quality of the resulting HDR image.


international conference on image processing | 2008

Camera-based pointing interface for mobile devices

Orazio Gallo; Sonia M. Arteaga; James Davis


ACM Transactions on Graphics | 2015

Simulating the Visual Experience of Very Bright and Very Dark Scenes

David E. Jacobs; Orazio Gallo; Emily A. Cooper; Kari Pulli; Marc Levoy
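The metering idea from "Metering for Exposure Stacks" above, choosing exposures from the scene's actual irradiance distribution rather than a fixed geometric ladder, can be sketched as a greedy coverage problem. This is a hypothetical simplification, not the paper's algorithm; the function, thresholds, and candidate list are illustrative assumptions:

```python
import numpy as np

def meter_exposures(irradiance, candidates, lo=0.05, hi=0.95, max_shots=5):
    """Greedy exposure selection sketch: pick exposure times from
    `candidates` so each pixel's recorded value lands in the
    well-exposed range [lo, hi] of a linear sensor in at least one
    shot, stopping once no candidate covers new pixels.
    """
    flat = np.asarray(irradiance, dtype=np.float64).ravel()
    covered = np.zeros(flat.shape, dtype=bool)
    chosen = []
    for _ in range(max_shots):
        gains = []
        for t in candidates:
            v = np.clip(flat * t, 0.0, 1.0)
            ok = (v >= lo) & (v <= hi)          # well-exposed at time t
            gains.append(np.count_nonzero(ok & ~covered))
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break                               # nothing left to cover
        tb = candidates[best]
        chosen.append(tb)
        vb = np.clip(flat * tb, 0.0, 1.0)
        covered |= (vb >= lo) & (vb <= hi)
    return chosen
```

A scene with a narrow irradiance range would stop after one or two shots, while a high-contrast scene would keep adding exposures, which is the sense in which the capture adapts to the scene instead of following a fixed progression.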
