Gianpaolo Palma
Istituto di Scienza e Tecnologie dell'Informazione
Publications
Featured research published by Gianpaolo Palma.
Computer Graphics Forum | 2012
Gianpaolo Palma; Marco Callieri; Matteo Dellepiane; Roberto Scopigno
We present a statistical method for the estimation of the Spatially Varying Bidirectional Reflectance Distribution Function (SVBRDF) of an object with complex geometry, starting from video sequences acquired under fixed but general lighting conditions. The aim of this work is to define a method that simplifies the acquisition of the object's surface appearance and allows the reconstruction of an approximate SVBRDF. The final output is suitable for use with a 3D model of the object to obtain accurate and photo-realistic renderings. The method is composed of three steps: the approximation of the environment map of the acquisition scene, using the object itself as a probe; the estimation of the diffuse color of the object; and the estimation of the specular components of the object's main materials, using a Phong model. All steps are based on statistical analysis of the color samples projected by the video sequences onto the surface of the object. Although the method has some limitations, the trade-off between ease of acquisition and quality of results makes it useful for practical applications.
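The Phong model named in the abstract separates appearance into a diffuse term and a shininess-controlled specular lobe. As a minimal illustrative sketch (the function names and parameter values are my own, not from the paper), evaluating it per surface point looks like:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def phong(n, l, v, kd, ks, alpha):
    """Per-channel radiance under one directional light: diffuse term
    kd * max(n.l, 0) plus specular term ks * max(r.v, 0)^alpha, where r
    is the mirror reflection of the light direction about the normal."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    ndotl = max(dot(n, l), 0.0)
    # mirror-reflect the light direction about the surface normal
    r = tuple(2.0 * ndotl * nc - lc for nc, lc in zip(n, l))
    # no specular leakage when the light is behind the surface
    spec = max(dot(r, v), 0.0) ** alpha if ndotl > 0.0 else 0.0
    return tuple(kd_c * ndotl + ks * spec for kd_c in kd)
```

Fitting the SVBRDF then amounts to estimating `kd` per point and `ks`, `alpha` per material from the projected color samples.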
digital heritage international congress | 2013
Gianpaolo Palma; Nicola Desogus; Paolo Cignoni; Roberto Scopigno
This paper presents an algorithm for the estimation of the Surface Light Field using video sequences acquired by moving the camera around the object. Unlike other state-of-the-art methods, it does not require a uniform sampling density of the view directions, but is able to build an approximation of the Surface Light Field starting from a biased video acquisition: dense along the camera path and completely missing in the other directions. The main idea is to separate the estimation of two components: the diffuse color, computed using statistical operations that allow the estimation of a rough approximation of the direction of the main light sources in the acquisition environment; and the residual Surface Light Field effects, modeled as a linear combination of spherical functions. From qualitative and numerical evaluations, the final rendering results show high fidelity to the input video frames, without ringing and banding effects.
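The decomposition above, diffuse color plus a view-dependent residual expanded in spherical basis functions, can be sketched with a first-order real spherical-harmonics basis (a common choice; the paper's exact basis and coefficients are not reproduced here, and the function names are illustrative):

```python
def sh_basis(d):
    """First-order real spherical harmonics evaluated at a unit view
    direction d = (x, y, z): constant band plus three linear bands."""
    x, y, z = d
    return (0.282095,          # Y_0^0
            0.488603 * y,      # Y_1^{-1}
            0.488603 * z,      # Y_1^{0}
            0.488603 * x)      # Y_1^{1}

def slf_color(diffuse, coeffs, view_dir):
    """Surface Light Field sample: per-channel diffuse color plus a
    residual given as a linear combination of the basis functions.
    coeffs holds one 4-tuple of coefficients per color channel."""
    basis = sh_basis(view_dir)
    residual = (sum(c[k] * basis[k] for k in range(4)) for c in coeffs)
    return tuple(d + r for d, r in zip(diffuse, residual))
```

With all residual coefficients at zero the model collapses to the pure diffuse color, which is why the two components can be estimated separately.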
Computer Graphics Forum | 2016
Gianpaolo Palma; Paolo Cignoni; Tamy Boubekeur; Roberto Scopigno
Detecting geometric changes between two 3D captures of the same location performed at different moments is a critical operation for all systems requiring a precise segmentation between change and no‐change regions. Such application scenarios include 3D surface reconstruction, environment monitoring, natural events management and forensic science. Unfortunately, typical 3D scanning setups cannot provide any one‐to‐one mapping between measured samples in static regions: in particular, both extrinsic and intrinsic sensor parameters may vary over time, while sensor noise and outliers additionally corrupt the data. In this paper, we adopt a multi‐scale approach to robustly tackle these issues. Starting from two point clouds, we first remove outliers using a probabilistic operator. Then, we detect the actual change using the implicit surface defined by the point clouds under a Growing Least Squares reconstruction that, compared to the classical proximity measure, offers a more robust change/no‐change characterization near the temporal intersection of the scans and in areas exhibiting different sampling density and direction. The resulting classification is enhanced with a spatial reasoning step to solve critical geometric configurations that are common in man‐made environments. We validate our approach on a synthetic test case and on a collection of real data sets acquired using commodity hardware. Finally, we show how 3D reconstruction benefits from the resulting precise change/no‐change segmentation.
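For context, the "classical proximity measure" that the paper improves upon simply thresholds the nearest-neighbor distance between the two epochs. A brute-force sketch of that baseline (not the paper's Growing Least Squares test; names and the threshold are illustrative):

```python
import math

def nearest_dist(p, cloud):
    """Distance from point p to its nearest neighbor in cloud
    (brute force; a k-d tree would be used at scale)."""
    return min(math.dist(p, q) for q in cloud)

def classify_change(cloud_a, cloud_b, tau):
    """Classical proximity test: a point of cloud_a is 'change' when
    its nearest neighbor in cloud_b lies farther than tau. The paper
    replaces this with an implicit-surface distance under a Growing
    Least Squares reconstruction, which behaves better where the two
    scans differ in sampling density and direction."""
    return ['change' if nearest_dist(p, cloud_b) > tau else 'no-change'
            for p in cloud_a]
```

The baseline's weakness is visible immediately: a sparse but unchanged region of one scan can exceed `tau` purely because of sampling, which is exactly the failure mode the implicit-surface test addresses.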
visual analytics science and technology | 2012
Jaime Kaminski; Karina Rodriguez Echavarria; David B. Arnold; Gianpaolo Palma; Roberto Scopigno; Marc Proesmans; James Stevenson
This paper presents three different propositions for cultural heritage organisations on how to digitise objects in 3D. It is based on the practical evaluation of three deployment experiments that use different methods and business models for mass 3D acquisition. These models are: developing the skills of in-house staff within an organisation, the use of external professionals, and the use of crowdsourcing as a mechanism for developing the 3D collection. Furthermore, the paper provides an analysis of these models, lessons learned and practical recommendations for cultural heritage organisations. The analysis includes considerations of issues such as strategy, size of the organisation, skills, equipment, object accessibility and complexity, as well as the cost, time and quality of the 3D technology. The paper concludes that most organisations are able to develop 3D collections, but that variations in the result will reflect the strategic emphasis they place on innovative 3D technologies.
vision modeling and visualization | 2010
Gianpaolo Palma; Marco Callieri; Matteo Dellepiane; Massimiliano Corsini; Roberto Scopigno
We present a new method for the accurate registration of video sequences of a real object over its dense triangular mesh. The goal is to obtain an accurate video-to-geometry registration that allows bidirectional data transfer between the 3D model and the video using the perspective projection defined by the camera model. Our solution uses two different approaches: feature-based registration by KLT video tracking, and statistics-based registration by maximizing the Mutual Information (MI) between the gradient of the frame and the gradient of a rendering of the 3D model with illumination-related properties, such as surface normals and ambient occlusion. While the first approach allows fast registration of short sequences with simple camera movements, MI is used to correct the drift that the KLT tracker accumulates over long sequences, due to the incremental tracking and the camera motion. We demonstrate, using synthetic sequences, that the alignment error obtained with our method is smaller than the one introduced by KLT, and we show results on several interesting and challenging real sequences of objects of different sizes, acquired under different conditions.
eurographics, italian chapter conference | 2010
Gianpaolo Palma; Massimiliano Corsini; Matteo Dellepiane; Roberto Scopigno
In this paper we propose an extension of Mutual Information (MI) based image-to-geometry registration algorithms that improves both the performance and the quality of the alignment. Originally proposed for the registration of multimodal medical images, MI has in recent years been adapted to align a 3D model to a given image by comparing different renderings of the model with a gray-scale version of the input image. A key aspect is the choice of a rendering process that correlates the 3D model with the image without relying on texture data or lighting conditions. Even though several rendering types for the 3D model have been analyzed, in some cases the alignment fails for two main reasons: the peculiar reflection behavior of the object, which we cannot reproduce in the rendering of the 3D model without knowing the material characteristics of the object and the lighting conditions of the acquisition environment; and the characteristics of the image background, especially a non-uniform background, which can degrade the convergence of the registration. To improve the quality of the registration in these cases, we propose to compute the MI between the gradient map of the 3D rendering and the gradient map of the image, in order to maximize the shared data between them.
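The two ingredients, a gradient map and a histogram-based MI estimate between two images, can be sketched in a few lines. This is a generic textbook formulation (forward differences, joint histogram), not the paper's implementation; function names and the bin count are illustrative:

```python
import math

def gradient_magnitude(img):
    """Forward-difference gradient magnitude of a 2D grayscale image
    given as a list of rows; the last row/column stays zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x + 1] - img[y][x]
            gy = img[y + 1][x] - img[y][x]
            out[y][x] = math.hypot(gx, gy)
    return out

def mutual_information(a, b, bins=8):
    """Histogram estimate of MI (in nats) between two equally sized
    images whose values lie in [0, 1)."""
    joint = [[0] * bins for _ in range(bins)]
    n = 0
    for row_a, row_b in zip(a, b):
        for va, vb in zip(row_a, row_b):
            ia = min(int(va * bins), bins - 1)
            ib = min(int(vb * bins), bins - 1)
            joint[ia][ib] += 1
            n += 1
    pa = [sum(joint[i]) / n for i in range(bins)]
    pb = [sum(joint[i][j] for i in range(bins)) / n for j in range(bins)]
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            pij = joint[i][j] / n
            if pij > 0:
                mi += pij * math.log(pij / (pa[i] * pb[j]))
    return mi
```

The registration loop would then maximize `mutual_information(gradient_magnitude(frame), gradient_magnitude(rendering))` over the camera parameters, so that shared edge structure rather than raw intensity drives the alignment.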
Computer Graphics Forum | 2018
Gianpaolo Palma; Manuele Sabbadin; Massimiliano Corsini; Paolo Cignoni
The wide availability of 3D acquisition devices makes their use for shape monitoring viable. Current techniques for the analysis of time‐varying data can efficiently detect actually significant geometric changes and rule out differences due to irrelevant variations (such as sampling, lighting and coverage). On the other hand, the effective visualization of such detected changes can be challenging when we want to show the original appearance of the 3D model at the same time. In this paper, we propose a dynamic technique for the effective visualization of detected differences between two 3D scenes. The presented approach, while retaining the original appearance, allows the user to switch between the two models in a way that enhances the geometric differences detected as significant, while visually hiding the other negligible, yet visible, variations. The main idea is to use two distinct screen‐space, time‐based interpolation functions: one for the significant 3D differences and one for the small variations to hide. We validated the proposed approach in a user study on different classes of datasets, proving the objective and subjective effectiveness of the method.
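The core idea of two distinct time-based interpolation functions can be illustrated with a pair of blend-weight curves: a sharp, step-like transition makes a significant difference "pop" during the cross-fade, while a slow linear fade keeps negligible variations below the viewer's attention. The specific curves below are my own illustration, not the exact functions of the paper:

```python
def blend_weight(t, significant):
    """Blend factor between the two epochs at normalized time t in
    [0, 1]. Significant differences switch abruptly around the
    midpoint (smoothstep over a narrow window); negligible variations
    cross-fade linearly so they are not perceived as changes."""
    if significant:
        # confine the transition to t in [0.4, 0.6]
        s = max(0.0, min(1.0, (t - 0.4) / 0.2))
        return s * s * (3.0 - 2.0 * s)  # smoothstep
    return t  # slow uniform cross-fade hides small variations
```

A renderer would blend each pixel between the two captures using the weight selected by that pixel's change/no-change label.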
eurographics | 2016
Manuele Sabbadin; Gianpaolo Palma; Paolo Cignoni; Roberto Scopigno
A correct understanding of the 3D shape is crucial to improving the 3D scanning process, especially for performing high-quality and as-complete-as-possible 3D acquisitions in the field. The paper proposes a new technique to enhance the visualization of raw scanning data, based on the definition in device space of a Multi-View Ambient Occlusion (MVAO). The approach improves the comprehension of the 3D shape of the input geometry and, since it requires almost no preprocessing, can be applied directly to raw captured point clouds. The algorithm has been tested on different datasets: high-resolution Time-of-Flight scans and streams of low-quality range maps from a depth camera. The results show enhanced perception of detail in the 3D geometry, with the multi-view information making the ambient occlusion estimation more robust.
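Combining per-view occlusion estimates into one robust value can be as simple as a confidence-weighted mean over the views that see a point; the sketch below is a generic illustration of that aggregation step (names and the weighting scheme are assumptions, not the paper's exact formulation):

```python
def multi_view_ao(per_view_occlusion, per_view_weight=None):
    """Fuse screen-space ambient-occlusion estimates of the same
    surface point taken from several views into one value via a
    weighted mean. An optional per-view confidence weight (e.g. how
    frontally the view sees the point) down-weights grazing views."""
    if per_view_weight is None:
        per_view_weight = [1.0] * len(per_view_occlusion)
    total = sum(per_view_weight)
    return sum(o * w for o, w in
               zip(per_view_occlusion, per_view_weight)) / total
```

Averaging over views is what makes the estimate robust: a single view's missing data or depth noise is diluted by the agreeing estimates from the other viewpoints.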
CAA 2012 | 2011
Gianpaolo Palma; Roberto Scopigno; Eliana Siotto; Sabrina Batino; Monica Baldassarri; Marc Proesmans
2015 Digital Heritage | 2015
Eliana Siotto; Gianpaolo Palma; Marco Potenziani; Roberto Scopigno