Publication


Featured research published by E. Petsa.


Photogrammetric Engineering and Remote Sensing | 2007

Generation of Orthoimages and Perspective Views with Automatic Visibility Checking and Texture Blending

G. Karras; L. Grammatikopoulos; I. Kalisperakis; E. Petsa

Conventional orthorectification software cannot handle surface occlusions and image visibility. The approach presented here synthesizes related work in photogrammetry and computer graphics/vision to automatically produce orthographic and perspective views based on fully 3D surface data (supplied by laser scanning). Surface occlusions in the direction of projection are detected to create the depth map of the new image. This information allows identifying, by visibility checking through back-projection of surface triangles, all source images which are entitled to contribute color to each pixel of the novel image. Weighted texture blending allows regulating the local radiometric contribution of each source image involved, while outlying color values are automatically discarded with a basic statistical test. Experimental results from a close-range project indicate that this fusion of laser scanning with multiview photogrammetry could indeed combine geometric accuracy with high visual quality and speed. A discussion of intended improvements of the algorithm is also included.
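
The weighted blending and outlier rejection step lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation: it assumes the visibility check has already gathered, for one orthoimage pixel, the candidate colours from all contributing source images together with per-image weights, and the rejection threshold k is an arbitrary choice made only for the example.

```python
# Hedged sketch of weighted texture blending with a basic statistical outlier test.
# Assumes candidate colours for one orthoimage pixel were already collected from the
# source images that see it, together with per-image blending weights.
import numpy as np

def blend_pixel(colors, weights, k=1.0):
    """Blend candidate colours after discarding statistical outliers.

    colors  : (N, 3) RGB samples from the N source images that see the pixel
    weights : (N,) blending weights for those images
    k       : samples farther than k standard deviations from the mean are discarded
              (the value of k is an assumption for this sketch)
    """
    colors = np.asarray(colors, dtype=float)
    weights = np.asarray(weights, dtype=float)

    # Basic statistical test: reject colours far from the per-channel mean.
    mean = colors.mean(axis=0)
    std = colors.std(axis=0) + 1e-6
    keep = np.all(np.abs(colors - mean) <= k * std, axis=1)
    if not keep.any():              # fall back to all samples if everything was rejected
        keep[:] = True

    # Weighted blending of the remaining samples.
    w = weights[keep]
    return (w[:, None] * colors[keep]).sum(axis=0) / w.sum()

# Example: four source images contribute; the last one carries an occlusion artefact.
print(blend_pixel([[120, 80, 60], [118, 82, 58], [122, 79, 61], [30, 200, 10]],
                  [0.4, 0.3, 0.2, 0.1]))
```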


Videometrics, Range Imaging, and Applications XIII | 2015

Stereo matching based on census transformation of image gradients

C. Stentoumis; L. Grammatikopoulos; I. Kalisperakis; G. Karras; E. Petsa

Although multiple-view matching provides certain significant advantages regarding accuracy, occlusion handling and radiometric fidelity, stereo matching remains indispensable for a variety of applications; these involve cases in which image acquisition requires fixed geometry, a limited number of images, or speed. Such instances include robotics, autonomous navigation, reconstruction from a limited number of aerial/satellite images, industrial inspection and augmented reality through smart-phones. As a consequence, stereo matching is a continuously evolving research field with a growing variety of applicable scenarios. In this work a novel multi-purpose cost for stereo matching is proposed, based on the census transformation of image gradients and evaluated within a local matching scheme. It is demonstrated that, when the census transformation is applied to gradients, the invariance of the cost function to (non-linear) changes in illumination is significantly strengthened. The calculated cost values are aggregated through adaptive support regions, based both on cross-skeletons and basic rectangular windows. The parameters of the matching algorithm are tuned in each case. The described matching cost has been evaluated on the Middlebury 2006 stereo-vision datasets, which include changes in illumination and exposure. The tests verify that the census transformation of image gradients indeed results in a more robust cost function, regardless of the aggregation strategy.
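
The core cost can be sketched in a few lines. The following is a hedged illustration of the general idea, not the paper's code: the census transform is applied to simple x/y image gradients and the per-pixel cost is the Hamming distance between the resulting bit vectors; the window size, the gradient operator and the crude integer-disparity shift are assumptions made only for this example.

```python
# Hedged sketch: census transform applied to image gradients, with a Hamming-distance
# matching cost. Window size and the simple x/y gradient are assumptions.
import numpy as np

def census(img, radius=2):
    """Census transform: for each pixel, a bit vector encoding whether each neighbour
    in a (2r+1)x(2r+1) window is brighter than the centre pixel."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    bits = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[radius + dy: radius + dy + h, radius + dx: radius + dx + w]
            bits.append(shifted > img)
    return np.stack(bits, axis=-1)          # (h, w, n_bits) boolean descriptor

def census_on_gradients(img):
    """Apply the census transform to the x- and y-gradient images instead of the
    intensities, strengthening invariance to non-linear illumination changes."""
    gx = np.gradient(img.astype(float), axis=1)
    gy = np.gradient(img.astype(float), axis=0)
    return np.concatenate([census(gx), census(gy)], axis=-1)

def matching_cost(left, right, disparity):
    """Per-pixel Hamming distance between census-of-gradient descriptors of the left
    image and the right image shifted by an integer disparity."""
    cl = census_on_gradients(left)
    cr = census_on_gradients(right)
    cr_shifted = np.roll(cr, disparity, axis=1)   # crude shift, for illustration only
    return (cl != cr_shifted).sum(axis=-1)        # raw cost before aggregation
```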


Videometrics, Range Imaging, and Applications XIII | 2015

3D city models completion by fusing lidar and image data

L. Grammatikopoulos; I. Kalisperakis; E. Petsa; C. Stentoumis

A fundamental step in the generation of visually detailed 3D city models is the acquisition of high fidelity 3D data. Typical approaches employ DSM representations usually derived from Lidar (Light Detection and Ranging) airborne scanning or image based procedures. In this contribution, we focus on the fusion of data from both these methods in order to enhance or complete them. Particularly, we combine an existing Lidar and orthomosaic dataset (used as reference) with a new aerial image acquisition (including both vertical and oblique imagery) of higher resolution, which was carried out in the area of Kallithea, in Athens, Greece. In a preliminary step, a digital orthophoto and a DSM are generated from the aerial images in an arbitrary reference system, by employing a Structure from Motion and dense stereo matching framework. The image-to-Lidar registration is performed by 2D feature (SIFT and SURF) extraction and matching between the two orthophotos. The established point correspondences are assigned 3D coordinates through interpolation on the reference Lidar surface, are then backprojected onto the aerial images, and finally matched with 2D image features located in the vicinity of the backprojected 3D points. Consequently, these points serve as Ground Control Points with appropriate weights for the final orientation and calibration of the images through a bundle adjustment solution. By these means, the aerial imagery, now optimally aligned to the reference dataset, can be used for the generation of an enhanced and more accurately textured 3D city model.
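
The registration idea, matching 2D features between the two orthophotos and assigning Lidar heights to the matches, can be sketched as follows. This is an illustrative outline, not the authors' pipeline: the file names, the ground sampling distance and the DSM origin are placeholders, and OpenCV SIFT with a ratio test stands in for the SIFT/SURF matching mentioned above.

```python
# Hedged sketch of orthophoto-to-orthomosaic feature matching followed by height
# look-up on the reference Lidar DSM. File names, GSD and origin are placeholders.
import cv2
import numpy as np
from scipy.ndimage import map_coordinates

ortho_new = cv2.imread("ortho_new.tif", cv2.IMREAD_GRAYSCALE)   # new image-based orthophoto
ortho_ref = cv2.imread("ortho_ref.tif", cv2.IMREAD_GRAYSCALE)   # reference Lidar orthomosaic
dsm_ref = np.load("dsm_ref.npy")                                # reference DSM on the same grid

# SIFT features and ratio-test matching between the two orthophotos.
sift = cv2.SIFT_create()
kp_new, des_new = sift.detectAndCompute(ortho_new, None)
kp_ref, des_ref = sift.detectAndCompute(ortho_ref, None)
matches = cv2.BFMatcher().knnMatch(des_new, des_ref, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Pixel coordinates of the accepted matches on the reference orthomosaic.
pts_ref = np.float32([kp_ref[m.trainIdx].pt for m in good])     # (N, 2) as (x, y)

# Bilinear interpolation of heights on the Lidar DSM: these become candidate
# Ground Control Points for the subsequent bundle adjustment.
heights = map_coordinates(dsm_ref, [pts_ref[:, 1], pts_ref[:, 0]], order=1)

# Pixel-to-ground conversion with an assumed georeferencing (placeholder values).
gsd, x0, y0 = 0.25, 0.0, 0.0
gcp_xyz = np.column_stack([x0 + pts_ref[:, 0] * gsd,
                           y0 - pts_ref[:, 1] * gsd,
                           heights])
```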


Archive | 2001

Photo-Textured Rendering of Developable Surfaces in Architectural Photogrammetry

G. Karras; E. Petsa; Amalia Dimarogona; Stefanos Kouroupis

New techniques constantly seek to meet a growing demand for realistic photo-textured 3D models in architecture and archaeology. When only visualisation is required, the emphasis lies on visual quality rather than accuracy. But in architectural photogrammetry the primary requirement is mostly to produce accurate mappings under strict specifications. Hence, accuracy comes first; yet visual products can benefit from precise modeling. In this context, a treatment of cylindrical and conic surfaces is presented. The process for developing them onto the plane (the basic way to represent such surfaces) is outlined and illustrated by the example of two large ancient towers, in the framework of projects prepared for the Greek Ministry of Culture. But besides their metric utility, these products represent ideal photo-textures for draping the mathematical surface to generate virtual reality effects. The results of such successful photorealistic visualisations are presented and discussed.
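
The development of a cylindrical surface onto the plane reduces to unrolling the circumference into an arc-length coordinate while keeping the height coordinate. A minimal sketch of this mapping is given below; it is not the project software, and the choice of the Z axis as cylinder axis, the radius and the sample points are assumptions made for the illustration.

```python
# Hedged sketch: developing points on a right circular cylinder onto the plane.
# Cylinder axis (Z), radius and sample points are assumptions for the illustration.
import numpy as np

def develop_cylinder(points, radius):
    """Map 3D points lying (approximately) on a cylinder of the given radius around
    the Z axis to plane coordinates (arc length along the circumference, height)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(y, x)          # angular position around the axis
    u = radius * np.unwrap(theta)     # arc length: the "unrolling" of the circumference
    v = z                             # height is preserved by the development
    return np.column_stack([u, v])

# Example: a helix on a cylinder of radius 2 develops onto a straight line.
t = np.linspace(0, np.pi, 50)
pts = np.column_stack([2 * np.cos(t), 2 * np.sin(t), 3 * t])
print(develop_cylinder(pts, radius=2.0)[:3])
```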


ISPRS Journal of Photogrammetry and Remote Sensing | 2007

An automatic approach for camera calibration from vanishing points

L. Grammatikopoulos; G. Karras; E. Petsa


Archive | 2004

Camera Calibration Combining Images with Two Vanishing Points

L. Grammatikopoulos; G. Karras; E. Petsa
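
For context, the constraint commonly exploited in such calibration approaches can be illustrated briefly. Assuming square pixels, zero skew and a known principal point p, the vanishing points v1 and v2 of two orthogonal space directions satisfy (v1 - p) · (v2 - p) + f² = 0, which yields the focal length f. The sketch below is a general illustration of this relation, not necessarily the exact formulation used in the paper; the numeric vanishing points are synthetic.

```python
# Hedged illustration of the two-vanishing-point constraint on the focal length
# (general textbook relation, not necessarily this paper's exact formulation).
import numpy as np

def focal_from_two_vanishing_points(v1, v2, principal_point):
    """Estimate the focal length (in pixels) from two vanishing points of orthogonal
    directions, assuming square pixels, zero skew and a known principal point."""
    d1 = np.asarray(v1, dtype=float) - np.asarray(principal_point, dtype=float)
    d2 = np.asarray(v2, dtype=float) - np.asarray(principal_point, dtype=float)
    f_squared = -np.dot(d1, d2)       # (v1 - p) . (v2 - p) + f^2 = 0
    if f_squared <= 0:
        raise ValueError("configuration inconsistent with orthogonal directions")
    return float(np.sqrt(f_squared))

# Example with synthetic vanishing points and the principal point at the image centre.
print(focal_from_two_vanishing_points((2400, 760), (-350, 820), (960, 800)))
```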


Archive | 2004

On Automatic Orthoprojection and Texture-Mapping of 3D Surface Models

L. Grammatikopoulos; I. Kalisperakis; G. Karras; T. Kokkinos; E. Petsa


Archive | 2002

Geometric Information from Single Uncalibrated Images of Roads

L. Grammatikopoulos; G. Karras; E. Petsa


Archive | 1997

Raster Projection and Development of Curved Surfaces

G. Karras; Petros Patias; E. Petsa; Kostas Ketipis


Archive | 2003

Camera Calibration Approaches Using Single Images of Man-Made Objects

L. Grammatikopoulos; G. Karras; E. Petsa

Collaboration


Dive into E. Petsa's collaborations.

Top Co-Authors

G. Karras (National Technical University of Athens)
I. Kalisperakis (Technological Educational Institute of Athens)
L. Grammatikopoulos (Technological Educational Institute of Athens)
C. Stentoumis (National Technical University of Athens)
D. Mavromati (National Technical University of Athens)
Petros Patias (Aristotle University of Thessaloniki)
T. Kokkinos (National Technical University of Athens)
A. Prokos (National Technical University of Athens)
A. Psalta (National Technical University of Athens)
A. Tranou (National Technical University of Athens)