Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Charalambos Poullis is active.

Publication


Featured research published by Charalambos Poullis.


Computer Vision and Pattern Recognition | 2009

Automatic reconstruction of cities from remote sensor data

Charalambos Poullis; Suya You

In this paper, we address the complex problem of rapid modeling of large-scale areas and present a novel approach for the automatic reconstruction of cities from remote sensor data. The goal in this work is to automatically create lightweight, watertight polygonal 3D models from LiDAR (Light Detection and Ranging) data captured by an airborne scanner. This is achieved in three steps: preprocessing, segmentation, and modeling, as shown in Figure 1. Our main technical contributions in this paper are: (i) a novel, robust, automatic segmentation technique based on the statistical analysis of the geometric properties of the data, which makes no particular assumptions about the input data and therefore has no data dependencies, and (ii) an efficient, automatic modeling pipeline for the reconstruction of large-scale areas containing several thousand buildings. We have extensively tested the proposed approach with several city-size datasets, including downtown Baltimore, downtown Denver, the city of Atlanta, and downtown Oakland, and we present and evaluate the experimental results.
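The segmentation step is described only at a high level in the abstract; as a hedged illustration (not the paper's algorithm), the Python sketch below estimates per-point normals of a LiDAR-style point cloud with PCA over the k nearest neighbors and flags near-horizontal points as roof candidates. The neighborhood size, tilt threshold, and synthetic test cloud are all assumptions made for the example.

# Illustrative sketch only: estimate local surface normals of a point cloud with
# PCA over the k nearest neighbors, then flag points whose normal is close to
# vertical as candidate roof points. Parameters and test data are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def local_normals(points, k=16):
    """Estimate a unit normal per point from the PCA of its k-neighborhood."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        centered = points[nbrs] - points[nbrs].mean(axis=0)
        # The right-singular vector of the smallest singular value is the normal.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normals[i] = vt[-1]
    return normals

def roof_candidates(points, max_tilt_deg=20.0, k=16):
    """Flag points whose local normal is within max_tilt_deg of the vertical."""
    normals = local_normals(points, k)
    cos_tilt = np.abs(normals[:, 2])  # |n . z|
    return cos_tilt >= np.cos(np.radians(max_tilt_deg))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic toy cloud: a flat "roof" patch at z = 5 m plus random clutter.
    roof = np.c_[rng.uniform(0, 10, (500, 2)), 5.0 + rng.normal(0, 0.02, 500)]
    clutter = rng.uniform(0, 10, (200, 3))
    points = np.vstack([roof, clutter])
    mask = roof_candidates(points)
    print(f"{mask.sum()} of {len(points)} points flagged as roof-like")

A full pipeline would follow such a flagging step with per-building clustering and plane fitting, which the sketch does not attempt.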


IEEE Transactions on Visualization and Computer Graphics | 2009

Photorealistic Large-Scale Urban City Model Reconstruction

Charalambos Poullis; Suya You

The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of large-scale environments is therefore imperative for the success of such applications, since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, creating such large-scale virtual environments remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures which, unlike existing techniques, can recover missing or occluded texture information by integrating information captured from multiple optical sensors (ground, aerial, and satellite).
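The texture composition described above integrates imagery from several sensors; the sketch below is only a minimal stand-in for that idea, assuming the source textures are already co-registered to a common atlas and that each comes with a hypothetical per-pixel confidence/visibility weight. Pixels occluded in one source (weight zero) are then filled from the others by a weighted blend.

# Minimal stand-in for multi-sensor texture composition. Assumes textures are
# already registered to a common atlas; weights are hypothetical per-pixel
# visibility/confidence maps (0 where a source is occluded).
import numpy as np

def blend_textures(textures, weights, eps=1e-8):
    """Per-pixel weighted blend of co-registered H x W x 3 textures."""
    textures = np.asarray(textures, dtype=float)            # (N, H, W, 3)
    weights = np.asarray(weights, dtype=float)[..., None]   # (N, H, W, 1)
    return (textures * weights).sum(axis=0) / (weights.sum(axis=0) + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    aerial = rng.random((4, 4, 3))
    ground = rng.random((4, 4, 3))
    w_aerial = np.ones((4, 4))
    w_aerial[:2, :2] = 0.0          # aerial view occluded in the top-left corner
    w_ground = np.full((4, 4), 0.5)
    blended = blend_textures([aerial, ground], [w_aerial, w_ground])
    print(blended.shape)            # (4, 4, 3); occluded pixels come from 'ground'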


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

A Framework for Automatic Modeling from Point Cloud Data

Charalambos Poullis

We propose a complete framework for automatic modeling from point cloud data. Initially, the point cloud data are preprocessed into manageable datasets, which are then separated into clusters using a novel two-step, unsupervised clustering algorithm. The boundaries extracted for each cluster are then simplified and refined using a fast energy minimization process. Finally, three-dimensional models are generated based on the roof outlines. The proposed framework has been extensively tested, and the results are reported.
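As a rough illustration of the final step only (generating 3D models from roof outlines), and not of the clustering or energy-minimization stages, the sketch below extrudes an already simplified 2D roof outline into a watertight prism. The footprint coordinates and heights are made-up example values.

# Rough illustration of the final modeling step only: extrude a simplified 2D
# roof outline into a watertight prism. Footprint and heights are example values.
import numpy as np

def extrude_outline(outline_xy, z_ground, z_roof):
    """Return (vertices, faces) of a prism: bottom ring, top ring, side quads."""
    outline_xy = np.asarray(outline_xy, dtype=float)
    n = len(outline_xy)
    bottom = np.c_[outline_xy, np.full(n, z_ground)]
    top = np.c_[outline_xy, np.full(n, z_roof)]
    vertices = np.vstack([bottom, top])
    faces = []
    for i in range(n):                               # one wall quad per outline edge
        j = (i + 1) % n
        faces.append([i, j, n + j, n + i])
    faces.append(list(range(n)))                     # ground cap
    faces.append(list(range(2 * n - 1, n - 1, -1)))  # roof cap, reversed winding
    return vertices, faces

if __name__ == "__main__":
    footprint = [(0, 0), (12, 0), (12, 8), (0, 8)]   # hypothetical outline in meters
    vertices, faces = extrude_outline(footprint, z_ground=0.0, z_roof=9.5)
    print(f"{len(vertices)} vertices, {len(faces)} faces")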


International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission | 2011

3D Reconstruction of Urban Areas

Charalambos Poullis; Suya You

Virtual representations of real-world areas are increasingly being employed in a variety of applications such as urban planning, personnel training, and simulations. Despite the increasing demand for such realistic 3D representations, creating them remains a hard and often manual process. In this paper, we address the problem of creating photorealistic 3D scene models for large-scale areas and present a complete system. The proposed system comprises two main components: (1) a reconstruction pipeline which employs a fully automatic technique for extracting and producing high-fidelity geometric models directly from Light Detection and Ranging (LiDAR) data, and (2) a flexible texture blending technique for generating high-quality photorealistic textures by fusing information from multiple optical sensor resources. The result is a photorealistic 3D representation of large-scale (city-size) areas of the real world. We have tested the proposed system extensively with many city-size datasets, which confirms the validity and robustness of the approach. The reported results verify that the system is a consistent workflow that allows non-experts and non-artists to rapidly fuse aerial LiDAR and imagery to construct photorealistic 3D scene models.


IEEE Virtual Reality Conference | 2008

Rapid Creation of Large-scale Photorealistic Virtual Environments

Charalambos Poullis; Suya You; Ulrich Neumann

The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of large-scale environments is therefore imperative for the success of such applications, since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, creating such large-scale virtual environments remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel parameterized geometric primitive is presented for the automatic detection, identification, and reconstruction of building structures. In addition, buildings with complex roofs containing nonlinear surfaces are reconstructed interactively using a nonlinear primitive. Second, we present a rendering pipeline for the composition of photorealistic textures which, unlike existing techniques, can recover missing or occluded texture information by integrating information captured from multiple optical sensors (ground, aerial, and satellite).


International Conference on Advanced Learning Technologies | 2015

Effectiveness of an Immersive Virtual Environment (CAVE) for Teaching Pedestrian Crossing to Children with PDD-NOS

Aimilia Tzanavari; Nefi Charalambous-Darden; Kyriakos Herakleous; Charalambos Poullis

Children with Autism Spectrum Disorders (ASD) exhibit a range of developmental disabilities, with mild to severe effects on social interaction and communication. Children with PDD-NOS, Autism, and co-existing conditions face enormous challenges in their lives, dealing with difficulties in sensory perception and with repetitive behaviors and interests. These challenges result in them being less independent or not independent at all. Part of becoming independent involves being able to function in real-world settings, settings that are not controlled. Pedestrian crossings fall under this category: as children (and later as adults) they have to learn to cross roads safely. In this paper, we report on a study we carried out with 6 children with PDD-NOS over a period of four (4) days, using a VR CAVE virtual environment to teach them how to safely cross at a pedestrian crossing. Results indicated that most children were able to achieve the desired goal of learning the task, which was verified at the end of the 4-day period by having them cross a real pedestrian crossing (albeit with their parent/educator discreetly next to them for safety reasons).


ACM Journal on Computing and Cultural Heritage | 2015

Visualizing and Assessing Hypotheses for Marine Archaeology in a VR CAVE Environment

Irene Katsouri; Aimilia Tzanavari; Kyriakos Herakleous; Charalambos Poullis

Understanding and reconstructing a wreck's formation process can be a complicated procedure that needs to take into account many interrelated components. The team at the University of Cyprus investigating the 4th-century BC Mazotos shipwreck is unable to interact easily and intuitively with the recorded data, a fact that impedes visualization and reconstruction and subsequently delays the evaluation of their hypotheses. An immersive 3D visualization application that utilizes a VR CAVE was developed, with the intent to enable researchers to mine the wealth of information this ancient shipwreck has to offer. Through the implementation and evaluation of the proposed application, this research seeks to investigate whether such an environment can aid the interpretation and analysis process and ultimately serve as an additional scientific tool for underwater archaeology.


IEEE Virtual Reality Conference | 2009

Automatic Creation of Massive Virtual Cities

Charalambos Poullis; Suya You

This research effort focuses on the historically difficult problem of creating large-scale (city-size) scene models from sensor data, including the rapid extraction and modeling of geometry. The solution to this problem is sought in the development of a novel modeling system with a fully automatic technique for the extraction of polygonal 3D models from LiDAR (Light Detection And Ranging) data. The result is an accurate 3D model representation of the real world, as shown in Figure 1. We present and evaluate experimental results of our approach for the automatic reconstruction of large U.S. cities.


Workshop on Applications of Computer Vision | 2008

A Vision-Based System for Automatic Detection and Extraction of Road Networks

Charalambos Poullis; Suya You; Ulrich Neumann

In this paper we present a novel vision-based system for the automatic detection and extraction of complex road networks from various sensor resources such as aerial photographs, satellite images, and LiDAR. Uniquely, the proposed system is an integrated solution that merges the power of perceptual grouping theory (Gabor filtering, tensor voting) and optimized segmentation techniques (global optimization using graph cuts) into a unified framework to address the challenging problems of geospatial feature detection and classification. First, the local precision of the Gabor filters is combined with the global context of the tensor voting to produce accurate classification of the geospatial features. In addition, the tensorial representation used for the encoding of the data eliminates the need for any thresholds, therefore removing any data dependencies. Second, a novel orientation-based segmentation is presented which incorporates the classification of the perceptual grouping and results in segmentations with better-defined boundaries and continuous linear segments. Finally, a set of Gaussian-based filters is applied to automatically extract centerline information (magnitude, width, and orientation). This information is then used to create road segments and transform them into their polygonal representations.
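As a hedged sketch of just the first ingredient named here, Gabor filtering for local orientation, the code below builds a small bank of oriented Gabor kernels in plain numpy/scipy and reports the strongest-responding orientation per pixel. The kernel parameters and toy road image are illustrative assumptions, and the tensor voting, graph-cut segmentation, and centerline extraction stages are not reproduced.

# Sketch of Gabor-filter-based local orientation estimation only; tensor voting,
# graph-cut segmentation, and centerline extraction are not shown here.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, sigma=3.0, wavelength=8.0, size=21):
    """Real-valued Gabor kernel with its carrier oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def dominant_orientation(image, n_orientations=8):
    """Return the per-pixel orientation (radians) with the strongest response."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    responses = np.stack([np.abs(fftconvolve(image, gabor_kernel(t), mode="same"))
                          for t in thetas])
    return thetas[np.argmax(responses, axis=0)]

if __name__ == "__main__":
    # Toy image: a bright diagonal "road" on a dark background.
    image = np.zeros((128, 128))
    for i in range(128):
        image[i, max(0, i - 2):i + 3] = 1.0
    orientation = dominant_orientation(image)
    print("estimated orientation at a road pixel (deg):",
          round(float(np.degrees(orientation[64, 64])), 1))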


International Conference on Learning and Collaboration Technologies | 2014

User Experience Observations on Factors That Affect Performance in a Road-Crossing Training Application for Children Using the CAVE

Aimilia Tzanavari; Skevi Matsentidou; Chris G. Christou; Charalambos Poullis

Each year, thousands of pedestrians are killed in road accidents and millions are non-fatally injured. Many of these accidents involve children and occur while crossing at or between intersections. It is more difficult for children to understand, assess, and predict risky situations, especially in settings in which they have little experience, such as a city. Virtual Reality has been used to simulate situations that are too dangerous to practice in real life and has proven advantageous when used in training aimed at improving skills. This paper presents a road-crossing application that simulates a pedestrian crossing found in a city setting. Children have to evaluate all given pieces of information (traffic lights, cars crossing, etc.) and then try to safely cross the road in a virtual environment. A VR CAVE is used to immerse children in the city scene. User experience observations were made so as to identify the factors that seem to affect children's performance. Results indicate that the application was well received as a learning tool and that gender, immersion, and traffic noise seem to affect children's performance.

Collaboration


Dive into Charalambos Poullis's collaborations.

Top Co-Authors

Suya You
University of Southern California

Kyriakos Herakleous
Cyprus University of Technology

Ulrich Neumann
University of Southern California

Chris G. Christou
Cyprus University of Technology

Skevi Matsentidou
Cyprus University of Technology

Constantinos Terlikkas
Cyprus University of Technology

Irene Katsouri
Cyprus University of Technology