Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Sandro Spina is active.

Publication


Featured research published by Sandro Spina.


international colloquium on grammatical inference | 2004

Mutually Compatible and Incompatible Merges for the Search of the Smallest Consistent DFA

John Abela; François Coste; Sandro Spina

State Merging algorithms, such as Rodney Price's EDSM (Evidence-Driven State Merging) algorithm, have been reasonably successful at solving DFA-learning problems. EDSM, however, often does not converge to the target DFA and, in the case of sparse training data, does not converge at all. In this paper we argue that this is partially due to the particular heuristic used in EDSM and also to its greedy search strategy. We then propose a new heuristic that is based on minimising the risk involved in making merges. In other words, the heuristic gives preference to merges whose evidence is supported by high compatibility with other merges. Incompatible merges can be trivially detected during the computation of the heuristic. We also propose limiting the set of candidate merges, after a backtrack, to these incompatible merges, allowing diversity to be introduced into the search.
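The evidence-scoring idea behind this family of algorithms can be illustrated with a small sketch (this is an illustration of the general EDSM-style scoring principle, not the authors' implementation; the state representation and names are invented for the example):

```python
# Toy sketch of evidence-driven merge scoring. Each DFA state is represented
# here as a dict mapping sample suffixes to accept/reject labels. A merge of
# two states accumulates evidence for every agreeing label and is rejected as
# incompatible if any label conflicts.

def merge_evidence(labels_a, labels_b):
    """Return the number of agreeing labels, or None if the merge is
    incompatible (the two states disagree on some sample)."""
    evidence = 0
    for suffix, label in labels_a.items():
        if suffix in labels_b:
            if labels_b[suffix] != label:
                return None  # incompatible merge: conflicting labels
            evidence += 1
    return evidence

# Score two candidate merges; prefer the highest evidence, discard None.
candidates = [
    ({"a": True, "b": False}, {"a": True, "c": True}),  # one agreeing label
    ({"a": True}, {"a": False}),                        # conflicting labels
]
scores = [merge_evidence(p, q) for p, q in candidates]
print(scores)  # [1, None]
```

Incompatible merges (the `None` results) fall out of the scoring pass for free, which is the property the paper exploits when restricting the candidate set after a backtrack.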


international conference on virtual reality | 2011

Point cloud segmentation for cultural heritage sites

Sandro Spina; Kurt Debattista; Keith Bugeja; Alan Chalmers

Over the past few years, the acquisition of 3D point information representing the structure of real-world objects has become common practice in many areas. This is particularly true in the Cultural Heritage (CH) domain, where point clouds reproducing important and usually unique artifacts and sites of various sizes and geometric complexities are acquired. Specialized software is then usually used to process and organise this data. This paper addresses the problem of automatically organising this raw data by segmenting point clouds into meaningful subsets. This organisation over raw data entails a reduction in complexity and facilitates the post-processing effort required to work with the individual objects in the scene. This paper describes an efficient two-stage segmentation algorithm which is able to automatically partition raw point clouds. Following an initial partitioning of the point cloud, a RANSAC-based plane fitting algorithm is used in order to add a further layer of abstraction. A number of potential uses of the newly processed point cloud are presented; one of these is object extraction using point cloud queries. Our method is demonstrated on three point clouds ranging from 600K to 1.9M points. One of these point clouds was acquired from the pre-historic temple of Mnajdra, which consists of multiple adjacent complex structures.
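The RANSAC plane-fitting stage mentioned in the abstract follows a standard pattern that can be sketched in a few lines (a generic RANSAC sketch, not the paper's code; the thresholds and iteration count are illustrative):

```python
# Minimal RANSAC plane fitting: repeatedly sample 3 points, fit the plane
# through them, and keep the plane with the most inliers within a distance
# threshold. Returns a boolean inlier mask over the point cloud.
import random
import numpy as np

def ransac_plane(points, iters=200, threshold=0.05, seed=0):
    rng = random.Random(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        i, j, k = rng.sample(range(len(points)), 3)
        normal = np.cross(points[j] - points[i], points[k] - points[i])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate sample: (near-)collinear points
        normal /= norm
        dist = np.abs((points - points[i]) @ normal)  # point-plane distances
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic test: 100 noisy points near the z=0 plane plus 10 far outliers.
gen = np.random.default_rng(1)
plane_pts = np.column_stack([gen.random(100), gen.random(100),
                             0.01 * gen.standard_normal(100)])
outliers = gen.random((10, 3)) + 2.0
mask = ransac_plane(np.vstack([plane_pts, outliers]))
print(mask[:100].sum(), mask[100:].sum())
```

Each detected plane would then be removed from the cloud and the process repeated, yielding the layered abstraction the paper uses for object extraction.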


eurographics workshop on parallel graphics and visualization | 2014

Collaborative high-fidelity rendering over peer-to-peer networks

Keith Bugeja; Kurt Debattista; Sandro Spina; Alan Chalmers

Due to the computational expense of high-fidelity graphics, parallel and distributed systems have frequently been employed to achieve faster rendering times. The form of distributed computing used, with a few exceptions such as the use of GRID computing, is limited to dedicated clusters available to medium to large organisations. Recently, a number of applications have made use of shared resources in order to alleviate the costs of computation. Peer-to-peer computing has arisen as one of the major models for off-loading costs from a centralised computational entity to benefit a number of peers participating in a common activity. This work introduces a peer-to-peer collaborative environment for improving rendering performance for a number of peers where the program state, that is the result of some computation among the participants, is shared. A peer that computes part of this state shares it with the others via a propagation mechanism based on epidemiology. In order to demonstrate this approach, the traditional Irradiance Cache algorithm is extended to account for sharing over a network within the collaborative framework introduced here. Results, which show an overall speedup with little overhead, are presented for scenes in which a number of peers navigate shared virtual environments.
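The epidemiological propagation mechanism can be pictured with a toy gossip simulation (a generic push-gossip sketch under assumed parameters, not the framework's actual protocol):

```python
# Toy push-gossip simulation: each peer holds a set of computed cache entries
# and, each round, pushes its entries to `fanout` randomly chosen peers.
# A single entry computed at one peer spreads epidemically to the rest.
import random

def gossip(num_peers, rounds, fanout=2, seed=0):
    rng = random.Random(seed)
    caches = [set() for _ in range(num_peers)]
    caches[0].add("irradiance-sample-0")  # one peer computes a new entry
    for _ in range(rounds):
        for p in range(num_peers):
            for q in rng.sample(range(num_peers), fanout):
                caches[q] |= caches[p]  # push local entries to peer q
    return sum(1 for c in caches if c)  # peers that now hold the entry

# After a handful of rounds, (almost) all peers hold the shared entry.
print(gossip(num_peers=32, rounds=10))
```

The appeal of this design is that no peer needs global knowledge: each round costs O(fanout) messages per peer, yet coverage grows roughly exponentially, which matches the "little overhead" result reported for the shared irradiance cache.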


The Visual Computer | 2018

An asynchronous method for cloud-based rendering

Keith Bugeja; Kurt Debattista; Sandro Spina

Interactive high-fidelity rendering is still unachievable on many consumer devices. Cloud gaming services have shown promise in delivering interactive graphics beyond the individual capabilities of user devices. However, a number of shortcomings are manifest in these systems: high network bandwidths are required for higher resolutions and input lag due to network fluctuations heavily disrupts user experience. In this paper, we present a scalable solution for interactive high-fidelity graphics based on a distributed rendering pipeline where direct lighting is computed on the client device and indirect lighting in the cloud. The client device keeps a local cache for indirect lighting which is asynchronously updated using an object space representation; this allows us to achieve interactive rates that are unconstrained by network performance for a wide range of display resolutions that are also robust to input lag. Furthermore, in multi-user environments, the computation of indirect lighting is amortised over participating clients.
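The split between locally computed direct lighting and asynchronously cached indirect lighting can be sketched as follows (a deliberately simplified model with invented names, not the paper's API; real indirect lighting is stored in an object-space structure, not a single scalar per patch):

```python
# Toy model of the split-lighting pipeline: the client always shades with
# locally computed direct lighting plus whatever indirect term is currently
# cached, so frame rate never blocks on the network. Cloud updates arrive
# asynchronously and are picked up by later frames.
indirect_cache = {"patch-1": 0.0}  # object-space indirect term, possibly stale

def shade(patch, direct):
    """Combine client-side direct lighting with the cached indirect term."""
    return direct + indirect_cache.get(patch, 0.0)

frame_a = shade("patch-1", direct=0.6)  # rendered while the cache is stale
indirect_cache["patch-1"] = 0.25        # asynchronous cloud update arrives
frame_b = shade("patch-1", direct=0.6)  # the next frame picks it up
print(frame_a, frame_b)
```

Because the cache lookup is purely local, a network stall degrades only the freshness of indirect lighting, not interactivity, which is the robustness-to-lag property the abstract describes.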


Computer Graphics Forum | 2018

Frame Rate vs Resolution: A Subjective Evaluation of Spatiotemporal Perceived Quality Under Varying Computational Budgets

Kurt Debattista; Keith Bugeja; Sandro Spina; Thomas Bashford-Rogers; Vedad Hulusic

Maximizing performance for rendered content requires making compromises on quality parameters depending on the computational resources available. Yet, it is currently unclear which parameters best maximize perceived quality. This work investigates perceived quality across computational budgets for the primary spatiotemporal parameters of resolution and frame rate. Three experiments are conducted. Experiment 1 (n = 26) shows that participants prefer fixed frame rates of 60 frames per second (fps) at lower resolutions over 30 fps at higher resolutions. Experiment 2 (n = 24) explores the relationship further with more budgets and quality settings and again finds 60 fps is generally preferred even when more resources are available. Experiment 3 (n = 25) permits the use of adaptive frame rates, and analyses the resource allocation across seven budgets. Results show that while participants allocate more resources to frame rate at lower budgets, the situation reverses once higher budgets are available and a frame rate of around 40 fps is achieved. Overall, the results demonstrate a complex relationship between the effects of frame rate and resolution on perceived quality. This relationship can be harnessed, via the results and models presented, to obtain more cost-effective virtual experiences.
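The budget trade-off the experiments explore can be made concrete with simple arithmetic (the numbers are illustrative, not from the paper): with a fixed pixel throughput, raising the frame rate forces a proportionally lower per-frame resolution, and vice versa.

```python
# Worked example of the spatiotemporal budget trade-off. The "budget" is a
# fixed pixel throughput (pixels shaded per second); each frame rate choice
# then determines how many pixels are available per frame.
budget = 1920 * 1080 * 30  # e.g. enough to shade 1080p at 30 fps

for fps in (30, 40, 60):
    pixels_per_frame = budget // fps
    print(fps, pixels_per_frame)
```

Under this model, moving from 30 fps to 60 fps halves the pixels available per frame, which is exactly the resolution-versus-frame-rate compromise the participants were asked to judge.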


2017 9th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games) | 2017

Telemetry-based optimisation for user training in racing simulators

Keith Bugeja; Sandro Spina; Francois Buhagiar

Motorsports require training and dedication to master, supplemented by hours of rote learning and mentoring by experts. This study explores the question of whether a serious game is a powerful enough pedagogical tool to be gainfully employed in the training of race drivers. A system of heuristics is proposed for a novel telemetry-based feedback model for contextual real-time suggestions. The model has been integrated into a racing simulation game and a study of its performance is reported here. The study consists of 27 participants, partitioned into two groups, to provide a control for the experiment. Two questionnaires have been used to acquire demographic information about the participants and help control for factors such as experience. Quantitative results show that there is an improvement for the group using the feedback system, although this improvement dissipates when the feedback is disabled again for the experimental group. Analysis of the initial results is encouraging, with the model showing promise. Additionally, the lack of cognitive retention on the part of the participants when feedback was disabled merits further investigation and future work.
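A telemetry-based feedback heuristic of the kind described might look like the following (a hypothetical rule with invented names and tolerances, shown only to illustrate the idea of contextual suggestions derived from telemetry):

```python
# Hypothetical feedback rule: compare the distance-to-corner at which the
# driver brakes against a reference lap, and emit a contextual suggestion
# when the difference exceeds a tolerance (all values in metres).
def braking_feedback(brake_distance, reference_distance, tolerance=5.0):
    delta = brake_distance - reference_distance
    if delta > tolerance:
        return "brake later"   # driver braked well before the reference point
    if delta < -tolerance:
        return "brake earlier" # driver braked well after the reference point
    return "good braking point"

# A driver braking 120 m before the corner vs. a 100 m reference lap.
print(braking_feedback(brake_distance=120.0, reference_distance=100.0))
```

A full system would apply rules like this per corner and per telemetry channel (braking, throttle, racing line), surfacing only the largest deviations in real time.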


international conference on games and virtual worlds for serious applications | 2014

GPU-Based Selective Sparse Sampling for Interactive High-Fidelity Rendering

Steven Galea; Kurt Debattista; Sandro Spina

Physically-based renderers can produce highly realistic imagery; however, such methods suffer from lengthy execution times, which make them impractical for use in interactive applications. Selective rendering exploits limitations in the human visual system to render images that are perceptually similar to high-fidelity renderings, in a fraction of the time. In this paper, we describe a novel GPU-based selective rendering algorithm that uses the density of indirect lighting samples on the image plane as a selective variable. A high-speed saliency-guided mechanism is used to sample and evaluate a set of representative pixel locations on the image plane, yielding a sparse representation of indirect lighting in the scene. An image inpainting algorithm is used to reconstruct a dense representation of the indirect lighting component, which is then combined with the direct lighting component to produce the final rendering. Experimental evaluation demonstrates that our selective rendering algorithm achieves a good speedup when compared to standard interleaved sampling, and is significantly faster than a traditional GPU-based high-fidelity renderer.
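The sparse-sample-then-reconstruct pattern at the heart of this approach can be demonstrated in one dimension (a generic sketch: linear interpolation stands in for the paper's image-inpainting step, and the sine function stands in for an expensive indirect-lighting evaluation):

```python
# Sparse sampling + reconstruction in 1D: evaluate an "expensive" function at
# a few selected locations, then reconstruct a dense signal by interpolation.
# The reconstruction error stays small even though 90% of evaluations are
# skipped, which is the source of the speedup in selective rendering.
import numpy as np

xs = np.linspace(0.0, 1.0, 101)
sample_idx = np.arange(0, 101, 10)                  # sparse sample locations
sparse_vals = np.sin(2 * np.pi * xs[sample_idx])    # "expensive" evaluations
dense = np.interp(xs, xs[sample_idx], sparse_vals)  # cheap reconstruction
err = float(np.max(np.abs(dense - np.sin(2 * np.pi * xs))))
print(err)
```

In the paper the sample locations are not uniform as here but chosen by a saliency-guided mechanism, concentrating the expensive evaluations where the visual system is most sensitive.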


international conference on games and virtual worlds for serious applications | 2014

High-Fidelity Graphics for Dynamically Generated Environments Using Distributed Computing

Keith Bugeja; Kurt Debattista; Sandro Spina; Alan Chalmers

In many serious games and applications, increasing visual fidelity enhances immersion. While physically-based renderers can produce highly realistic imagery by correctly simulating light propagation, these are impractical in the context of interactive applications due to lengthy computations. For real-time applications, several rasterisation-based techniques have been developed to augment the visual realism in a scene, such as the inclusion of shadows and ambient occlusion. These techniques, however, come at the cost of additional compute resources from GPUs. This paper describes a novel technique which takes advantage of the distributed computational resources available at peers participating in a serious game to precompute the light distribution in the virtual environment. This additional information is then included within the vertex geometry of the scene and used at no extra cost whilst rendering. Experimental evaluation shows that the rendered virtual environments are much more realistic and more closely match reference imagery generated by physically-based renderers.
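The baking idea, distributing the precomputation across peers and storing one light value per vertex, can be sketched as follows (a toy illustration with an invented stand-in for the light simulation, not the paper's pipeline):

```python
# Toy sketch of distributed light baking: each peer computes the light value
# for a slice of the vertex list, and the merged results form a per-vertex
# attribute that the renderer reads at no extra per-frame cost.
def compute_light(vertex):
    """Stand-in for an expensive light simulation (here: clamp the y coord)."""
    x, y, z = vertex
    return max(0.0, y)

vertices = [(0.0, 0.2, 0.0), (1.0, -0.5, 0.0), (0.5, 1.0, 0.5), (0.2, 0.4, 0.9)]
slices = [vertices[:2], vertices[2:]]  # work split across two peers
baked = [compute_light(v) for part in slices for v in part]
print(baked)  # one baked light value per vertex
```

Since the baked values travel with the vertex data, the rendering loop itself is unchanged; all the extra cost is paid once, up front, by the peers.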


Archive | 2014

Scene Segmentation and Understanding for Context-Free Point Clouds

Sandro Spina; Kurt Debattista; Keith Bugeja; Alan Chalmers


TPCG | 2013

Acquisition, Representation and Rendering of Real-World Models using Polynomial Texture Maps in 3D

Elaine Vassallo; Sandro Spina; Kurt Debattista

Collaboration


Dive into Sandro Spina's collaboration.

Top Co-Authors
