Publication


Featured research published by Ugo Erra.


IEEE International Conference on Information Visualization | 2003

VENNFS: a Venn-diagram file manager

R. De Chiara; Ugo Erra; Vittorio Scarano

We present a prototype file manager, VENNFS, designed to overcome some of the limitations of current desktop interfaces, which are strongly based on hierarchical file systems. VENNFS allows users to place documents and categories on a plane so that files may belong to multiple categories at once; proximity on the plane can represent similarity, and time-based filtering is supported.
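The multi-category placement described above can be illustrated with a small data-structure sketch. The Python snippet below is not from the paper; the field names and the lookup helper are invented for the illustration, and it only shows how a file can carry a plane position and several category tags at the same time.

```python
# Illustrative sketch (not VENNFS code): a file record with a 2D position and
# a set of categories, so a file can belong to several categories at once.
from dataclasses import dataclass, field

@dataclass
class FileEntry:
    name: str
    position: tuple                      # (x, y) placement on the plane
    categories: set = field(default_factory=set)

def files_in_category(entries, category):
    """Return every file tagged with the given category (a file may match many)."""
    return [e for e in entries if category in e.categories]

entries = [
    FileEntry("thesis.pdf", (0.2, 0.8), {"papers", "drafts"}),
    FileEntry("budget.xls", (0.7, 0.1), {"admin"}),
]
print([e.name for e in files_in_category(entries, "papers")])   # ['thesis.pdf']
```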


Ninth International Conference on Information Visualisation (IV'05) | 2005

Interactive 3D environments by using videogame engines

Roberto Andreoli; R. De Chiara; Ugo Erra; Vittorio Scarano

In this paper we study state-of-the-art technologies for designing interactive and cooperative 3D environments based on videogame 3D engines. We first provide a categorization of videogame 3D engines from the point of view of their usage in creating interactive 3D worlds and compare their most important characteristics. Then, we show an example of how we used a commercial videogame engine to create an interactive and enjoyable visit to an archaeological site.


International Symposium on Visual Computing | 2005

Toward real time fractal image compression using graphics hardware

Ugo Erra

In this paper, we present a parallel fractal image compression scheme that uses programmable graphics hardware. The main problem with fractal compression is the very high computing time needed to encode images. Our implementation exploits the SIMD architecture and inherent parallelism of recent graphics boards to speed up the baseline approach to fractal encoding. The results we present were achieved on cheap and widely available graphics boards.
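To make the cost of baseline fractal encoding concrete, the sketch below shows the exhaustive range/domain block search that fractal encoders perform for every range block; this brute-force inner loop is the kind of work a GPU implementation would parallelize. It is an illustrative Python/NumPy sketch, not the paper's code, and block extraction, downsampling, and symmetry transforms are omitted.

```python
import numpy as np

def best_domain_match(range_block, domain_blocks):
    """Exhaustive search: find the domain block that best approximates a range
    block under an affine map s*d + o (least-squares contrast s, brightness o)."""
    best = None
    for idx, d in enumerate(domain_blocks):
        s, o = np.polyfit(d.ravel(), range_block.ravel(), 1)
        err = float(np.sum((s * d + o - range_block) ** 2))
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best  # (error, domain index, contrast, brightness)

# Toy usage with random 8x8 blocks standing in for real image data.
rng = np.random.default_rng(0)
range_block = rng.random((8, 8))
domain_blocks = [rng.random((8, 8)) for _ in range(16)]
print(best_domain_match(range_block, domain_blocks)[:2])
```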


Universal Access in the Information Society | 2007

Personalizable edge services for Web accessibility

Ugo Erra; Gennaro Iaccarino; Delfina Malandrino; Vittorio Scarano

The Web Content Accessibility Guidelines by W3C (W3C Recommendation, May 1999, http://www.w3.org/TR/WCAG10/) provide several suggestions for Web designers on how to author Web pages so that they are accessible to everyone. In this context, this paper proposes the use of edge services as an efficient and general solution to promote accessibility and to break down the digital barriers that prevent users with disabilities from actively participating in any aspect of society. The idea behind edge services mainly concerns the advantages of personalized navigation, in which content is tailored according to different factors, such as the capabilities of the client's device, communication systems and network conditions and, finally, the preferences and/or abilities of the growing number of users that access the Web. To meet these requirements, Web designers have to efficiently provide content adaptation and personalization mechanisms in order to guarantee universal access to Internet content. The so far dominant paradigm of communication on the WWW, with its simple request/response model, cannot efficiently address such requirements. Therefore, it must be augmented with new components that attempt to enhance the scalability, performance, and ubiquity of the Web. Edge servers, acting on the HTTP data flow exchanged between client and server, allow on-the-fly content adaptation as well as other complex functionalities beyond traditional caching and content replication services. These value-added services are called edge services and include personalization and customization, aggregation from multiple sources, geographical personalization of page navigation (with insertion/emphasis of content related to the user's geographical location), translation services, group navigation and awareness for social navigation, advanced services for bandwidth optimization such as adaptive compression and format transcoding, mobility, and ubiquitous access to Internet content. This paper presents Personalizable Accessible Navigation (Pan), a set of edge services designed to improve Web page accessibility, developed and deployed on top of a programmable intermediary framework. The characteristics and the location of the services, i.e., provided by intermediaries, as well as the personalization and the ability to select multiple profiles, make Pan a platform especially suitable for accessing the Web seamlessly, including from mobile terminals.
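The snippet below is a toy illustration of the kind of on-the-fly adaptation an edge service can apply to an HTTP response before it reaches the client. It is not Pan's code: the profile structure and the adapt_page helper are invented for the example, and a real intermediary would hook a transform like this into the HTTP data flow it relays.

```python
import re

# Hypothetical user profile; the real Pan profiles are richer and server-managed.
profile = {"high_contrast": True, "min_font_pt": 14}

def adapt_page(html: str, profile: dict) -> str:
    """Apply per-user adaptations to an HTML response on an intermediary."""
    if profile.get("high_contrast"):
        css = ("<style>body{background:#000;color:#fff;font-size:%dpt}</style>"
               % profile["min_font_pt"])
        # Inject the override right after <head>, if one is present.
        html = re.sub(r"(<head[^>]*>)", r"\1" + css, html, count=1, flags=re.I)
    return html

print(adapt_page("<html><head></head><body>hello</body></html>", profile))
```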


Information Sciences | 2015

Approximate TF-IDF based on topic extraction from massive message stream using the GPU

Ugo Erra; Sabrina Senatore; Fernando Minnella; Giuseppe Caggianese

The Web is a constantly expanding global information space that includes disparate types of data and resources. Recent trends demonstrate the urgent need to manage large amounts of streaming data, especially in specific application domains such as critical infrastructure systems, sensor networks, log file analysis, search engines and, more recently, social networks. All of these applications involve large-scale data-intensive tasks, often subject to time constraints and space complexity. Algorithms, data management, and data retrieval techniques must be able to process a data stream, i.e., process data as it becomes available and provide an accurate response based solely on the portion of the stream that has already been seen. Data retrieval techniques often require a traditional storage-and-processing approach, i.e., all data must be available in the storage space in order to be processed. For instance, a widely used relevance measure is Term Frequency-Inverse Document Frequency (TF-IDF), which evaluates how important a word is in a collection of documents and requires a priori knowledge of the whole dataset.

To address this problem, we propose an approximate version of the TF-IDF measure suitable for continuous data streams (such as exchanges of messages, tweets, and sensor-based log files). The algorithm for the calculation of this measure makes two assumptions: a fast response is required, and memory is both limited and infinitely smaller than the size of the data stream. In addition, to cope with the great computational power required to process massive data streams, we also present a parallel implementation of the approximate TF-IDF calculation using Graphics Processing Units (GPUs).

This implementation of the algorithm was tested on generated and real data streams and was able to capture the most frequent terms. Our results demonstrate that the approximate version of the TF-IDF measure performs at a level comparable to the exact TF-IDF measure.
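The two assumptions above (fast response, memory far smaller than the stream) can be made concrete with a bounded-memory document-frequency estimator. The Python sketch below uses a Misra-Gries-style eviction policy as a stand-in; it is not the paper's algorithm nor its GPU implementation, and the class and method names are invented for the illustration.

```python
import math
from collections import Counter

class ApproxDF:
    """Bounded-memory document-frequency estimator (Misra-Gries-style eviction).

    Illustrative stand-in only: it keeps at most `capacity` term counters while
    consuming a stream of documents, so an IDF-like statistic can be maintained
    without storing the stream itself.
    """
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.df = Counter()       # term -> approximate document frequency
        self.docs_seen = 0

    def add_document(self, terms):
        self.docs_seen += 1
        for t in set(terms):
            if t in self.df or len(self.df) < self.capacity:
                self.df[t] += 1
            else:
                # Capacity reached: decrement every counter and drop the zeros.
                for k in list(self.df):
                    self.df[k] -= 1
                    if self.df[k] == 0:
                        del self.df[k]

    def tf_idf(self, term, term_counts):
        """TF from the current document's counts, IDF from the stream so far."""
        tf = term_counts[term] / max(1, sum(term_counts.values()))
        idf = math.log(self.docs_seen / (1 + self.df.get(term, 0)))
        return tf * idf

stream = [["gpu", "stream", "tfidf"], ["gpu", "sort"], ["tweet", "stream"]]
est = ApproxDF(capacity=4)
for doc in stream:
    est.add_document(doc)
print(round(est.tf_idf("sort", Counter(["gpu", "gpu", "sort"])), 3))
```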


Eurographics Italian Chapter Conference | 2007

Real Positioning in Virtual Environments Using Game Engines

Rosario De Chiara; Valentina Di Santo; Ugo Erra; Vittorio Scarano

Immersive virtual environments offer a natural setting for educational and instructive experiences for users, and game engine technology offers an interesting, cost-effective and efficient solution for building them. In this paper we describe an ongoing project whose goal is to provide a virtual environment where the "real" location of the user is used to position the user's avatar in the virtual environment.


International Conference on Conceptual Structures | 2012

Frequent Items Mining Acceleration Exploiting Fast Parallel Sorting on the GPU

Ugo Erra; Bernardino Frola

In this paper, we show how to employ Graphics Processing Units (GPUs) to provide an efficient and high-performance solution for finding frequent items in data streams. We discuss several design alternatives and present an implementation that exploits the great capability of graphics processors in parallel sorting. We provide an exhaustive evaluation of performance, result quality, and several design trade-offs. On an off-the-shelf GPU, the fastest of our implementations can process over 200 million items per second, which is better than the best known solutions based on Field Programmable Gate Arrays (FPGAs) and CPUs. Moreover, in previous approaches performance is directly related to the skewness of the input data distribution, while in our approach the high throughput is independent of this factor.
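The core idea of counting via sorting can be shown sequentially: sort a batch of items, then each item's frequency is the length of its run of equal neighbours. On the GPU this maps to a parallel sort plus a segmented reduction; in the hedged NumPy sketch below (not the paper's implementation) the library routines stand in for those primitives.

```python
import numpy as np

def batch_frequencies(items, k=10):
    """Sort the batch, then read item frequencies off the lengths of equal runs."""
    a = np.sort(np.asarray(items))
    # Run boundaries: positions where the sorted value changes.
    boundaries = np.flatnonzero(np.concatenate(([True], a[1:] != a[:-1])))
    counts = np.diff(np.append(boundaries, a.size))
    values = a[boundaries]
    top = np.argsort(counts)[::-1][:k]          # indices of the k largest runs
    return list(zip(values[top].tolist(), counts[top].tolist()))

print(batch_frequencies([3, 1, 3, 2, 3, 1, 2, 3], k=2))   # [(3, 4), (2, 2)]
```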


Motion in Games | 2010

BehaveRT: a GPU-based library for autonomous characters

Ugo Erra; Bernardino Frola; Vittorio Scarano

In this work, we present a GPU-based library, called BehaveRT, for the definition, real-time simulation, and visualization of large communities of individuals. We implemented a modular, flexible, and extensible architecture based on a plug-in infrastructure that enables the creation of a behavior engine system core. We used the Compute Unified Device Architecture (CUDA) for parallel programming, together with specific memory optimization techniques, to exploit the computational power of commodity graphics hardware, enabling developers to focus on the design and implementation of behavioral models. This paper illustrates the architecture of BehaveRT, the core plug-ins, and some case studies. In particular, we show two high-level behavioral models, picture and shape flocking, that generate images and shapes in 3D space by coordinating the positions and color-coding of individuals. We then present an environment discretization case study of the interaction of a community with generic virtual scenes such as irregular terrains and buildings.
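The per-individual update behind a flocking model can be sketched in a few lines. The snippet below is a toy boids-style step, not BehaveRT code and not one of its plug-ins; the function name and the rule weights are invented, and NumPy is used only to show the per-step data flow that the library would run as CUDA kernels over many individuals.

```python
import numpy as np

def flock_step(pos, vel, dt=0.1, sep_radius=1.0):
    """One step: steer toward the flock centre and away from close neighbours."""
    cohesion = (pos.mean(axis=0) - pos) * 0.01
    separation = np.zeros_like(pos)
    for i in range(len(pos)):
        diff = pos[i] - pos
        dist = np.linalg.norm(diff, axis=1)
        near = (dist > 0) & (dist < sep_radius)
        if near.any():
            separation[i] = (diff[near] / dist[near, None] ** 2).sum(axis=0)
    vel = vel + cohesion + 0.05 * separation
    return pos + vel * dt, vel

pos = np.random.rand(100, 3) * 10.0     # 100 individuals in a 10x10x10 volume
vel = np.zeros((100, 3))
pos, vel = flock_step(pos, vel)
```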


IET Software | 2010

Assessing communication media richness in requirements negotiation

Ugo Erra; Giuseppe Scanniello

A critical claim in software requirements negotiation is that group performance improves when a medium with a different richness level is used. Accordingly, the authors have conducted a study to compare traditional face-to-face communication, the richest medium, with two less rich communication media, namely a distributed three-dimensional virtual environment and a text-based structured chat. This comparison has been performed with respect to the time needed to complete a negotiation. Furthermore, as the assessment of time alone would not be meaningful, the authors have also analysed the effect of the media on the issues raised during the negotiation process and on the quality of the negotiated software requirements.


Conference on Software Maintenance and Reengineering | 2013

Using the GPU to Green an Intensive and Massive Computation System

Giuseppe Scanniello; Ugo Erra; Giuseppe Caggianese; Carmine Gravino

In this paper, we present the early results of an ongoing project aimed at giving an existing software system a more eco-sustainable lease of life. We defined a strategy and a process for migrating a subject system that performs intensive and massive computation to a Graphics Processing Unit (GPU) based architecture. We validated our solution on a software system for path-finding robot simulations. An initial comparison of the energy consumption of the original system and the greened one has also been performed. The obtained results suggest that the application of our solution produced more eco-sustainable software.

Collaboration


Dive into Ugo Erra's collaborations.

Top Co-Authors

Gennaro Cordasco

Seconda Università degli Studi di Napoli


Luca Pepe

University of Salerno
