Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alessandro Amoroso is active.

Publication


Featured research published by Alessandro Amoroso.


International Conference on Supercomputing | 1992

Paralex: an environment for parallel programming in distributed systems

Ozalp Babaoglu; Lorenzo Alvisi; Alessandro Amoroso; Renzo Davoli; Luigi Alberto Giachini

Modern distributed systems consisting of powerful workstations and high-speed interconnection networks are an economical alternative to special-purpose supercomputers. The technical issues that need to be addressed in exploiting the parallelism inherent in a distributed system include heterogeneity, high-latency communication, fault tolerance and dynamic load balancing. Current software systems for parallel programming provide little or no automatic support towards these issues and require users to be experts in fault-tolerant distributed computing. The Paralex system is aimed at exploring the extent to which the parallel application programmer can be liberated from the complexities of distributed systems. Paralex is a complete programming environment and makes extensive use of graphics to define, edit, execute and debug parallel scientific applications. All of the necessary code for distributing the computation across a network and replicating it to achieve fault tolerance and dynamic load balancing is automatically generated by the system. In this paper we give an overview of Paralex and present our experiences with a prototype implementation.
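The abstract presents Paralex applications as graphs of computation nodes that the system distributes, replicates, and schedules automatically. Paralex itself is a graphical environment, so the following Python sketch is only a hypothetical illustration of that dataflow style (the function names and the process-pool runtime are assumptions, not Paralex's API):

```python
# Hypothetical sketch of a Paralex-style dataflow program: the application is a
# DAG of side-effect-free "filter" nodes; the runtime decides where each node runs.
from concurrent.futures import ProcessPoolExecutor

def split(data):          # source node: partition the input
    half = len(data) // 2
    return data[:half], data[half:]

def square_sum(chunk):    # worker nodes: independent, so they may run in parallel
    return sum(x * x for x in chunk)

def combine(a, b):        # sink node: merge partial results
    return a + b

def run_graph(data):
    left, right = split(data)
    with ProcessPoolExecutor(max_workers=2) as pool:
        # The two square_sum nodes have no edge between them, so a Paralex-like
        # runtime is free to place them on different hosts (here: processes).
        fa = pool.submit(square_sum, left)
        fb = pool.submit(square_sum, right)
        return combine(fa.result(), fb.result())

if __name__ == "__main__":
    print(run_graph(list(range(1000))))   # -> 332833500
```

The point of the sketch is the division of labor: the programmer declares pure nodes and their edges, while placement, communication, and replication for fault tolerance are left to the runtime, which is what Paralex automates.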


Computer Networks | 2011

Going Realistic and Optimal: A Distributed Multi-Hop Broadcast Algorithm for Vehicular Safety

Alessandro Amoroso; Gustavo Marfia; Marco Roccetti

Highway safety is a subject of great interest and significant investment by governments, navigation system companies and street management authorities. In this context, an important role is played by applications designed to warn drivers of upcoming dangers. An example is vehicular accident warning systems, which advertise accident events to approaching vehicles. The effectiveness of the vehicular accident warning systems currently in use can be jeopardized by: (a) their inability to provide an accident warning to the closest approaching vehicles; and (b) high delays in advertising an event. In fact, such systems are unable to reach the vehicles that are closest to an accident site due to the absence of any deployed automatic detection and broadcast mechanisms. The future deployment of Vehicular Ad hoc Networks (VANETs) can fill this gap. By leveraging the distributed nature of ad hoc networks, accident warning systems can rapidly alert the vehicles most at risk of involvement in a crash. To reach this goal, VANET-based accident warning systems require the design of efficient broadcast algorithms. A number of solutions have been proposed in the past few years. However, to the best of our knowledge, none of these proposals assumes realistic wireless propagation scenarios. The scope of this paper is to present an optimal distributed algorithm, working at the application layer, for the broadcast of safety messages in VANETs. Optimality, in terms of delay, is achieved in unidimensional highway scenarios and under realistic wireless propagation assumptions. This is the only algorithm, to date, capable of reaching all vehicles with the minimum number of transmissions within a realistic setting.
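The abstract claims delay optimality for a distributed, application-layer broadcast but does not spell out the mechanism. The sketch below is therefore only a generic example of the family this algorithm belongs to, in which each receiver sets a rebroadcast timer that shrinks with its distance from the sender so that the farthest successful receiver relays first; the parameter values and the loss model are invented.

```python
# Hypothetical distance-weighted contention scheme for a multi-hop safety broadcast:
# receivers farther from the sender wait less, rebroadcast first, and implicitly
# suppress closer vehicles that overhear the relay.
import random

MAX_WAIT_MS = 100.0        # assumed contention window
NOMINAL_RANGE_M = 300.0    # assumed nominal radio range

def contention_delay(distance_m):
    """Shorter wait for receivers near the edge of the sender's range."""
    d = min(max(distance_m, 0.0), NOMINAL_RANGE_M)
    return MAX_WAIT_MS * (1.0 - d / NOMINAL_RANGE_M)

def simulate_hop(sender_pos, vehicle_positions, loss_prob=0.2):
    """Return the position of the vehicle that wins contention for this hop."""
    receivers = []
    for pos in vehicle_positions:
        dist = abs(pos - sender_pos)
        # Realistic propagation: even in-range receptions can fail.
        if dist <= NOMINAL_RANGE_M and random.random() > loss_prob:
            receivers.append((contention_delay(dist), pos))
    if not receivers:
        return None                # nobody received: the sender must retransmit
    receivers.sort()               # the smallest delay fires first
    return receivers[0][1]         # that vehicle becomes the next relay

if __name__ == "__main__":
    random.seed(1)
    platoon = [50.0 * i for i in range(1, 20)]   # vehicles behind an accident at 0 m
    print("first relay at", simulate_hop(0.0, platoon), "m")
```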


IEEE Transactions on Vehicular Technology | 2013

Safe Driving in LA: Report from the Greatest Intervehicular Accident Detection Test Ever

Gustavo Marfia; Marco Roccetti; Alessandro Amoroso; Giovanni Pau

The UN Economic Commission's Statistics of Road Traffic Accidents report of 2011 shows that every year, about 150 000 human beings lose their lives on the roads of the western world. Although it is a common belief that this figure could shrink with the use of new sensor and communication technologies, unfortunately, no such system has hit the road to date. Ideally, if such technologies were put into place, vehicles could be part of a vehicular ad hoc network (VANET) capable of spreading relevant information about dangerous events (e.g., car accidents) to all approaching drivers. However, all this is mainly supported by simulation studies, as no practical results have been published to date revealing the effective performance of such systems at work. In this paper, we fill this gap, presenting a detailed description of the largest set of experiments performed to date (a few thousand, throughout the streets of Los Angeles) with an accident warning system specifically devised for highway scenarios. In particular, among all the possible candidate schemes, we ran a few thousand experiments with the accident warning algorithm that was proven to be optimal in terms of bandwidth usage and covered distance in realistic scenarios. Our experiments confirm what has been observed before in theory and simulation, i.e., that the use of such a system can reduce the number of vehicles involved in highway pileups by as much as 40%.


ACM SIGOPS European Workshop | 1996

NILE: wide-area computing for high energy physics

Keith Marzullo; Michael Ogg; Aleta Ricciardi; Alessandro Amoroso; F. Andrew Calkins; Eric Rothfus

The CLEO project [2], centered at Cornell University, is a large-scale high energy physics project. The goals of the project arise from an esoteric question---why is there apparently so little antimatter in the universe?---and the computational problems that arise in trying to answer this question are quite challenging.

To answer this question, the CESR storage ring at Cornell is used to generate a beam of electrons directed at an equally strong beam of positrons. These two beams meet inside a detector that is embedded in a magnetic field and is equipped with sensors. The collisions of electrons and positrons generate several secondary subatomic particles. Each collision is called an event and is sensed by detecting charged particles (via the ionization they produce in a drift chamber) and neutral particles (in the case of photons, via their deposition of energy in a crystal calorimeter), as well as by other specialized detector elements. Most events are ignored, but some are recorded in what is called raw data (typically 8 Kbytes per event). Offline, a second program called pass2 computes, for each event, the physical properties of the particles, such as their momenta, masses, and charges. This compute-bound program produces a new set of records describing the events (now typically 20 Kbytes per event). Finally, a third program reads these events and produces a lossily-compressed version of only certain frequently-accessed fields, written in what is called roar format (typically 2 Kbytes per event).

The physicists analyze this data with programs that are, for the most part, embarrassingly parallel and I/O limited. Such programs typically compute a result based on a projection of a selection of a large number of events, where the result is insensitive to the order in which the events are processed. For example, a program may construct histograms, or compute statistics, or cull the raw data for physical inspection. The projection is either the complete pass2 record or (much more often) the smaller roar record, and the selection is done in an ad-hoc manner by the program itself.

Other programs are run as well. For example, a Monte Carlo simulation of the experiment (called montecarlo) is also run in order to correct the data for detector acceptance and inefficiencies, as well as to test aspects of the model used to interpret the data. This program is compute bound. Another important example is called recompress. Roughly every two years, improvements in detector calibration and reconstruction algorithms make it worthwhile to recompute more accurate pass2 data (and hence, more accurate roar data) from all of the raw data. This program is compute-bound (it currently requires 24 200-MIP workstations running flat out for three months) and so must be carefully worked into the schedule so that it does not seriously impact ongoing operations.

Making this more concrete, the current experiment generates approximately 1 terabyte of event data a year. Only recent roar data can be kept on disk; all other data must reside on tape. The data processing demands consume approximately 12,000 SPECint92 cycles a year. Improvements in the performance of CESR and the sensitivity of the detector will cause both of these values to go up by a factor of ten in the next few years, which will correspondingly increase the storage and computational needs by a factor of ten.

The CLEO project prides itself on being able to do big science on a tight budget, and so the programming environment that the CLEO project provides for researchers is innovative but somewhat primitive. Jobs that access the entire data set can take days to complete. To circumvent limited access to tape, the network, or compute resources close to the central disk, physicists often do preliminary selections and projections (called skims) to create private disk data sets of events for further local analysis. Limited resources usually exact a high human price for resource and job management and, ironically, can sometimes lead to inefficiencies. Given the increase in data storage, data retrieval, and computational needs, it has become clear that the CLEO physicists require a better distributed environment in which to do their work.

Hence, an NSF-funded National Challenge project was started with participants from high energy physics, distributed computing, and data storage, in order to provide a better environment for the CLEO experiment. The goals of this project, called NILE [7], are: (1) to build a scalable environment for storing and processing High Energy Physics data from the CLEO experiment, where the environment must scale to allow 100 terabytes or more of data to be addressable and be able to use several hundreds of geographically dispersed processors; (2) to radically decrease the processing time of computations through parallelism; and (3) to be practicable: NILE, albeit in a limited form, should be deployed very soon, and evolve to its full form by the end of the project in June 1999. Finally, the CLEO necessity of building on a budget carries over to NILE. There are some more expensive resources, such as ATM switches and tape silos, that it will be necessary to use. However, as far as possible we are using commodity equipment, and free or inexpensive software whenever possible. For example, one of our principal development platforms is Pentium-based PCs, interconnected with 100 Mbps Ethernet, running Linux and the GNU suite of tools.
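The abstract characterizes the physicists' analyses as order-insensitive selections and projections over millions of small event records, merged into results such as histograms. As a hedged illustration of that job shape (not NILE's actual API; the event fields, sizes, and selection rule are invented), here is a minimal Python sketch that builds a histogram from a selected field of synthetic events, split across worker processes:

```python
# Hypothetical sketch of an order-insensitive selection/projection job over events,
# in the spirit of the CLEO analyses described above (field names are invented).
from concurrent.futures import ProcessPoolExecutor
from collections import Counter
import random

def make_events(n):
    """Stand-in for reading compressed 'roar'-style records from disk or tape."""
    rng = random.Random(42)
    return [{"charge": rng.choice((-1, 0, 1)), "energy": rng.uniform(0.0, 10.0)}
            for _ in range(n)]

def partial_histogram(events):
    """Selection (charged only) + projection (energy bin); order does not matter."""
    hist = Counter()
    for ev in events:
        if ev["charge"] != 0:
            hist[int(ev["energy"])] += 1
    return hist

def analyze(events, workers=4):
    chunk = (len(events) + workers - 1) // workers
    parts = [events[i:i + chunk] for i in range(0, len(events), chunk)]
    total = Counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for part in pool.map(partial_histogram, parts):
            total.update(part)     # merging partial results is associative
    return total

if __name__ == "__main__":
    print(analyze(make_events(100_000)).most_common(3))
```

Because the partial histograms merge associatively, the same pattern scales from a handful of local processes to the hundreds of dispersed processors that NILE targets.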


Annual European Computer Conference | 1991

Mapping parallel computations onto distributed systems in Paralex

Ozalp Babaoglu; Lorenzo Alvisi; Alessandro Amoroso; Renzo Davoli

Paralex is a programming environment that allows parallel programs to be developed and executed on distributed systems as if the latter were uniform parallel multiprocessor computers. Architectural heterogeneity, remote communication and failures are rendered transparent to the programmer through automatic system support. The authors address the problems of initial mapping and dynamic alteration of the association between parallel computation components and distributed hosts. Results include novel heuristics and mechanisms to resolve these problems despite the complexities introduced by architectural heterogeneity and fault tolerance.
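The mapping problem mentioned here, assigning computation-graph nodes to heterogeneous hosts, is commonly attacked with greedy heuristics. The sketch below shows one such generic heuristic (not the Paralex heuristic itself; the node costs, host speeds, and earliest-finish rule are assumptions):

```python
# Hypothetical greedy mapping of dataflow nodes onto heterogeneous hosts:
# always place the next-heaviest node on the host that would finish it earliest.
import heapq

def greedy_map(node_costs, host_speeds):
    # Priority queue of (estimated finish time, host name); all hosts start idle.
    heap = [(0.0, host) for host in host_speeds]
    heapq.heapify(heap)
    placement = {}
    for node, cost in sorted(node_costs.items(), key=lambda kv: -kv[1]):
        finish, host = heapq.heappop(heap)
        placement[node] = host
        heapq.heappush(heap, (finish + cost / host_speeds[host], host))
    return placement

if __name__ == "__main__":
    nodes = {"fft": 8.0, "filter": 3.0, "reduce": 1.0, "render": 6.0}
    hosts = {"sparc1": 1.0, "sparc2": 1.0, "rs6000": 2.0}   # relative speeds (assumed)
    print(greedy_map(nodes, hosts))
```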


Cognitive Radio and Advanced Spectrum Management | 2011

Cognitive cars: constructing a cognitive playground for VANET research testbeds

Gustavo Marfia; Marco Roccetti; Alessandro Amoroso; Mario Gerla; Giovanni Pau; Jae-Han Lim

Simulation today plays a key role in the study and understanding of extremely complex systems, which range from transportation networks to virus spread, and include large-scale vehicular ad hoc networks (VANETs). Regarding VANET scenarios, until very recently, simulation has represented the only tool with which it was possible to estimate and compare the performance of different communication protocols. In fact, it was not possible to thoroughly test any VANET-based multi-hop communication system on the road, as no highly dense vehicular testbed exists to date. This situation has recently changed with the introduction of a new cognitive approach to VANET systems research, where it has been shown that it is possible to perform realistic experiments using only a few real vehicular resources (i.e., only a few vehicles that are equipped with wireless communication interfaces). The scope of this paper is to show that it is possible to move further ahead along this recently drawn path by utilizing the features provided by cognitive network technologies. In particular, we will show that cognitive interfaces can play a role as an additional tunable dimension within an experimental platform where highly dense vehicular testbeds can be structured, even in the presence of only a few real vehicular resources. The advantage is twofold: (a) they can be used to test new strategies for dealing with the scarcity of spectrum in an environment as dynamic as the vehicular one; and (b) they can be used to test the performance of VANET protocols as a function of different frequencies and interface switching delays. As an example of how this can be done, we provide preliminary results from a set of experiments performed with a highway accident warning system and with a cognitive network based on the Microsoft Software Radio (SORA) technology.
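One concrete use the abstract gives for cognitive interfaces is measuring a VANET protocol as a function of operating frequency and interface switching delay. The sketch below is a purely illustrative experiment grid over those two tunable dimensions; the bands, delays, and measurement stub are placeholders, not the authors' SORA-based setup.

```python
# Hypothetical experiment grid over the two "tunable dimensions" mentioned above:
# carrier frequency and interface switching delay (all values are placeholders).
from itertools import product

FREQS_MHZ = (700, 2400, 5900)          # assumed candidate bands
SWITCH_DELAYS_MS = (1, 10, 50)         # assumed interface switching delays

def run_trial(freq_mhz, switch_delay_ms):
    """Placeholder for one testbed run; returns a fake end-to-end delay in ms."""
    return 20.0 + switch_delay_ms + (5900 - freq_mhz) * 0.001

def sweep():
    results = {}
    for freq, delay in product(FREQS_MHZ, SWITCH_DELAYS_MS):
        results[(freq, delay)] = run_trial(freq, delay)
    return results

if __name__ == "__main__":
    for (freq, delay), latency in sorted(sweep().items()):
        print(f"{freq} MHz, switch {delay} ms -> {latency:.1f} ms")
```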


International Wireless Internet Conference | 2008

The farther relay and oracle for VANET: preliminary results

Alessandro Amoroso; Marco Ciaschini; Marco Roccetti

We present a novel protocol for fast multi-hop message propagation in ad hoc vehicular networks (VANETs). Our approach, FROV (the Farther Relay and Oracle for VANET), has been designed to achieve optimal performance in scenarios that are very likely in practice but not commonly addressed in the literature: it handles asymmetric communications and varying transmission ranges, and in this setting it is able to broadcast any message with the minimum number of hops. Moreover, our proposal is scalable with respect to the number of participating vehicles, and tolerates vehicles that leave or join the platoon. At the current state of development, our protocol is optimal in the case of unidimensional roads, and we are studying its extension to a web of urban roads. This paper presents the preliminary results of simulations carried out to verify the feasibility of our proposal.
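On a unidimensional road, minimizing hops amounts to always handing the message to the farthest vehicle the current relay can actually reach. The sketch below illustrates that idea in isolation; the per-vehicle transmission ranges and the reachability test are invented stand-ins for details the abstract does not give.

```python
# Hypothetical farthest-relay selection on a unidimensional road: at every hop the
# message is handed to the farthest vehicle the current relay can actually reach,
# which yields the minimum hop count on a line (per-vehicle ranges are invented).
def farthest_relay_chain(positions, ranges, src_index):
    """positions: sorted vehicle positions (m); ranges: per-vehicle TX range (m)."""
    chain = [src_index]
    current = src_index
    while True:
        reach = positions[current] + ranges[current]
        # farthest vehicle ahead of `current` that is within the current relay's range
        candidates = [i for i in range(current + 1, len(positions))
                      if positions[i] <= reach]
        if not candidates:
            return chain           # message cannot propagate any farther
        current = max(candidates, key=lambda i: positions[i])
        chain.append(current)
        if current == len(positions) - 1:
            return chain           # reached the tail of the platoon

if __name__ == "__main__":
    pos = [0, 120, 260, 430, 600, 790]
    rng = [300, 250, 200, 280, 260, 240]   # varying per-vehicle ranges (assumed)
    print(farthest_relay_chain(pos, rng, 0))   # -> [0, 2, 3, 4, 5]
```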


IEEE Transactions on Parallel and Distributed Systems | 1996

Parallel computing in networks of workstations with Paralex

Renzo Davoli; Luigi Alberto Giachini; Ozalp Babaoglu; Alessandro Amoroso; Lorenzo Alvisi

Modern distributed systems consisting of powerful workstations and high-speed interconnection networks are an economical alternative to special-purpose supercomputers. The technical issues that need to be addressed in exploiting the parallelism inherent in a distributed system include heterogeneity, high-latency communication, fault tolerance and dynamic load balancing. Current software systems for parallel programming provide little or no automatic support towards these issues and require users to be experts in fault-tolerant distributed computing. The Paralex system is aimed at exploring the extent to which the parallel application programmer can be liberated from the complexities of distributed systems. Paralex is a complete programming environment and makes extensive use of graphics to define, edit, execute, and debug parallel scientific applications. All of the necessary code for distributing the computation across a network and replicating it to achieve fault tolerance and dynamic load balancing is automatically generated by the system. In this paper we give an overview of Paralex and present our experiences with a prototype implementation.


Consumer Communications and Networking Conference | 2012

Creative testbeds for VANET research: A new methodology

Alessandro Amoroso; Gustavo Marfia; Marco Roccetti; Giovanni Pau

The ever-increasing processing power that can today support large-scale and detailed simulations has increased the depth of the research carried out on protocols and applications developed for single-hop and multi-hop Vehicular Ad Hoc Network (VANET) environments. It is now possible, for example, to verify the effectiveness of peer-to-peer one-hop file exchange protocols between vehicles while taking into account the effects that buildings have on point-to-point transmissions at street intersections, or to verify how a high density of vehicles can impact the transfer of multimedia information through multiple hops between passengers involved in an online game. However, what has not been possible so far, for the obvious reason that no highly dense VANET exists in reality, is to effectively test any type of application or communication protocol in a real setting, especially for scenarios concerned with multi-hop communications. But this may change with the introduction of a creative approach to VANET research: we describe here how it is possible to experiment with applications and protocols in scenarios that are close to reality by simply using a few real vehicle resources. As an example of how this can be done, we provide preliminary results from a set of experiments on a vehicular highway accident warning system, results that would not have been observable in reality without the adoption of our creative methodology.


International Conference on Distributed Computing Systems | 1998

Wide-area Nile: a case study of a wide-area data-parallel application

Alessandro Amoroso; Keith Marzullo; Aleta Ricciardi

The Nile system is a distributed environment for running very large, data-intensive applications across a network of commodity workstations. These applications process data from elementary particle collisions, generated by the Cornell Electron Storage Ring, and are used by physicists of the CLEO experiment. The applications have a simple data-parallel structure, and so Nile executes them using as much parallelism as is available. Nile currently runs at any single site. It is being used by alpha testers and is scheduled for beta release in March 1998. We describe how we are adapting this local-area Nile system to allow for wide-area, multiple site interactions. In particular, we consider the two problems of scaling and of fault tolerance.
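The two problems named here, scaling and fault tolerance, show up as soon as partitions of a data-parallel job are dispatched to multiple sites. The hypothetical sketch below shows the basic pattern: partitions farmed out site by site, with a failed partition retried at another site (the site names, failure model, and retry policy are all assumptions, not Nile's design).

```python
# Hypothetical multi-site dispatch of a data-parallel job with simple retry on
# failure; the sites, failure odds, and work function are all invented.
import random

SITES = ("cornell", "utexas", "ucsd")

def run_partition(site, partition):
    """Pretend to run one partition at a site; occasionally 'fail'."""
    if random.random() < 0.2:
        raise RuntimeError(f"{site} unreachable")
    return sum(partition)

def run_job(data, parts=6):
    chunk = (len(data) + parts - 1) // parts
    partitions = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    total = 0
    for k, part in enumerate(partitions):
        for attempt in range(len(SITES)):            # retry each partition elsewhere
            site = SITES[(k + attempt) % len(SITES)]
            try:
                total += run_partition(site, part)
                break
            except RuntimeError:
                continue
        else:
            raise RuntimeError("partition failed at every site")
    return total

if __name__ == "__main__":
    random.seed(7)
    print(run_job(list(range(1000))))   # 499500, unless every site fails for a partition
```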

Collaboration


Dive into Alessandro Amoroso's collaboration.

Top Co-Authors

Lorenzo Alvisi
University of Texas at Austin

Giovanni Pau
University of California