Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Francesco Brancati is active.

Publication


Featured research published by Francesco Brancati.


IEEE Transactions on Instrumentation and Measurement | 2012

Experimental Characterization of Uncertainty Sources in a Software-Only Synchronization System

Paolo Ferrari; Alessandra Flammini; Stefano Rinaldi; Andrea Bondavalli; Francesco Brancati

Over the past few decades, centralized computing systems have been replaced by decentralized systems consisting of simple nodes interconnected by a communication network. The time references of the distributed nodes must be synchronized in order to coordinate operations or compare the data collected by the different nodes. Several synchronization protocols have been developed to be used instead of the global positioning system or dedicated synchronization systems. For example, the Network Time Protocol (NTP) is a very popular synchronization protocol used to synchronize computers over wide area networks such as the Internet. In addition, IEEE 1588, a synchronization protocol dedicated to high-performance synchronization in local networks, was recently established. However, sub-microsecond synchronization can be obtained only using dedicated hardware devices, thereby increasing the cost of each node. For these reasons, software-only implementations of IEEE 1588 (the Precision Time Protocol, PTP) are quite common in real systems, with a resulting synchronization uncertainty varying from a few to hundreds of microseconds that is quite burdensome to estimate or measure. The work presented in this paper focuses on the analysis of the major uncertainty contributions in a software-only implementation. In particular, a careful analysis of the timestamp mechanism and time management in a PC platform is carried out. In addition, a method for the experimental evaluation of the uncertainty contributions is proposed. A test case based on a software implementation of IEEE 1588 (the so-called PTPd) is presented. The experimental tests presented in the paper highlight that the main uncertainty source of a software-only synchronization approach is the timestamp method. Timestamping accuracy can be affected by the computational load of the node itself: in normal conditions, the maximum uncertainty introduced by the timestamp mechanism is on the order of 10 μs, but under high computational load it can rise up to 224 μs.
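The timestamping bottleneck described above is easy to observe on any off-the-shelf machine. The sketch below (illustrative only, not the instrumentation used in the paper) samples back-to-back reads of Python's monotonic clock; the spread of the deltas between consecutive reads is a rough proxy for software timestamping jitter:

```python
import time

def timestamp_jitter_ns(samples: int = 10_000) -> dict:
    """Estimate the granularity/jitter of back-to-back clock reads.

    Repeatedly reads the monotonic clock and records the delta between
    consecutive reads; the distribution of these deltas gives a crude
    lower bound on the uncertainty of a purely software timestamp.
    """
    deltas = []
    prev = time.monotonic_ns()
    for _ in range(samples):
        now = time.monotonic_ns()
        deltas.append(now - prev)
        prev = now
    deltas.sort()
    return {
        "min_ns": deltas[0],
        "median_ns": deltas[samples // 2],
        "max_ns": deltas[-1],
    }

stats = timestamp_jitter_ns()
```

Running this under an artificial CPU load (as the paper does with its own setup) typically stretches the maximum delta by orders of magnitude, which is exactly the load-dependence effect the abstract reports.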


international symposium on precision clock synchronization for measurement control and communication | 2009

Safe estimation of time uncertainty of local clocks

Andrea Bondavalli; Francesco Brancati; Andrea Ceccarelli

The Reliable and Self-Aware Clock (R&SAClock) is a new software clock aimed at providing resilient time information. It exploits the information collected from any chosen clock synchronization mechanism to provide both the current time and the synchronization uncertainty, understood as a conservative and self-adaptive estimate of the distance from an external global time. This paper describes an algorithm that uses statistical analysis to compute the synchronization uncertainty with a given coverage. Simulations are presented that show the behavior of the algorithm and its effectiveness.
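As a minimal sketch of the idea (not the R&SAClock algorithm itself, whose statistical machinery is more elaborate), a conservative uncertainty bound with a tunable coverage factor can be computed from recent offset samples as the mean absolute offset plus k standard deviations:

```python
import statistics

def synchronization_uncertainty(offsets, coverage_k: float = 3.0) -> float:
    """Conservative bound on the clock offset from recent offset samples.

    A simple mean-absolute-offset + k*sigma bound; k controls the
    coverage (k = 3 corresponds to ~99.7% for Gaussian offsets).
    Hypothetical illustration, not the algorithm from the paper.
    """
    mean_abs = statistics.fmean(abs(o) for o in offsets)
    sigma = statistics.stdev(offsets) if len(offsets) > 1 else 0.0
    return mean_abs + coverage_k * sigma

# Offsets (in microseconds) as a synchronization daemon might report them.
samples = [12.0, -8.0, 15.0, -11.0, 9.0, -14.0]
bound = synchronization_uncertainty(samples)
```

The point of such a bound is that it is self-adaptive: as the observed offsets grow noisier, the reported uncertainty widens automatically, which is the behavior the abstract attributes to R&SAClock.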


IEEE Transactions on Dependable and Secure Computing | 2015

Continuous and Transparent User Identity Verification for Secure Internet Services

Andrea Ceccarelli; Leonardo Montecchi; Francesco Brancati; Paolo Lollini; Angelo Marguglio; Andrea Bondavalli

Session management in distributed Internet services is traditionally based on username and password, explicit logouts, and mechanisms of user-session expiration using classic timeouts. Emerging biometric solutions allow substituting username and password with biometric data during session establishment, but even in such an approach a single verification is still deemed sufficient, and the identity of a user is considered immutable during the entire session. Additionally, the length of the session timeout may impact the usability of the service and the consequent client satisfaction. This paper explores promising alternatives offered by applying biometrics to the management of sessions. A secure protocol is defined for perpetual authentication through continuous user verification. The protocol determines adaptive timeouts based on the quality, frequency, and type of biometric data transparently acquired from the user. The functional behavior of the protocol is illustrated through Matlab simulations, while model-based quantitative analysis is carried out to assess the ability of the protocol to counter security attacks exercised by different kinds of attackers. Finally, the current prototype for PCs and Android smartphones is discussed.
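The adaptive-timeout idea can be sketched as follows; the linear scaling rule and the bounds here are hypothetical, chosen only to illustrate how the quality of a biometric match could drive the session timeout (the protocol in the paper also weighs frequency and type of the acquired data):

```python
def next_timeout(base_s: float, match_quality: float,
                 min_s: float = 5.0, max_s: float = 300.0) -> float:
    """Scale the session timeout by the quality of the latest biometric
    match (a score in [0, 1]): a high-quality verification earns a longer
    timeout, a poor one forces a quicker re-verification.
    All constants are hypothetical, for illustration only.
    """
    return max(min_s, min(max_s, base_s * match_quality))

good = next_timeout(120.0, 0.9)   # strong match: generous timeout
weak = next_timeout(120.0, 0.1)   # weak match: re-verify soon
```

Because the timeout shrinks as the evidence of the user's presence weakens, the session degrades gracefully instead of relying on a single fixed expiration.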


IEEE Transactions on Instrumentation and Measurement | 2013

Master Failure Detection Protocol in Internal Synchronization Environment

Andrea Bondavalli; Francesco Brancati; Alessandra Flammini; Stefano Rinaldi

During the last decades, the wide advance in networking technologies has allowed the development of distributed monitoring and control systems. These systems show advantages over centralized solutions: heterogeneous nodes can be easily integrated, new nodes can be easily added to the system, and there is no single point of failure. For these reasons, distributed systems have been adopted in different fields, such as industrial automation and telecommunication systems. Recently, due to technology improvements, distributed systems have also been adopted in the control of power-grid and transport systems, i.e., the so-called large-scale complex critical infrastructures. Given the strict safety, security, reliability, and real-time requirements, using distributed systems to control such critical infrastructures demands that adequate mechanisms be established to share the same notion of time among the nodes. For this class of systems, a synchronization protocol such as the IEEE 1588 standard can be adopted. This type of synchronization protocol was designed to achieve very precise clock synchronization, but it may not be sufficient to ensure the safety of the entire system. For example, instability of the local oscillator of a reference node, due to a failure of the node itself or to malicious attacks, could influence the quality of synchronization of all nodes. In recent years, a new software clock, the Reliable and Self-Aware Clock (R&SAClock), which is designed to estimate the quality of synchronization through statistical analysis, was developed and tested. This statistical instrument can be used to identify anomalous conditions with respect to normal behavior. A careful analysis and classification of the main points of failure of the IEEE 1588 standard suggests that the reference node, called the master, is the weak point of the system. For this reason, this paper deals with the detection of faults of the reference node(s) of an IEEE 1588 setup.
This paper describes and evaluates the design of a protocol for timing failure detection for internal synchronization, based on a revised version of the R&SAClock software suitably modified to cross-exploit the information on the quality of synchronization among all the nodes of the system. The experimental evaluation of this approach confirms the capability of the synchronization uncertainty provided by R&SAClock to reveal anomalous behaviors of either the local node or the reference node. In fact, it is shown that, through a proper configuration of the parameters of the protocol, the system is able to detect all the failures injected on the master in different experimental conditions and to correctly identify failures on slaves with a probability of 87%.
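Cross-exploiting per-node uncertainty lends itself to a simple quorum intuition, sketched below. This is an illustration of the idea, not the protocol evaluated in the paper: if only one slave reports degraded synchronization, a local fault is the likelier cause, while simultaneous degradation across a quorum of slaves points at their common reference, the master.

```python
def suspect_master(uncertainties_us, threshold_us: float,
                   quorum: float = 0.5) -> bool:
    """Flag the master when a quorum of slaves reports degraded sync.

    uncertainties_us: synchronization uncertainty reported by each
    slave (microseconds). The threshold and quorum are hypothetical
    tuning parameters for this sketch.
    """
    degraded = sum(1 for u in uncertainties_us if u > threshold_us)
    return degraded / len(uncertainties_us) > quorum

one_bad = suspect_master([10.0, 12.0, 400.0, 11.0], 100.0)   # local fault
most_bad = suspect_master([350.0, 400.0, 390.0, 12.0], 100.0)  # master suspect
```

Tuning the threshold and quorum is exactly the kind of parameter configuration the abstract says determines the detection and identification probabilities.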


high assurance systems engineering | 2012

Design and Implementation of Real-Time Wearable Devices for a Safety-Critical Track Warning System

Andrea Ceccarelli; Andrea Bondavalli; Joao Figueiras; Boris Malinowsky; Jurij Wakula; Francesco Brancati; Carlo Dambra; Andrea Seminatore

Trackside railway workers can benefit from intelligent systems for automatic track warning that are able to safely (i) detect trains or rolling stock approaching the worksite, and (ii) notify the workers of their arrival. The use of wearable mobile devices to monitor workers' positions and notify them of train arrivals poses serious challenges, mainly in terms of service timeliness, safety, security, and ergonomics (the latter to define notification signals that are always perceivable by the workers). This paper presents the design and the prototype of the Mobile Terminal (MT), a wearable, real-time, wireless, safety-critical device which exploits information received from track-monitoring devices to inform a worker about trains or rolling stock approaching the worksite. The MT design concept is based on a hybrid architecture to favor the apportionment of different requirements, in terms of timing and security, to the different parts of the MT. Additionally, the MT includes novel solutions to interface with the worker, to realize an accurate localization service, and to achieve safety-critical real-time communication.


service oriented software engineering | 2015

Towards an understanding of emergence in systems-of-systems

Hermann Kopetz; Oliver Höftberger; Bernhard Frömel; Francesco Brancati; Andrea Bondavalli

Emergence is a systemic phenomenon in a System-of-Systems (SoS) that cannot be reduced to the behavior of the isolated parts of the system. It is the objective of this paper to contribute to the understanding of emergent phenomena in SoSs. After a short look at the literature on emergence in the domains of philosophy and computer science, the paper continues with an elaboration on multi-level nearly-decomposable systems, gives a tentative definition of emergence, and discusses how emergent behavior manifests itself in an SoS.


Operating Systems Review | 2014

Insider Threat Assessment: a Model-Based Methodology

Nicola Nostro; Andrea Ceccarelli; Andrea Bondavalli; Francesco Brancati

Security is a major challenge for today's companies, especially ICT ones which manage large-scale cyber-critical systems. Amongst the multitude of attacks and threats to which a system is potentially exposed, there are insider attackers, i.e., users with legitimate access who abuse or misuse their power, thus leading to unexpected security violations (e.g., acquiring and disseminating sensitive information). These attacks are very difficult to detect and mitigate due to the nature of the attackers, who often are a company's employees motivated by socio-economic reasons, and to the fact that attackers operate within their granted restrictions. As a consequence, insider attackers constitute an actual threat for ICT organizations. In this paper we present our methodology, together with the application of existing supporting libraries and tools from the state of the art, for insider threat assessment and mitigation. The ultimate objective is to define the motivations and the target of an insider, investigate the likelihood and severity of potential violations, and finally identify appropriate countermeasures. The methodology also includes a maintenance phase during which the assessment can be updated to reflect system changes. As a case study, we apply our methodology to the crisis management system Secure!, which includes different kinds of users and is consequently potentially exposed to a large set of insider threats.


instrumentation and measurement technology conference | 2011

Evaluation of timestamping uncertainty in a software-based IEEE1588 implementation

Paolo Ferrari; Alessandra Flammini; Stefano Rinaldi; Andrea Bondavalli; Francesco Brancati

In recent years, IEEE 1588 has become the de-facto solution for obtaining strict synchronization over a local area network. The purpose of the paper is to analyze the uncertainty sources that affect a software-only implementation of IEEE 1588 (the so-called PTPd) based on off-the-shelf PC components. A careful analysis of the timestamp mechanism and time management in a PC platform with an operating system is carried out. The experimental setup presented in the paper highlights that the main uncertainty source of a software-only synchronization approach is the timestamp method. At the end of the paper, an index to evaluate the timestamping uncertainty, called the Timestamping Uncertainty Index (TUI), is proposed and applied to the test case presented.


symposium on reliable distributed systems | 2010

Experimental Validation of a Synchronization Uncertainty-Aware Software Clock

Andrea Bondavalli; Francesco Brancati; Andrea Ceccarelli; Michele Vadursi

A software clock capable of self-evaluating its synchronization uncertainty is experimentally validated for a specific implementation on a node synchronized through NTP. The validation methodology takes advantage of an external node equipped with a GPS-synchronized clock acting as a reference, which is connected to the node hosting the system under test through a Fast Ethernet connection. Experiments are carried out for different values of the software clock parameters and different types of workload, and address the possible occurrence of faults in the system under test and in the NTP synchronization mechanism. The validation methodology is designed to be as unintrusive as possible and to guarantee a resolution on the order of a few hundred microseconds. The experimental results show very good performance of R&SAClock, and their analysis gives valuable hints for further improvements.


2011 IEEE International Workshop on Measurements and Networking Proceedings (M&N) | 2011

Towards identifying OS-level anomalies to detect application software failures

Antonio Bovenzi; Stefano Russo; Francesco Brancati; Andrea Bondavalli

The next generation of critical systems, namely Large-scale Complex Critical Infrastructures (LCCIs), requires efficient runtime management, reconfiguration strategies, and the ability to take decisions on the basis of the current and past behavior of the system. Anomaly-based detection, leveraging information gathered at Operating System (OS) level (e.g., the number of system call errors, signals, and held semaphores in the time unit), seems to be a promising approach to reveal application faults online. Recently, an experimental campaign to evaluate the performance of two anomaly detection algorithms was performed on a case study from the Air Traffic Management (ATM) domain, deployed under the popular OS used in the production environment, i.e., Red Hat 5 EL. In this paper we investigate the impact of the OS and the monitored resources on the quality of the detection by executing experiments on Windows Server 2008. The experimental results allow identifying which of the two operating systems provides monitoring facilities best suited to implement the anomaly detection algorithms that we have considered. Moreover, a numerical sensitivity analysis of the detector parameters is carried out to understand the impact of their setting on the performance.
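As a minimal illustration of OS-level anomaly detection (a plain z-score rule, much simpler than the algorithms evaluated in the paper, with invented metric names and sample data), one can flag metrics whose current value deviates sharply from their recent history:

```python
import statistics

def detect_anomalies(history, current, z_threshold: float = 3.0):
    """Return the metric names whose current value lies more than
    z_threshold standard deviations from their historical mean.

    history: dict mapping metric name -> list of past samples.
    current: dict mapping metric name -> latest sample.
    """
    anomalous = set()
    for name, past in history.items():
        mu = statistics.fmean(past)
        sigma = statistics.stdev(past)
        if sigma > 0 and abs(current[name] - mu) / sigma > z_threshold:
            anomalous.add(name)
    return anomalous

# Hypothetical OS-level counters sampled once per time unit.
history = {"syscall_errors": [2, 3, 2, 4, 3, 2],
           "held_semaphores": [5, 5, 6, 5, 5, 6]}
current = {"syscall_errors": 40, "held_semaphores": 5}
flagged = detect_anomalies(history, current)
```

Which raw counters are worth monitoring, and how cheaply each OS exposes them, is precisely the OS-dependence question the paper investigates.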

Collaboration


Dive into Francesco Brancati's collaboration.

Top Co-Authors

Michele Vadursi

University of Naples Federico II

Stefano Russo

University of Naples Federico II

Bernhard Frömel

Vienna University of Technology

Diego Santoro

University of Naples Federico II