Anooshiravan Saboori
University of Illinois at Urbana–Champaign
Publications
Featured research published by Anooshiravan Saboori.
conference on decision and control | 2007
Anooshiravan Saboori; Christoforos N. Hadjicostis
In this paper, we follow a state-based approach to extend the notion of opacity in computer security to discrete event systems. A system is (S, P)-opaque if the evolution of its true state through a set of secret states S remains opaque to an observer who is observing activity in the system through the projection map P. In other words, based on observations through the mapping P, the observer is never certain that the current state of the system is within the set of secret states S. We also introduce the stronger notion of (S, P, K)-opacity, which requires opacity to remain true for K observations following the departure of the system's state from the set S. We show that the state-based definition of opacity enables the use of observer constructions for verification purposes. In particular, the verification of (S, P, K)-opacity is accomplished via an observer with K-delay, which is constructed to capture state estimates with K-delay. These are the estimates of the state of the system K observations ago that are consistent with all observations (including the last K observations). We also analyze the properties and complexity of the observer with K-delay.
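As a rough illustration of the estimator idea behind this kind of verification (a minimal sketch under simplifying assumptions, not the paper's K-delay observer construction; the transition-table encoding and all names below are hypothetical), one can enumerate the state trajectories of a partially observed automaton that are consistent with an observation sequence and read off the estimate of the state K observations ago:

```python
# Minimal sketch (not the paper's observer construction): enumerate the state
# trajectories of a partially observed NFA that are consistent with an
# observed word, sampled at observation instants, and read off the K-delayed
# state estimate. 'delta' maps (state, event) -> set of successors; events in
# 'unobs' are erased by the projection P. All names here are hypothetical.

def uo_closure(delta, unobs, states):
    """States reachable from 'states' via unobservable events only."""
    stack, seen = list(states), set(states)
    while stack:
        q = stack.pop()
        for (p, e), succs in delta.items():
            if p == q and e in unobs:
                for r in succs:
                    if r not in seen:
                        seen.add(r)
                        stack.append(r)
    return seen

def consistent_runs(delta, unobs, initial, obs):
    """All state sequences, sampled at observation instants, that are
    consistent with the observed word 'obs'."""
    runs = [[q] for q in uo_closure(delta, unobs, initial)]
    for symbol in obs:
        runs = [run + [r]
                for run in runs
                for r in uo_closure(delta, unobs,
                                    delta.get((run[-1], symbol), set()))]
    return runs

def violates_k_step_opacity(runs, K, secret):
    """True if the observer is certain that the state K observations ago
    was in the secret set, i.e., (S, P, K)-opacity fails for this word."""
    estimate = {run[-1 - K] for run in runs if len(run) > K}
    return bool(estimate) and estimate <= set(secret)
```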
international workshop on discrete event systems | 2008
Anooshiravan Saboori; Christoforos N. Hadjicostis
Motivated by security applications where the initial state of a system needs to be kept secret (opaque) to outside observers (intruders), we formulate, analyze and verify the notion of initial-state opacity in discrete event systems. Specifically, a system is initial-state opaque if the membership of its true initial state in a set of secret states remains opaque to an intruder who is modeled as an observer of the system activity through some projection map. In other words, based on observations through this map, the observer is never certain that the initial state of the system is within the set of secret states. To verify initial-state opacity, we address the initial-state estimation problem in discrete event systems via the construction of an initial-state estimator. This estimator captures estimates of the initial state of the system which are consistent with all observations obtained so far. We also analyze the properties and complexity of the initial-state estimator.
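The flavor of initial-state estimation can be conveyed by a small sketch (illustrative only, assuming, unlike the paper, that every event is observable; the encoding and names are hypothetical): track which initial states are still consistent with the observations seen so far.

```python
# Minimal sketch of initial-state estimation (illustrative assumption: all
# events are observable, whereas the paper handles partial observation).

def initial_state_estimates(delta, states, obs):
    """delta: dict (state, event) -> set of successor states.
    Returns, after each observed prefix, the set of initial states that
    are still consistent with the observations."""
    # pairs (i, q): the run could have started in i and currently be in q
    pairs = {(q, q) for q in states}
    history = [{i for i, _ in pairs}]
    for e in obs:
        pairs = {(i, r) for (i, q) in pairs for r in delta.get((q, e), set())}
        history.append({i for i, _ in pairs})
    return history

def violates_initial_state_opacity(history, secret):
    """True if, after some prefix, the intruder is certain the initial
    state was secret (nonempty estimate contained in the secret set)."""
    return any(est and est <= set(secret) for est in history)
```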
IEEE Transactions on Automation Science and Engineering | 2011
Anooshiravan Saboori; Christoforos N. Hadjicostis
In this paper, we analyze the verification of K-step opacity in discrete event systems that are modeled as (possibly non-deterministic) finite automata with partial observation on their transitions. A system is K-step opaque if the entrance of the system state within the last K observations to a set of secret states remains opaque to an intruder who has complete knowledge of the system model and observes system activity through some projection map. We establish that the verification of K-step opacity is NP-hard. We also investigate the role of delay K in K-step opacity and show that there exists a delay K* such that K-step opacity and K′-step opacity become equivalent for K′ > K ≥ K*.
Systems & Control Letters | 2006
Anooshiravan Saboori; Shahin Hashtrudi Zad
In this paper, we extend previous results on robust nonblocking supervisory control of discrete-event systems with full observation to the case of control under partial observation, where only a subset of events are observable and can be monitored. The exact model of the plant is not known but is assumed to be among a finite set of possible models. For each plant model a legal marked behavior is assumed given. We characterize the entire set of solutions of the robust control problem and obtain a set of necessary and sufficient conditions for the existence of a solution to the robust control problem.
IEEE Transactions on Automatic Control | 2012
Anooshiravan Saboori; Christoforos N. Hadjicostis
State-based notions of opacity, such as initial-state opacity and infinite-step opacity, emerge as key properties in numerous security applications of discrete event systems. We consider systems that are modeled as partially observed nondeterministic finite automata and tackle the problem of constructing a minimally restrictive opacity-enforcing supervisor (MOES), which limits the system's behavior within some pre-specified legal behavior while enforcing initial-state opacity or infinite-step opacity requirements. We characterize the solution to MOES, under some mild assumptions, in terms of the supremal element of certain controllable, normal, and opaque languages. We also show that this supremal element always exists and that it can be implemented using state estimators. The result is a supervisor that achieves conformance to the pre-specified legal behavior while enforcing initial-state opacity by disabling, at any given time, a subset of the controllable system events, in a way that minimally restricts the range of allowable system behavior. Although infinite-step opacity cannot be easily translated to language-based opacity, we show that, by using a finite bank of supervisors, the aforementioned approach can be extended to enforce infinite-step opacity in a minimally restrictive way.
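The enforcement idea of disabling controllable events based on a state estimate can be illustrated very roughly as follows. The sketch below is a naive, greedy illustration for the simpler current-state flavor of opacity; it is not the paper's supremal-language construction, it does not guarantee minimal restrictiveness, and all names (and the assumption of fully observable events) are hypothetical.

```python
# Naive greedy sketch of estimate-based opacity enforcement (illustrative
# only; the paper instead computes the supremal controllable, normal and
# opaque sublanguage, which is minimally restrictive).

def allowed_events(delta, estimate, events, controllable, secret):
    """Return the events the supervisor enables from the current estimate.
    'estimate' is the set of states consistent with observations so far."""
    enabled = set()
    for e in events:
        next_est = {r for q in estimate for r in delta.get((q, e), set())}
        if not next_est:
            continue
        reveals = next_est <= set(secret)   # intruder would become certain
        if reveals and e in controllable:
            continue                        # disable to preserve opacity
        enabled.add(e)
    return enabled
```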
IEEE Transactions on Automatic Control | 2012
Anooshiravan Saboori; Christoforos N. Hadjicostis
We describe and analyze the complexity of verifying the notion of infinite-step opacity in systems that are modeled as non-deterministic finite automata with partial observation on their transitions. Specifically, a system is infinite-step opaque if the entrance of the system state, at any particular instant, to a set of secret states remains opaque (uncertain), for the length of the system operation, to an intruder who observes system activity through some projection map. Infinite-step opacity can be used to characterize the security requirements in many applications, including encryption using pseudo-random generators, coverage properties in sensor networks, and anonymity requirements in protocols for web transactions. We show that infinite-step opacity can be verified via the construction of a set of appropriate initial state estimators and provide illustrative examples. We also establish that the verification of infinite-step opacity is a PSPACE-hard problem.
international conference on distributed computing systems | 2008
Anooshiravan Saboori; Guofei Jiang; Haifeng Chen
Distributed systems usually have many configurable parameters, such as those included in common configuration files, and their performance depends in part on these settings. While operators may choose default settings or manually tune parameters based on experience and intuition, the resulting settings may not be optimal for the specific services running on the distributed system. In this paper, we formulate the problem of automatically tuning configurations as a black-box optimization problem. This problem is challenging because the joint parameter search space is huge and no explicit relationship between performance and configuration is available. We propose to use a well-known evolutionary algorithm, covariance matrix adaptation (CMA), to automatically tune system parameters. We compare the CMA algorithm to an existing technique called smart hill climbing (SHC) and demonstrate that CMA outperforms SHC both on synthetic data and in a real system.
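The ask/tell loop that such black-box tuning relies on can be sketched with the open-source pycma implementation of CMA-ES (this is not the authors' system; the two-parameter objective below is a hypothetical stand-in for benchmarking the distributed system under a candidate configuration).

```python
# Minimal sketch of black-box configuration tuning with CMA-ES, using the
# open-source pycma package (pip install cma). The objective is a placeholder:
# in practice it would apply the candidate configuration (e.g., thread pool
# size, cache size) to the system, run a workload, and return a latency.

import cma

def measured_latency(config):
    # Hypothetical stand-in for a real benchmark run.
    x, y = config
    return (x - 3.0) ** 2 + (y - 0.5) ** 2

es = cma.CMAEvolutionStrategy(x0=[1.0, 1.0], sigma0=0.5)
while not es.stop():
    candidates = es.ask()                              # sample configurations
    es.tell(candidates, [measured_latency(c) for c in candidates])
print("best configuration found:", es.result.xbest)
```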
IEEE Transactions on Automatic Control | 2014
Anooshiravan Saboori; Christoforos N. Hadjicostis
A system is said to be current-state opaque if the entrance of the system state to a set of secret states remains opaque (uncertain) to an intruder, at least until the system leaves the set of secret states. This notion of opacity has been studied in nondeterministic finite automata settings (where the intruder observes a subset of events, for example, via some natural projection mapping) and has been shown to be useful in characterizing security requirements in many applications (including encryption using pseudorandom generators and coverage properties in sensor networks). One limitation of the majority of existing analysis is that it fails to provide a quantifiable measure of opacity for a given system; instead, it simply provides a binary characterization of the system (being opaque or not opaque). In this paper, we address this limitation by extending current-state opacity formulations to systems that can be modeled as probabilistic finite automata under partial observation. We introduce three notions of opacity, namely: 1) step-based almost current-state opacity; 2) almost current-state opacity; and 3) probabilistic current-state opacity, all of which can be used to provide a measure of a given system's opacity. We also propose verification methods for these probabilistic notions of opacity and characterize their corresponding computational complexities.
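The probabilistic flavor of these notions can be conveyed with a small computation (a minimal sketch under simplifying assumptions, fully observable events and a finite horizon, and not the paper's verification method; all names are hypothetical): propagate, per observation word, the probability mass over states and accumulate the mass of words after which the state estimate lies entirely inside the secret set.

```python
# Minimal sketch: probability of revealing the secret within a finite horizon
# for a probabilistic automaton with fully observable events (illustrative
# assumptions; not the paper's verification procedure).

def violation_probability(trans, init_dist, secret, events, horizon):
    """trans[(state, event)] = {next_state: probability}; from every state the
    probabilities over all events and successors sum to one.
    init_dist: dict state -> initial probability."""
    frontier = [dict(init_dist)]   # one sub-distribution per observation word
    revealed = 0.0
    for _ in range(horizon):
        new_frontier = []
        for dist in frontier:
            for e in events:
                nxt = {}
                for q, p in dist.items():
                    for r, pr in trans.get((q, e), {}).items():
                        nxt[r] = nxt.get(r, 0.0) + p * pr
                if not nxt:
                    continue
                if set(nxt) <= set(secret):
                    revealed += sum(nxt.values())   # this word reveals the secret
                else:
                    new_frontier.append(nxt)
        frontier = new_frontier
    return revealed
```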
conference on decision and control | 2008
Anooshiravan Saboori; Christoforos N. Hadjicostis
Initial-state opacity emerges as a key property in numerous security applications of discrete event systems, including key-stream generators for cryptographic protocols. Specifically, a system is initial-state opaque if the membership of its true initial state in a set of secret states remains uncertain (opaque) to an outside intruder who observes system activity through a given projection map. In this paper, we consider the problem of constructing a minimally restrictive opacity-enforcing supervisor (MOES) which limits the system's behavior within some pre-specified legal behavior while enforcing the initial-state opacity requirement. To tackle this problem, we extend the state-based definition of initial-state opacity to languages and characterize the solution to MOES in terms of the supremal element of certain controllable, observable and opaque languages. We also derive conditions under which this supremal element exists and show how the initial-state estimator, which was introduced in our earlier work for verifying initial-state opacity, can be used to implement the solution to MOES.
conference on decision and control | 2010
Anooshiravan Saboori; Christoforos N. Hadjicostis
Motivated by security and privacy considerations in applications of discrete event systems, various notions of opacity have been introduced. Specifically, a system is said to be current-state opaque if the entrance of the system state to a set of secret states remains opaque (uncertain) to an intruder — at least until the system leaves the set of secret states. This notion, which has been studied in non-deterministic finite automaton settings where the intruder observes a subset of events, has been shown to be useful in characterizing security requirements in many applications (including encryption using pseudo-random generators and trajectory coverage of a mobile agent in sensor networks). One limitation of these existing approaches is that they fail to provide a quantifiable measure for characterizing the degree of opacity of a given system. In this paper, we partially address this limitation by extending this framework to systems that can be modeled as probabilistic finite automata, characterizing in the process the probability of observing a violation of current-state opacity. We introduce the notion of step-based almost current-state opacity which provides a measure of opacity for a given system. We also propose a verification method for this probabilistic notion of opacity and characterize its computational complexity.