Reza Hajisheykhi
Michigan State University
Publications
Featured research published by Reza Hajisheykhi.
Theoretical Computer Science | 2013
Ali Ebnenasir; Reza Hajisheykhi; Sandeep S. Kulkarni
Due to their increasing complexity, today's SoC (system-on-chip) systems are subject to a variety of faults (e.g., single-event upsets, component crashes), making fault tolerance a highly important property of such systems. However, designing fault tolerance is a complex task, in part due to the large scale of integration of SoC systems and the different levels of abstraction provided by modern system design languages such as SystemC. Most existing methods enable fault injection and impact analysis as a means for increasing design dependability; nonetheless, they provide little support for designing fault tolerance. To facilitate the design of fault tolerance in SoC systems, this paper proposes an approach for designing fault-tolerant inter-component communication protocols in SystemC transaction-level modeling (TLM) programs. The proposed method includes four main steps, namely model extraction, fault modeling, addition of fault tolerance, and refinement of fault tolerance to SystemC code. We demonstrate the proposed approach using a simple SystemC transaction-level program that is subject to communication faults. Moreover, we illustrate how fault tolerance can be added to SystemC programs that use the base protocol of the TLM interoperability layer, and how fault tolerance functionality can be partitioned between software and hardware components. Finally, we put forward a roadmap for future research at the intersection of fault tolerance and hardware/software co-design.
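As an illustration only (the paper works with SystemC TLM code, not Python), here is a minimal sketch of the kind of communication fault and added tolerance the approach targets: a toy inter-component channel that drops messages, wrapped by a retransmission layer. The channel class, loss probability, and retry budget are hypothetical.

```python
import random

class LossyChannel:
    """Toy inter-component channel that drops messages with probability p_loss.
    This stands in for the communication-fault model; it is not SystemC TLM code."""
    def __init__(self, p_loss=0.3, seed=42):
        self.rng = random.Random(seed)
        self.p_loss = p_loss

    def send(self, payload):
        # A message-loss fault: the payload silently disappears.
        if self.rng.random() < self.p_loss:
            return None
        return payload

def reliable_send(channel, payload, max_retries=10):
    """Added fault tolerance: retransmit until the (simulated) receiver gets the payload."""
    for attempt in range(1, max_retries + 1):
        delivered = channel.send(payload)
        if delivered is not None:          # the receiver would acknowledge here
            return attempt                 # number of attempts needed
    raise RuntimeError("payload not delivered within retry budget")

if __name__ == "__main__":
    ch = LossyChannel(p_loss=0.3)
    attempts = [reliable_send(ch, f"txn-{i}") for i in range(5)]
    print("attempts per transaction:", attempts)
```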
NASA Formal Methods Symposium | 2015
Reza Hajisheykhi; Ali Ebnenasir; Sandeep S. Kulkarni
We present the tool UFIT (Uppaal Fault Injector for Timed automata). UFIT models five types of faults, namely message loss, transient, Byzantine, stuck-at, and fail-stop faults. Given a fault-free timed automata model and a selected fault type, UFIT models the faults and automatically generates the fault-affected timed automata model. As a result, the designer can analyze the behavior of the model in the presence of faults. Moreover, since several tools extract timed automata models from higher-level programs, the designer can use UFIT to inject faults into the extracted models.
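UFIT transforms UPPAAL timed automata; the following Python sketch only illustrates the underlying idea of mechanically deriving a fault-affected model from a fault-free one, here for fail-stop and stuck-at faults. The dictionary representation and the shape of the injected edges are assumptions, not UFIT's actual transformation rules.

```python
from copy import deepcopy

# A fault-free automaton as a plain data structure:
# locations, edges (source, guard, action, target), and an initial location.
sender = {
    "locations": ["idle", "sending", "done"],
    "edges": [
        ("idle", "x >= 1", "send!", "sending"),
        ("sending", "ack?", "reset x", "done"),
    ],
    "initial": "idle",
}

def inject_fail_stop(automaton):
    """Return a fault-affected copy: every original location gains an edge to a new
    'crashed' location from which no further edges exist (fail-stop fault)."""
    faulty = deepcopy(automaton)
    faulty["locations"].append("crashed")
    for loc in automaton["locations"]:
        faulty["edges"].append((loc, "true", "fault_fail_stop", "crashed"))
    return faulty

def inject_stuck_at(automaton, location):
    """Return a copy in which 'location' gains a self-loop that discards the
    enabled action (a stuck-at fault on that control state)."""
    faulty = deepcopy(automaton)
    faulty["edges"].append((location, "true", "fault_stuck_at", location))
    return faulty

if __name__ == "__main__":
    print(len(inject_fail_stop(sender)["edges"]), "edges after fail-stop injection")
    print(len(inject_stuck_at(sender, "sending")["edges"]), "edges after stuck-at injection")
```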
Formal Methods | 2014
Borzoo Bonakdarpour; Reza Hajisheykhi; Sandeep S. Kulkarni
In this paper, we introduce a technique for automatically repairing bugs in authentication protocols. Although such bugs can be identified through sophisticated testing or verification methods, the state of the art falls short in fixing bugs in security protocols in an automated fashion. Our method takes as input a protocol and a logical property that the protocol does not satisfy, and generates as output another protocol that satisfies the property. We require that the generated protocol refine the original protocol in cases where the bug is not observed; i.e., repairing a protocol should not change its existing healthy behavior. We use epistemic logic to specify and reason about authentication properties of protocols. We demonstrate the application of our method by repairing the three-step Needham-Schroeder protocol. To our knowledge, this is the first application of epistemic logic to the automated repair of security protocols.
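A hedged, message-level sketch of what a repair of the three-step Needham-Schroeder public-key protocol can look like: the well-known fix of naming the responder in the second message, together with a simplified rendering of the paper's refinement requirement. This is not the output of the authors' epistemic-logic-based tool; the tuple encoding and the check are illustrative only.

```python
# Three-step Needham-Schroeder public-key exchange, each message encoded as
# (sender, receiver, encryption_key, payload fields).  Na/Nb are nonces.
ns_original = [
    ("A", "B", "pk(B)", ("Na", "A")),
    ("B", "A", "pk(A)", ("Na", "Nb")),     # the responder's identity is missing
    ("A", "B", "pk(B)", ("Nb",)),
]

# A repaired variant in the spirit of Lowe's fix: message 2 also names B,
# so A can detect an intruder relaying B's reply from another session.
ns_repaired = [
    ("A", "B", "pk(B)", ("Na", "A")),
    ("B", "A", "pk(A)", ("Na", "Nb", "B")),
    ("A", "B", "pk(B)", ("Nb",)),
]

def refines_original(original, repaired):
    """Simplified version of the refinement requirement: in healthy runs the
    repaired protocol keeps the original structure and only adds fields."""
    return (len(original) == len(repaired) and
            all(o[:3] == r[:3] and set(o[3]) <= set(r[3])
                for o, r in zip(original, repaired)))

print("repaired protocol refines the original:", refines_original(ns_original, ns_repaired))
```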
Network on Chip Architectures | 2013
Reza Hajisheykhi; Ali Ebnenasir; Sandeep S. Kulkarni
Since SoC (System on Chip) and NoC (Network on Chip) systems are becoming more complex every day, they are subject to different types of faults, including timing faults. Timing is of significant importance in NoC systems, yet their fault-affected models have not been studied extensively. In this paper, we present a method for modeling and analyzing timing faults in SystemC Transaction Level Modeling (TLM) programs. The proposed method includes three steps, namely timed model extraction, fault modeling, and timed model checking. We use UPPAAL timed automata to formally model the SystemC TLM programs and monitor how the models behave in the presence of timing faults. We analyze our method using a case study that utilizes the loosely-timed coding style, in which the dependency between timing and data is loose.
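A minimal Python sketch, not the UPPAAL models used in the paper, of what a timing fault means for a loosely-timed transaction: extra latency is injected on the return path and checked against a hypothetical deadline. All delay values are assumptions.

```python
def run_transaction(base_delay_ns, timing_fault_ns=0):
    """Simulate one loosely-timed read transaction: the initiator annotates the
    call with a delay, and a timing fault adds extra latency on the return path."""
    forward_path = base_delay_ns            # initiator -> target
    return_path = base_delay_ns + timing_fault_ns
    return forward_path + return_path

DEADLINE_NS = 50   # hypothetical response deadline checked by the monitor

for fault in (0, 10, 40):
    latency = run_transaction(base_delay_ns=20, timing_fault_ns=fault)
    status = "ok" if latency <= DEADLINE_NS else "DEADLINE VIOLATED"
    print(f"timing fault = {fault:2d} ns -> total latency = {latency} ns ({status})")
```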
IEEE Transactions on Computers | 2017
Reza Hajisheykhi; Mohammad Roohitavaf; Sandeep S. Kulkarni
We focus on protocols for auditable restoration of distributed systems. The need for such protocols arises due to conflicting requirements (e.g., access to the system should be restricted, but emergency access should be provided). One can design such systems with a tamper-detection approach (based on the intuition of "in case of emergency, break glass"). However, in a distributed system, such tampering, denoted as an auditable event, is initially visible to only a single node. This is unacceptable, since the actions processes take in these situations can differ from those in normal mode. Moreover, the auditable event eventually needs to be cleared so that the system resumes normal operation. With this motivation, we present two protocols for auditable restoration, where any process can potentially identify an auditable event. The first protocol has an unbounded state space, while the second uses a bounded state space that does not grow with the length of the computation. In both protocols, whenever a new auditable event occurs, the system must reach an auditable state in which every process is aware of the auditable event. Only after the system reaches an auditable state can it begin the restoration operation. Although any process can observe an auditable event, we require that only authorized processes can begin the task of restoration, and only when the system is in an auditable state. Our protocols are self-stabilizing, can effectively handle the case where faults or auditable events occur during restoration, and can be used to provide auditable restoration to other distributed protocols.
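A simplified, centralized-view Python sketch of the ordering these protocols enforce (it omits self-stabilization, the bounded-state encoding, and message passing): an auditable event must first be propagated to every process, and only then may an authorized process perform restoration. Class and function names are illustrative.

```python
class Process:
    def __init__(self, pid, authorized=False):
        self.pid = pid
        self.authorized = authorized
        self.aware_of_audit = False        # has this process seen the auditable event?

def propagate_auditable_event(processes, origin):
    """Phase 1: the tampering is first visible only at 'origin'; every process
    must become aware of it before restoration may begin."""
    processes[origin].aware_of_audit = True
    for p in processes:                    # stand-in for the propagation rounds
        p.aware_of_audit = True

def try_restore(processes, initiator):
    """Phase 2: only an authorized process may restore, and only once the
    system is in an auditable state (every process is aware)."""
    in_auditable_state = all(p.aware_of_audit for p in processes)
    if not processes[initiator].authorized or not in_auditable_state:
        return False
    for p in processes:                    # clear the event; system resumes normal mode
        p.aware_of_audit = False
    return True

procs = [Process(0, authorized=True), Process(1), Process(2)]
print("restore before propagation:", try_restore(procs, initiator=0))   # False
propagate_auditable_event(procs, origin=2)
print("restore by unauthorized p1:", try_restore(procs, initiator=1))   # False
print("restore by authorized p0:  ", try_restore(procs, initiator=0))   # True
```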
International Conference on Software Engineering | 2014
Reza Hajisheykhi; Ali Ebnenasir; Sandeep S. Kulkarni
Since System on Chip (SoC) systems, which integrate all components of a computer or other electronic system into a single chip, are typically used in critical scenarios, it is desirable to analyze the impact of faults on them. However, fault-impact analysis is difficult at the RTL level due to the high degree of integration of SoC systems and the different levels of abstraction provided by modern system design languages such as SystemC. Thus, modeling faults and analyzing their impact at different levels of abstraction is an important task that brings dependability concerns into the early phases of design. In this paper, we present a method for modeling and analyzing faults in SystemC TLM programs. The proposed method includes three steps, namely timed model extraction, fault modeling, and fault analysis. We use UPPAAL timed automata to formally model the SystemC TLM programs and monitor how the models behave in the presence of faults. We analyze three case studies, two with the loosely-timed coding style and one with the approximately-timed coding style.
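A toy Python sketch, not the extracted UPPAAL models, of the flavor of fault analysis described here: a transient bit-flip fault is injected into a transaction payload and a simple data-integrity property is checked over the run. Payload sizes and the fault location are arbitrary assumptions.

```python
import random

def transfer(payload_words, flip_at=None, flip_bit=0):
    """Move a payload across a toy bus; a transient fault flips one bit of one word."""
    received = list(payload_words)
    if flip_at is not None:
        received[flip_at] ^= (1 << flip_bit)
    return received

def integrity_holds(sent, received):
    """The property being checked in this toy setting: data arrives unchanged."""
    return sent == received

rng = random.Random(1)
sent = [rng.randrange(256) for _ in range(8)]

fault_free = transfer(sent)
faulty = transfer(sent, flip_at=3, flip_bit=5)

print("fault-free run satisfies integrity:", integrity_holds(sent, fault_free))
print("transient-fault run satisfies integrity:", integrity_holds(sent, faulty))
```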
International Conference on Distributed Computing Systems Workshops | 2014
Reza Hajisheykhi; Ali Ebnenasir; Sandeep S. Kulkarni
Since SoC systems are typically used in critical scenarios, it is desirable to analyze the impact of faults on them. However, fault-impact analysis is difficult due to the high degree of integration of SoC systems and the different levels of abstraction provided by modern system design languages such as SystemC. In this paper, we present a method for modeling and analyzing permanent faults in SystemC TLM programs. The proposed method includes three steps, namely timed model extraction, fault modeling, and fault analysis. We use UPPAAL timed automata to formally model the SystemC TLM programs and monitor how the models behave in the presence of faults. A case study is also provided to further explain the proposed approach.
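A small Python sketch of how a permanent fault differs from a transient one in this setting: a stuck component never responds again, so the fault surfaces as exhausted retries at the initiator. The target/initiator abstraction and the retry bound are hypothetical, not part of the paper's SystemC TLM case study.

```python
class Target:
    """Toy memory-mapped target; a permanent stuck-at fault pins it unresponsive,
    and unlike a transient fault it never clears."""
    def __init__(self):
        self.stuck = False
        self.mem = {}

    def read(self, addr):
        if self.stuck:
            return None                    # no response while the fault persists
        return self.mem.get(addr, 0)

def read_with_timeout(target, addr, retries=3):
    """Initiator-side analysis hook: a permanent fault shows up as exhausted retries."""
    for _ in range(retries):
        value = target.read(addr)
        if value is not None:
            return value, "ok"
    return None, "permanent fault suspected"

t = Target()
t.mem[0x10] = 99
print(read_with_timeout(t, 0x10))          # (99, 'ok')
t.stuck = True                              # inject the permanent fault
print(read_with_timeout(t, 0x10))          # (None, 'permanent fault suspected')
```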
Symposium on Reliable Distributed Systems | 2015
Reza Hajisheykhi; Mohammad Roohitavaf; Sandeep S. Kulkarni
We focus on a protocol for auditable restoration of distributed systems. The need for such a protocol arises due to conflicting requirements (e.g., access to the system should be restricted, but emergency access should be provided). One can design such systems with a tamper-detection approach (based on the intuition of "break the glass"). However, in a distributed system, such tampering, denoted as an auditable event, is initially visible to only a single node. This is unacceptable, since the actions processes take in these situations can differ from those in normal mode. Moreover, the auditable event eventually needs to be cleared so that the system resumes normal operation. With this motivation, we present a protocol for auditable restoration, where any process can potentially identify an auditable event. Whenever a new auditable event occurs, the system must reach an auditable state in which every process is aware of the auditable event. Only after the system reaches an auditable state can it begin the restoration operation. Although any process can observe an auditable event, we require that only authorized processes can begin the task of restoration, and only when the system is in an auditable state. Our protocol is self-stabilizing, can effectively handle the case where faults or auditable events occur during restoration, and can be used to provide auditable restoration to other distributed protocols.
Journal of Parallel and Distributed Computing | 2015
Reza Hajisheykhi; Ling Zhu; Mahesh Arumugam; Murat Demirbas; Sandeep S. Kulkarni
We present a new shared memory model, the SF shared memory model. In this model, the actions of each node are partitioned into slow actions and fast actions; by contrast, the traditional shared memory model only includes fast actions. Intuitively, slow actions can utilize slightly stale state information and still execute successfully, whereas fast actions require that the state information they use be up to date. We show that the use of slow actions can substantially improve the performance of programs transformed from the shared memory model to the WAC model, which has been designed for sensor networks. To illustrate this, we use three protocols for problems that need to be solved in sensor networks. We show that under various message loss probabilities, densities, etc., slow actions can improve performance substantially, since they reduce the performance penalty of fast actions under heavy message loss; moreover, the effectiveness of slow actions increases with the probability of message loss.
Highlights: None of the existing computational models considers message loss or collisions in distributed systems. The WAC model considers message loss in distributed systems, but at the cost of performance. Our work is a variation of the shared memory model, namely the SF shared memory model, which can improve performance in the presence of message loss. We present an analytical proof, and evaluations for three protocols, for our SF model.
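A toy simulation, under stated assumptions (a single neighbor, one lossy broadcast per step, a fixed staleness budget), of the distinction the model draws: a fast action is enabled only on state heard in the current step, while a slow action may also use slightly stale state, so its enabledness degrades far less as message loss grows.

```python
import random

def simulate(p_loss, steps=10_000, staleness_budget=3, seed=0):
    """Each step the neighbor broadcasts its state; with probability p_loss the
    broadcast is lost.  A fast action fires only on state received this step;
    a slow action also fires on state at most 'staleness_budget' steps old."""
    rng = random.Random(seed)
    age = None                      # steps since we last heard from the neighbor
    fast_fires = slow_fires = 0
    for _ in range(steps):
        if rng.random() >= p_loss:
            age = 0
        elif age is not None:
            age += 1
        if age == 0:
            fast_fires += 1
        if age is not None and age <= staleness_budget:
            slow_fires += 1
    return fast_fires / steps, slow_fires / steps

for p_loss in (0.1, 0.3, 0.6):
    fast, slow = simulate(p_loss)
    print(f"loss={p_loss:.1f}  fast action enabled {fast:.0%} of steps, "
          f"slow action enabled {slow:.0%} of steps")
```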
Science of Computer Programming | 2017
Reza Hajisheykhi; Ali Ebnenasir; Sandeep S. Kulkarni
We propose the notion of tamper-evident stabilization, which combines stabilization with the concept of tamper evidence, for computing systems. At first glance, these notions are contradictory: stabilization requires that the system functionality eventually be fully restored, whereas tamper evidence requires that the system functionality be permanently degraded in the event of tampering. Tamper-evident stabilization captures the intuition that the system will tolerate perturbation up to a limit; if it is perturbed beyond that limit, it exhibits permanent evidence of tampering, and may provide reduced (possibly no) functionality. We compare tamper-evident stabilization with (conventional) stabilization and with active stabilization, and propose an approach to verify tamper-evident stabilizing programs in polynomial time. We demonstrate tamper-evident stabilization with two examples and argue how approaches for designing stabilization can be used to design tamper-evident stabilization. We also study issues of composition in tamper-evident stabilization. Finally, we point out how tamper-evident stabilization can effectively be used to provide a trade-off between fault prevention and fault tolerance.
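A minimal sketch, assuming a toy one-variable program and an arbitrary perturbation bound, of the behavior tamper-evident stabilization asks for: perturbations within the bound are silently corrected (stabilization), while a perturbation beyond the bound latches permanent evidence of tampering and the program runs in a degraded mode.

```python
LEGITIMATE = 0          # the single legitimate state of this toy program
TAMPER_LIMIT = 3        # perturbations up to this distance are silently corrected

class TamperEvidentCounter:
    def __init__(self):
        self.value = LEGITIMATE
        self.tampered = False           # once set, never cleared (tamper evidence)

    def perturb(self, amount):
        """Model a fault or an adversarial perturbation of the state."""
        self.value += amount
        if abs(self.value - LEGITIMATE) > TAMPER_LIMIT:
            self.tampered = True        # beyond the limit: permanent evidence

    def step(self):
        """One recovery step: converge toward the legitimate state, but only
        while no tampering has been recorded (degraded mode afterwards)."""
        if self.tampered:
            return "degraded"
        if self.value > LEGITIMATE:
            self.value -= 1
        elif self.value < LEGITIMATE:
            self.value += 1
        return "stabilizing" if self.value != LEGITIMATE else "legitimate"

c = TamperEvidentCounter()
c.perturb(2)                            # small perturbation: recoverable
while c.step() != "legitimate":
    pass
print("after small perturbation:", c.value, "tampered:", c.tampered)

c.perturb(10)                           # large perturbation: tamper evidence latches
print("after large perturbation:", c.step(), "tampered:", c.tampered)
```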