Damien Imbs
University of Rennes
Publications
Featured research published by Damien Imbs.
Principles of Distributed Computing | 2009
Damien Imbs; José Ramón González de Mendívil; Michel Raynal
This brief announcement presents a general consistency condition for software transactional memories.
Theoretical Computer Science | 2012
Damien Imbs; Michel Raynal
The aim of a Software Transactional Memory (STM) is to discharge the programmers from the management of synchronization in multiprocess programs that access concurrent objects. To that end, an STM system provides the programmer with the concept of a transaction. The job of the programmer is to design each process the application is made up of as a sequence of transactions. A transaction is a piece of code that accesses concurrent objects, but contains no explicit synchronization statement. It is the job of the underlying STM system to provide the illusion that each transaction appears as being executed atomically. Of course, for efficiency, an STM system has to allow transactions to execute concurrently. Consequently, due to the underlying STM concurrency management, a transaction commits or aborts. This paper first presents a new STM consistency condition, called virtual world consistency. This condition states that no transaction reads object values from an inconsistent global state. It is similar to opacity for the committed transactions but weaker for the aborted transactions. More precisely, it states that (1) the committed transactions can be totally ordered, and (2) the values read by each aborted transaction are consistent with respect to its causal past. Hence, virtual world consistency is weaker than opacity while keeping its spirit. Then, assuming the objects shared by the processes are atomic read/write objects, the paper presents an STM protocol that ensures virtual world consistency (while guaranteeing the invisibility of the read operations). From an operational point of view, this protocol is based on a vector-clock mechanism. Finally, the paper considers the case where the shared objects are regular read/write objects. It also shows how the protocol can easily be weakened while still providing an STM system that satisfies causal consistency, a condition strictly weaker than virtual world consistency.
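As a rough illustration of the vector-clock mechanism mentioned above, the following hypothetical sketch tags every committed value with a vector clock and lets a transaction reject a read whose clock is incomparable with what it has already observed. The class names and the (deliberately conservative) comparability rule are assumptions made for the example; they are not the paper's protocol.

final class VwcReadSketch {
    static final int N = 4;  // number of processes (arbitrary for the example)

    // A committed value together with the vector clock of its writing transaction.
    record Versioned(int value, int[] clock) {}

    static final class Transaction {
        private final int[] view = new int[N];    // causal past of the values read so far

        // Hypothetical validation: accept a value only when its clock is comparable
        // with the current view; incomparable clocks trigger an abort. This rule is
        // intentionally conservative and only conveys the flavour of the approach.
        int read(Versioned v) {
            int[] c = v.clock();
            if (!leq(c, view) && !leq(view, c)) {
                throw new IllegalStateException("abort: read would break causal consistency");
            }
            for (int i = 0; i < N; i++) view[i] = Math.max(view[i], c[i]);
            return v.value();
        }

        private static boolean leq(int[] a, int[] b) {
            for (int i = 0; i < N; i++) if (a[i] > b[i]) return false;
            return true;
        }
    }

    public static void main(String[] args) {
        Transaction t = new Transaction();
        System.out.println(t.read(new Versioned(10, new int[] {1, 0, 0, 0}))); // accepted
        System.out.println(t.read(new Versioned(20, new int[] {1, 2, 0, 0}))); // accepted
        // t.read(new Versioned(30, new int[] {0, 0, 3, 0}));  // would abort: clocks incomparable
    }
}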
International Conference on Principles of Distributed Systems | 2008
Damien Imbs; Michel Raynal
The aim of a software transactional memory (STM) system is to facilitate the delicate problem of low-level concurrency management, i.e. the design of programs made up of processes/threads that concurrently access shared objects. To that end, an STM system allows a programmer to write transactions accessing shared objects, without having to take care of the fact that these objects are concurrently accessed: the programmer is discharged from the delicate problem of concurrency management. Given a transaction, the STM system commits or aborts it. Ideally, it has to be efficient (this is measured by the number of transactions committed per time unit), while ensuring that as few transactions as possible are aborted. From a safety point of view (the one addressed in this paper), an STM system has to ensure that, whatever its fate (commit or abort), each transaction always operates on a consistent state.

STM systems have recently received a lot of attention. Among the proposed solutions, lock-based systems and clock-based systems have been particularly investigated. Their design is mainly efficiency-oriented, the properties they satisfy are not always clearly stated, and few of them are formally proved. This paper presents a lock-based STM system designed from simple basic principles. Its main features are the following: it (1) uses visible reads, (2) does not require the shared memory to manage several versions of each object, (3) uses neither timestamps nor version numbers, (4) satisfies the opacity safety property, (5) aborts a transaction only when it conflicts with some other live transaction (progressiveness property), (6) never aborts a write-only transaction, (7) employs only bounded control variables, (8) has no centralized contention point, and (9) is formally proved correct.
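As an illustration of feature (1), the sketch below shows what a visible read can amount to. All names are invented and the commit/abort policy that would exploit the readers set is omitted, so this is not the paper's algorithm, only the general idea: a reader leaves a trace in shared memory so that writers can detect conflicts without timestamps or version numbers.

import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

final class VisibleReadObject<T> {
    private volatile T value;
    private final ReentrantLock lock = new ReentrantLock();                  // per-object lock
    private final Set<Long> visibleReaders = ConcurrentHashMap.newKeySet();  // ids of live reading transactions

    // A visible read: the reader registers itself in shared memory before reading.
    T read(long txId) {
        visibleReaders.add(txId);
        return value;
    }

    // Called when a transaction commits or aborts, to withdraw its visible read.
    void forget(long txId) {
        visibleReaders.remove(txId);
    }

    // A committing writer installs the new value and learns which live readers it conflicts with.
    Set<Long> write(long txId, T newValue) {
        lock.lock();
        try {
            value = newValue;
            Set<Long> conflicting = new HashSet<>(visibleReaders);
            conflicting.remove(txId);
            return conflicting;  // the STM system would then resolve these conflicts (e.g., abort one side)
        } finally {
            lock.unlock();
        }
    }
}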
Principles of Distributed Computing | 2010
Damien Imbs; Michel Raynal
The Borowsky-Gafni (BG) simulation algorithm is a powerful reduction algorithm that shows that t-resilience of decision tasks can be fully characterized in terms of wait-freedom. Said in another way, the BG simulation shows that the crucial parameter is not the number n of processes but the upper bound t on the number of processes that are allowed to crash. The BG algorithm considers colorless decision tasks in the base read/write shared memory model. (Colorless means that if a process decides a value, any other process is allowed to decide the very same value.)

This paper considers system models made up of n processes prone to up to t crashes, and where the processes communicate by accessing read/write atomic registers (as assumed by the BG) and (differently from the BG) objects with consensus number x accessible by at most x processes (with x ≤ t < n). Let ASM(n,t,x) denote such a system model. While the BG simulation has shown that the models ASM(n,t,1) and ASM(t+1,t,1) are equivalent, this paper focuses on the pair (t,x) of parameters of a system model. Its main result is the following: the system models ASM(n1,t1,x1) and ASM(n2,t2,x2) have the same computational power for colorless decision tasks if and only if ⌊t1/x1⌋ = ⌊t2/x2⌋. As can be seen, this contribution complements and extends the BG simulation. It shows that consensus numbers have a multiplicative power with respect to failures, namely, the system models ASM(n,t',x) and ASM(n,t,1) are equivalent for colorless decision tasks iff (t × x) ≤ t' ≤ (t × x) + (x − 1).
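A quick numeric check of the criterion (values chosen arbitrarily): ⌊5/2⌋ = ⌊4/2⌋ = 2, so ASM(n1,5,2) and ASM(n2,4,2) have the same power for colorless tasks, whereas ⌊6/2⌋ = 3, so ASM(n,6,2) does not. The tiny helper below (hypothetical code, relying on the fact that integer division is floor division for nonnegative operands) encodes the test.

final class MultiplicativePower {
    // Equivalence criterion from the main theorem: floor(t1/x1) == floor(t2/x2).
    static boolean samePower(int t1, int x1, int t2, int x2) {
        return t1 / x1 == t2 / x2;
    }

    public static void main(String[] args) {
        System.out.println(samePower(5, 2, 4, 2));  // true : floor(5/2) = floor(4/2) = 2
        System.out.println(samePower(5, 2, 6, 2));  // false: floor(5/2) = 2, floor(6/2) = 3
    }
}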
International Conference on Structural Information and Communication Complexity | 2009
Damien Imbs; Michel Raynal
The aim of a Software Transactional Memory (STM) is to discharge the programmers from the management of synchronization in multiprocess programs that access concurrent objects. To that end, an STM system provides the programmer with the concept of a transaction. The job of the programmer is to design each process the application is made up of as a sequence of transactions. A transaction is a piece of code that accesses concurrent objects, but contains no explicit synchronization statement. It is the job of the underlying STM system to provide the illusion that each transaction appears as being executed atomically. Of course, for efficiency, an STM system has to allow transactions to execute concurrently. Consequently, due to the underlying STM concurrency management, a transaction commits or aborts.

This paper first presents a new STM consistency condition, called virtual world consistency. This condition states that no transaction reads object values from an inconsistent global state. It is similar to opacity for the committed transactions but weaker for the aborted transactions. More precisely, it states that (1) the committed transactions can be totally ordered, and (2) the values read by each aborted transaction are consistent with respect to its causal past only. Hence, virtual world consistency is weaker than opacity while keeping its spirit. Then, assuming the objects shared by the processes are atomic read/write objects, the paper presents an STM protocol that ensures virtual world consistency (while guaranteeing the invisibility of the read operations). From an operational point of view, this protocol is based on a vector-clock mechanism. Finally, the paper considers the case where the shared objects are regular read/write objects. It also shows how the protocol can easily be weakened while still providing an STM system that satisfies causal consistency, a condition strictly weaker than virtual world consistency.
International Symposium on Stabilization, Safety, and Security of Distributed Systems | 2009
Damien Imbs; Michel Raynal
The Borowsky-Gafni (BG) simulation algorithm is a powerful tool that allows a set of t+1 asynchronous sequential processes to wait-free simulate (i.e., despite the crash of up to t of them) a large number n of processes under the assumption that at most t of these processes fail (i.e., the simulated algorithm is assumed to be t-resilient). The BG simulation has been used to prove solvability and unsolvability results for crash-prone asynchronous shared memory systems.

In its initial form, the BG simulation applies only to colorless decision tasks, i.e., tasks in which nothing prevents processes from deciding the same value (e.g., consensus or k-set agreement tasks). Said in another way, it does not apply to decision problems such as renaming where no two processes are allowed to decide the same new name. Very recently (STOC 2009), Eli Gafni has presented an extended BG simulation algorithm (GeBG) that generalizes the basic BG algorithm by extending it to colored decision tasks such as renaming. His algorithm is based on a sequence of sub-protocols where a sub-protocol is either the base agreement protocol that is at the core of the BG simulation, or a commit-adopt protocol.

This paper presents the core of an extended BG simulation algorithm that is particularly simple. This algorithm is based on two underlying objects: the base agreement object used in the BG simulation (as does GeBG), and (differently from GeBG) a new simple object that we call an arbiter. More precisely, while (1) each of the n simulated processes is simulated by each simulator, (2) each of the first t+1 simulated processes is associated with a predetermined simulator that we call its owner. The arbiter object is used to ensure that the permanent blocking (crash) of any of these t+1 simulated processes can only be due to the crash of its owner simulator. After being presented in a modular way, the proposed extended BG simulation algorithm is proved correct.
International Symposium on Distributed Computing | 2009
Damien Imbs; Michel Raynal
An atomic snapshot object is an object that can be concurrently accessed by asynchronous processes prone to crash. It is made of m components (base atomic registers) and is defined by two operations: an update operation that allows a process to atomically assign a new value to a component, and a snapshot operation that atomically reads and returns the values of all the components. To cope with the net effect of concurrency, asynchrony and failures, the algorithm implementing the update operation has to help concurrent snapshot operations so that they always terminate.

This paper is on partial snapshot objects. Such an object provides a snapshot operation that can take any subset of the components as input parameter, and atomically reads and returns the values of this subset of components. The paper has two contributions. The first is the introduction of two properties for partial snapshot object algorithms, called help-locality and freshness. Help-locality requires that an update operation helps only the concurrent partial snapshot operations that read the component it writes. When an update of a component r helps a partial snapshot, freshness requires that the update provides the partial snapshot with a value of the component r that is at least as recent as the value it writes into that component. (No snapshot algorithm proposed so far satisfies these properties.) The second contribution consists of update and partial snapshot algorithms that are wait-free, linearizable and satisfy the previous efficiency properties. Interestingly, the principle that underlies the proposed algorithms is different from the one used so far, namely, it is based on the "write first, and help later" strategy. An improvement of the previous algorithms is also presented. Based on LL/SC atomic registers (instead of read/write registers), this improvement decreases the number of base registers from O(n^2) to O(n). This shows an interesting tradeoff relating the synchronization power of the base operations and the number of base atomic registers when using the "write first, and help later" strategy.
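To make the object's interface concrete, here is a minimal lock-free sketch of a partial snapshot object based on the classical double-collect technique. Names are invented; the sketch only illustrates the specification (update plus partial snapshot on a subset of components), it is not wait-free, and it does not implement the paper's helping mechanism (help-locality, freshness, "write first, and help later").

import java.util.concurrent.atomic.AtomicReferenceArray;

final class PartialSnapshotSketch {
    // Each component holds an immutable (sequence number, value) pair; every update
    // installs a fresh Cell object, so unchanged references mean "no update occurred".
    private record Cell(long seq, int value) {}

    private final AtomicReferenceArray<Cell> comps;

    PartialSnapshotSketch(int m) {
        comps = new AtomicReferenceArray<>(m);
        for (int i = 0; i < m; i++) comps.set(i, new Cell(0, 0));
    }

    // update: atomically assign a new value to component i.
    void update(int i, int value) {
        while (true) {
            Cell old = comps.get(i);
            if (comps.compareAndSet(i, old, new Cell(old.seq() + 1, value))) return;
        }
    }

    // partialSnapshot: return a consistent view of the requested components only.
    int[] partialSnapshot(int[] indices) {
        while (true) {
            Cell[] first = collect(indices);
            Cell[] second = collect(indices);
            if (same(first, second)) {              // two identical collects => consistent view
                int[] out = new int[indices.length];
                for (int k = 0; k < indices.length; k++) out[k] = first[k].value();
                return out;                         // not wait-free: may loop under contention
            }
        }
    }

    private Cell[] collect(int[] indices) {
        Cell[] c = new Cell[indices.length];
        for (int k = 0; k < indices.length; k++) c[k] = comps.get(indices[k]);
        return c;
    }

    private static boolean same(Cell[] a, Cell[] b) {
        for (int k = 0; k < a.length; k++) if (a[k] != b[k]) return false;  // reference comparison, see above
        return true;
    }
}

With this sketch, partialSnapshot(new int[] {0, 2}) returns values that were simultaneously present in components 0 and 2 at some point during the call, which is exactly the behaviour the partial snapshot operation is required to provide.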
International Conference on Distributed Computing and Networking | 2009
Damien Imbs; Michel Raynal
The aim of a Software Transactional Memory (STM) is to discharge the programmers from the management of synchronization in multiprocess programs that access concurrent objects. To that end, an STM system provides the programmer with the concept of a transaction: each sequential process is decomposed into transactions, where a transaction encapsulates a piece of code accessing concurrent objects. A transaction contains no explicit synchronization statement and appears as if it has been executed atomically. Due to the underlying concurrency management, a transaction commits or aborts.

Most papers devoted to STM systems mainly address their efficiency. Differently, this paper focuses on an orthogonal issue, namely, the design and the statement of a safety property. The only safety property that is usually considered is a global property involving all the transactions (e.g., conflict-serializability or opacity) that expresses the correctness of the whole execution. Roughly speaking, these consistency properties do not prevent an STM system from aborting all the transactions. The proposed safety property, called obligation, is on each transaction taken individually. It specifies minimal circumstances in which an STM system must commit a transaction T. The paper proposes and investigates such an obligation property. Then, it presents an STM algorithm that implements it. This algorithm, which is based on a logical clock and associates a lock with each shared object, is formally proved correct.
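For readers unfamiliar with the two ingredients named at the end of the abstract, the generic sketch below shows how a global logical clock and one lock per shared object are typically combined at commit time, here for a write-only transaction. All names are invented; this is not the paper's algorithm and it does not capture the obligation property.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

final class ClockAndLocksSketch {
    static final AtomicLong LOGICAL_CLOCK = new AtomicLong();   // global logical clock

    static final class TObject {
        private static final AtomicLong IDS = new AtomicLong();
        final long id = IDS.incrementAndGet();     // canonical order used when locking several objects
        final ReentrantLock lock = new ReentrantLock();
        volatile long date;                        // logical date of the last committed write
        volatile int value;
    }

    // Write-only commit: lock the write set in a canonical order (to avoid deadlock),
    // draw a new logical date, install the values, then release the locks.
    static void commitWriteOnly(Map<TObject, Integer> writeSet) {
        List<TObject> order = new ArrayList<>(writeSet.keySet());
        order.sort(Comparator.comparingLong(o -> o.id));
        for (TObject o : order) o.lock.lock();
        try {
            long date = LOGICAL_CLOCK.incrementAndGet();
            for (TObject o : order) {
                o.value = writeSet.get(o);
                o.date = date;
            }
        } finally {
            for (TObject o : order) o.lock.unlock();
        }
    }
}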
Journal of Parallel and Distributed Computing | 2012
Damien Imbs; Michel Raynal
An atomic snapshot object is an object that can be concurrently accessed by asynchronous processes prone to crash. It is made of m components (base atomic registers) and is defined by two operations: an update operation that allows a process to atomically assign a new value to a component, and a snapshot operation that atomically reads and returns the values of all the components. To cope with the net effect of concurrency, asynchrony and failures, the algorithm implementing the update operation has to help concurrent snapshot operations so that they always terminate. This paper is on partial snapshot objects. Such an object provides a snapshot operation that can take any subset of the components as input parameter, and atomically reads and returns the values of this subset of components. The paper has two contributions. The first is the introduction of two properties for partial snapshot object algorithms, called help-locality and freshness. Help-locality requires that an update operation helps only the concurrent partial snapshot operations that read the component it writes. When an update of a component r helps a partial snapshot, freshness requires that the update provides the partial snapshot with a value of the component r that is at least as recent as the value it writes into that component. (No snapshot algorithm proposed so far satisfies these properties.) The second contribution consists of update and partial snapshot algorithms that are wait-free, linearizable and satisfy the previous efficiency properties. Interestingly, the principle that underlies the proposed algorithms is different from the one used so far, namely, it is based on the "write first, and help later" strategy. An improvement of the previous algorithms is also presented. Based on LL/SC atomic registers (instead of read/write registers), this improvement decreases the number of base registers from O(n^2) to O(n). This shows an interesting tradeoff relating the synchronization power of the base operations and the number of base atomic registers when using the "write first, and help later" strategy.
International Conference on Algorithms and Architectures for Parallel Processing | 2011
Tyler Crain; Damien Imbs; Michel Raynal
The aim of a Software Transactional Memory (STM) is to discharge the programmers from the management of synchronization in multiprocess programs that access concurrent objects. To that end, an STM system provides the programmer with the concept of a transaction. The job of the programmer is to design each process the application is made up of as a sequence of transactions. A transaction is a piece of code that accesses concurrent objects, but contains no explicit synchronization statement. It is the job of the underlying STM system to provide the illusion that each transaction appears as being executed atomically. Of course, for efficiency, an STM system has to allow transactions to execute concurrently. Consequently, due to the underlying STM concurrency management, a transaction commits or aborts.

This paper studies the relation between two STM properties (read invisibility and permissiveness) and two consistency conditions for STM systems, namely, opacity and virtual world consistency. Both conditions ensure that any transaction (be it a committed or an aborted transaction) reads values from a consistent global state, a noteworthy property if one wants to prevent abnormal behavior from concurrent transactions that behave correctly when executed alone. A read operation issued by a transaction is invisible if it does not entail shared memory modifications. This is an important property that favors efficiency and privacy. An STM system is permissive (respectively probabilistically permissive) with respect to a consistency condition if it accepts (respectively accepts with positive probability) every history that satisfies the condition. This is a crucial property as a permissive STM system never aborts a transaction for free. The paper first shows that read invisibility, probabilistic permissiveness and opacity are incompatible, which means that there is no probabilistically permissive STM system that implements opacity while ensuring read invisibility. It then shows that read invisibility, probabilistic permissiveness and virtual world consistency are compatible. To that end the paper describes a new STM protocol called IR_VWC_P. This protocol presents additional noteworthy features: it uses only base read/write objects and locks, which are used only at commit time; it satisfies the disjoint access parallelism property; and, in favorable circumstances, the cost of a read operation is O(1).
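The following toy sketch (invented names, unrelated to IR_VWC_P itself) contrasts an invisible read, which leaves no trace in shared memory, with a visible read, which additionally registers the reader in a shared structure.

import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

final class ReadVisibilitySketch {
    volatile int shared;                                        // the shared object
    final Set<Long> readers = ConcurrentHashMap.newKeySet();    // shared structure used by visible reads only

    static final class LocalTx {
        final long id;
        final List<Integer> readSet = new ArrayList<>();        // private to the transaction
        LocalTx(long id) { this.id = id; }
    }

    // Invisible read: bookkeeping stays local, other processes cannot observe the read.
    int invisibleRead(LocalTx tx) {
        int v = shared;
        tx.readSet.add(v);
        return v;
    }

    // Visible read: the read is announced in shared memory before the value is returned.
    int visibleRead(LocalTx tx) {
        readers.add(tx.id);
        int v = shared;
        tx.readSet.add(v);
        return v;
    }
}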