Maciej Skorski
University of Warsaw
Publications
Featured research published by Maciej Skorski.
Theory and Application of Cryptographic Techniques | 2015
Stefan Dziembowski; Sebastian Faust; Maciej Skorski
Physical side-channel leakages are an important threat to cryptographic implementations. One of the most prominent countermeasures against such leakage attacks is the use of a masking scheme. A masking scheme conceals the sensitive information by randomizing intermediate values, thereby making the physical leakage independent of the secret. An important practical leakage model for analyzing the security of a masking scheme is the so-called noisy leakage model of Prouff and Rivain (Eurocrypt’13). Unfortunately, security proofs in the noisy leakage model require a technically involved information-theoretic argument. Very recently, Duc et al. (Eurocrypt’14) showed that security in the probing model of Ishai et al. (Crypto’03) implies security in the noisy leakage model. Unfortunately, the reduction to the probing model is non-tight and requires a rather counter-intuitive growth of the amount of noise, i.e., the Prouff-Rivain bias parameter decreases proportionally to the size of the set \({\mathcal X}\) of the elements that are leaking (e.g., if the leaking elements are bytes, then \(\left| {\mathcal X}\right| = 256\)). The main contribution of our work is to eliminate this non-optimality in the reduction by introducing an alternative leakage model that we call the average probing model. We show a tight reduction between the noisy leakage model and the much simpler average probing model; in fact, we show that these two models are essentially equivalent. We demonstrate the potential of this equivalence with two applications: (a) we show security of the additive masking scheme used in many previous works for a constant bias parameter; (b) we show that the compiler of Ishai et al. (Crypto’03) is secure in the average probing model (assuming a simple leak-free component), which yields security with an optimal bias parameter of the noisy leakage for the ISW construction.
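As an informal illustration of the masking idea described above (a minimal sketch only, not the construction analyzed in the paper; function names and parameters are chosen for the example), a sensitive byte can be split into random XOR shares so that any proper subset of the shares is uniformly distributed and hence independent of the secret:

```python
import secrets

def mask_xor(secret_byte: int, order: int) -> list[int]:
    """Split a sensitive byte into order+1 XOR shares; any `order` of them
    are jointly uniform, so probing a few intermediate values reveals nothing."""
    random_shares = [secrets.randbelow(256) for _ in range(order)]
    masked = secret_byte
    for r in random_shares:
        masked ^= r
    return random_shares + [masked]

def recombine(shares: list[int]) -> int:
    """XOR all shares together to recover the original byte."""
    out = 0
    for s in shares:
        out ^= s
    return out

if __name__ == "__main__":
    key_byte = 0x2A
    shares = mask_xor(key_byte, order=3)
    assert recombine(shares) == key_byte
```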
International Conference on Information Theoretic Security | 2015
Maciej Skorski
Metric entropy is a computational variant of entropy, often used as a convenient substitute for HILL entropy, which is the standard notion of entropy in many cryptographic applications, such as leakage-resilient cryptography, deterministic encryption or memory delegation. In this paper we develop a general method to characterize metric-type computational variants of entropy, in a way that depends only on properties of a chosen class of test functions (adversaries). As a consequence, we obtain an elegant geometric interpretation of metric entropy. We apply these characterizations to simplify and modularize proofs of some important results, in particular: (a) the computational dense model theorem (FOCS’08), (b) a variant of the Leftover Hash Lemma with improvements for square-friendly applications (CRYPTO’11), and (c) the equivalence between unpredictability entropy and HILL entropy over small domains (STOC’12). We also give a new tight transformation between HILL and metric pseudoentropy, which implies the dense model theorem with the best possible parameters.
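For reference, the standard definitions of the two notions compared above (stated here as background, not quoted from the abstract) differ only in the order of quantifiers: HILL entropy asks for a single high-min-entropy variable fooling all distinguishers of a given size, while metric entropy allows the variable to depend on the distinguisher:

\[
\begin{aligned}
H^{\mathrm{HILL}}_{\epsilon,s}(X) \ge k \;&\iff\; \exists\, Y,\ H_\infty(Y)\ge k,\ \forall\, D \text{ of size } s:\ \bigl|\mathbb{E}\,D(X)-\mathbb{E}\,D(Y)\bigr|\le \epsilon, \\
H^{\mathrm{Metric}}_{\epsilon,s}(X) \ge k \;&\iff\; \forall\, D \text{ of size } s,\ \exists\, Y_D,\ H_\infty(Y_D)\ge k:\ \bigl|\mathbb{E}\,D(X)-\mathbb{E}\,D(Y_D)\bigr|\le \epsilon.
\end{aligned}
\]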
International Conference on Information Theoretic Security | 2013
Maciej Skorski
The so-called leakage chain rule is a very important tool used in many security proofs. It gives an upper bound on the entropy loss of a random variable \(X\) when an adversary, having already learned some random variables \(Z_{1},\ldots ,Z_{\ell }\) correlated with \(X\), obtains some further information \(Z_{\ell +1}\) about \(X\). Analogously to the information-theoretic case, one might expect that also for the computational variants of entropy the loss depends only on the actual leakage, i.e. on \(Z_{\ell +1}\). Surprisingly, Krenn et al. have recently shown that for the most commonly used definitions of computational entropy this holds only if the computational quality of the entropy deteriorates exponentially in \(|(Z_{1},\ldots ,Z_{\ell })|\). This means that the current standard definitions of computational entropy do not allow one to fully capture leakage that occurred “in the past”, which severely limits the applicability of this notion.
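For comparison, the information-theoretic expectation referred to above is the standard chain rule for average conditional min-entropy (stated here as background, not quoted from the abstract), in which the loss is bounded by the bit-length of the new leakage only:

\[
\widetilde{H}_\infty\bigl(X \mid Z_{1},\ldots ,Z_{\ell }, Z_{\ell +1}\bigr) \;\ge\; \widetilde{H}_\infty\bigl(X \mid Z_{1},\ldots ,Z_{\ell }\bigr) - \bigl|Z_{\ell +1}\bigr|,
\]

where \(|Z_{\ell +1}|\) denotes the bit-length of \(Z_{\ell +1}\).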
International Conference on Progress in Cryptology | 2015
Krzysztof Pietrzak; Maciej Skorski
Computational notions of entropy, a.k.a. pseudoentropy, have found many applications, including leakage-resilient cryptography, deterministic encryption and memory delegation. The most important tools for arguing about pseudoentropy are chain rules, which quantify by how much (in terms of quantity and quality) the pseudoentropy of a given random variable X decreases when conditioned on some other variable Z (think, for example, of X as a secret key and Z as information leaked by a side channel). In this paper we give a very simple and modular proof of the chain rule for HILL pseudoentropy, improving the best known parameters. Our version allows for increasing the acceptable length of leakage in applications up to a constant factor compared to the best previous bounds. As a contribution of independent interest, we provide a comprehensive study of all known versions of the chain rule, comparing their worst-case strength and limitations.
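Schematically, and only as an informal illustration (the exact quality losses are precisely what the paper improves and compares), a chain rule for HILL pseudoentropy has the shape

\[
H^{\mathrm{HILL}}_{\epsilon',s'}(X \mid Z) \;\ge\; H^{\mathrm{HILL}}_{\epsilon,s}(X) - |Z|,
\]

where \(|Z|\) is the bit-length of the leakage and the quality degrades from \((\epsilon, s)\) to some weaker \((\epsilon', s')\), with the exact degradation depending on the version of the chain rule.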
International Conference on Applications and Techniques in Information Security | 2016
Maciej Skorski
The general approach to evaluating the quality of entropy sources used in true random number generators is to estimate min-entropy, which is based on estimating the frequencies of all possible source outcomes. This method is space-inefficient: for example, for a source producing 30-bit outputs it needs \(30\,\mathrm {Gb}\) of storage to get an error smaller than one bit per sample.
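A minimal sketch of this baseline frequency-counting estimator (an illustration of the approach whose space cost is criticized above, not the paper's construction; names and the toy source are chosen for the example):

```python
from collections import Counter
import math
import random

def plugin_min_entropy(samples):
    """Naive plug-in min-entropy estimate: -log2 of the empirical frequency of
    the most common outcome. It keeps one counter per distinct value seen,
    i.e. up to 2**30 counters for a 30-bit source, which is the storage blow-up."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

if __name__ == "__main__":
    data = [random.getrandbits(8) for _ in range(100_000)]  # toy 8-bit source
    print(f"estimated min-entropy per sample: {plugin_min_entropy(data):.3f} bits")
```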
International Conference on Information Theoretic Security | 2015
Maciej Skorski
Hardcore lemmas are results in complexity theory which state that average-case hardness must have a very hard “kernel”, that is, a subset of instances on which the given problem is extremely hard. They find important applications in hardness amplification. In this paper we revisit the following two fundamental results:
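For context, the classical hardcore lemma of Impagliazzo (stated informally here as background, not quoted from the abstract) says that if every circuit of size \(s\) errs on \(f\) on at least a \(\delta\) fraction of inputs, then there is a hardcore set \(H\) of density roughly \(\delta\) on which \(f\) is almost unpredictable:

\[
\Pr_{x \in H}\bigl[C(x) = f(x)\bigr] \;\le\; \tfrac{1}{2} + \epsilon \quad \text{for every circuit } C \text{ of size } s',
\]

where \(s'\) is smaller than \(s\), and the exact density of \(H\) and the size loss depend on the version of the lemma.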
Theory of Cryptography Conference | 2016
Stefan Dziembowski; Sebastian Faust; Maciej Skorski
During the last 15 years there have been intensive research efforts in constructing cryptographic algorithms resilient to side-channel leakage. The most fundamental part of every such construction is a leakage-resilient encoding scheme. Usually the cryptographic secrets encoded by such schemes are assumed to belong to some finite group \((\mathbb {G},+)\). The most common encoding scheme is the n-out-of-n additive secret sharing: a secret X is encoded as \((X_1,\ldots ,X_n)\) such that \(X_1 + \cdots + X_n = X\). Intuitively, if an adversary receives only a small amount of independent partial information about each \(X_i\), then his information about X should be even smaller, and should decrease (i.e. the noise should amplify) as n grows. Of course, the concrete parameters (the amount of leakage that can be tolerated, and the number of shares needed to achieve a given level of security) depend on the exact model that is used.
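A minimal sketch of the n-out-of-n additive encoding described above, instantiated over the group \((\mathbb{Z}_Q, +)\) for an illustrative modulus (the modulus, function names and parameters are chosen for the example; the scheme itself works over any finite group):

```python
import secrets

Q = 2**31 - 1  # illustrative group order for (Z_Q, +)

def share(secret: int, n: int) -> list[int]:
    """n-out-of-n additive sharing: X -> (X_1, ..., X_n) with
    X_1 + ... + X_n = X (mod Q). Any n-1 shares are jointly uniform
    and thus independent of the secret."""
    parts = [secrets.randbelow(Q) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % Q)
    return parts

def reconstruct(parts: list[int]) -> int:
    """Add all shares in the group to recover the secret."""
    return sum(parts) % Q

if __name__ == "__main__":
    X = 123456789
    shares = share(X, n=8)
    assert reconstruct(shares) == X
```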
Theory of Cryptography Conference | 2016
Krzysztof Pietrzak; Maciej Skorski
Computational notions of entropy have recently found many applications, including leakage-resilient cryptography, deterministic encryption and memory delegation. The two main types of results which make computational notions so useful are (1) chain rules, which quantify by how much the computational entropy of a variable decreases if conditioned on some other variable, and (2) transformations, which quantify to what extent one type of entropy implies another. Such chain rules and transformations typically lose a significant amount in the quality of the entropy, and are the reason why applying these results yields rather weak quantitative security bounds. In this paper we prove, for the first time, lower bounds in this context, showing that existing results for transformations are, unfortunately, basically optimal for non-adaptive black-box reductions (and it is hard to imagine how non-black-box reductions or adaptivity could be useful here). A variable X has k bits of HILL entropy of quality \((\epsilon ,s)\) if there exists a variable Y with k bits of min-entropy which cannot be distinguished from X with advantage \(\epsilon\) by distinguishing circuits of size s.
international conference on information theoretic security | 2016
Maciej Skorski
International Conference on Cryptography and Information Security in the Balkans | 2015
Maciej Skorski