Proofs of Security Theorems from Lecture IV

Semantic security of randomized CTR mode

Theorem 1. Let E be a block cipher over (K, X) that is ε-secure for some negligible ε. Then, for any fixed IV, the cipher E_CTR is δ-semantically secure for

    δ = 2ε + n^2/(2|X|),

where n is the maximal number of blocks per message (i.e., M = X^{≤n}).

Setup

Let δ > 0 be arbitrary. We prove the following: for any semantic-security adversary B which achieves advantage δ against E_CTR, there exists a PRP-adversary A which achieves advantage at least δ/2 − n^2/(4|X|) against the block cipher E. Hence, if all PRP-adversaries achieve advantage at most ε against E (i.e., if E is an ε-secure block cipher), there cannot be an adversary against E_CTR achieving advantage larger than δ = 2ε + n^2/(2|X|).

Constructing A using B

Let B be a semantic-security adversary and let δ be its advantage against E_CTR. We construct a PRP-adversary A as follows (a code sketch of this construction is given after equation (2) below):

1. First, A samples a bit b from {0, 1} and a key k from K, both uniformly at random. This is to simulate a semantic-security attack game for B.

2. Then A queries B for its two messages m_0 and m_1 (we know that they must be of the same length). Let

       m_0 = z_{0,1} || z_{0,2} || ··· || z_{0,n}
       m_1 = z_{1,1} || z_{1,2} || ··· || z_{1,n},

   where the z_{i,j} are the individual message blocks.

3. A then outputs n as the number of rounds it is going to play in the PRP attack game. For each 1 ≤ i ≤ n it sets x_i to [i]_X, i.e. to a block containing the binary representation of the number i.

4. A proceeds to interact with its own challenger: in every round 1 ≤ i ≤ n it sends out the block x_i computed in the previous step and receives the corresponding block y_i.

5. After the final round, A computes the ciphertext

       c = z_{b,1} ⊕ y_1 || z_{b,2} ⊕ y_2 || ··· || z_{b,n} ⊕ y_n

   and gives c to B.

6. Finally, A observes B's output. If the output matches b (i.e., if B wins the game which A simulated for him), A outputs "fake", otherwise it outputs "rand".

A's advantage

We now want to compute A's advantage in its PRP attack game. Let us briefly recall how this game proceeds. The challenger randomly samples m ∈ {rand, fake}.

• If m = rand, the challenger chooses a permutation f ∈ Π(X) uniformly at random.
• If m = fake, the challenger chooses a key k ∈ K uniformly at random and sets f to E(k, ·), where E is the encryption function of the block cipher E.

In both cases, the challenger then interacts with A in the n rounds as follows: in every round i, upon receiving the block x_i, it computes y_i = f(x_i) and sends y_i back to A.

For brevity, let us denote by B_w the event that B wins its simulated game (i.e., correctly guesses A's bit b) and by B_l the event that B loses. Similarly, let us denote by M_f the event that m = fake and by M_r the event that m = rand. The probability that A wins its attack game can be written, using the law of total probability (https://en.wikipedia.org/wiki/Law_of_total_probability), as

    P(A wins) = P(M_f) · P(B_w | M_f) + P(M_r) · P(B_l | M_r)
              = 1/2 · P(B_w | M_f) + 1/2 · P(B_l | M_r).                    (1)

Let us evaluate (or at least bound) each term in (1) individually.

Evaluating P(B_w | M_f)

Observe that if m = fake, then the sequence of blocks y_1 || y_2 || ··· || y_n produced by the interaction of A and its challenger is exactly the keystream that would be produced by the counter mode of E when using key k and IV equal to [1]_X. Hence, in this case the ciphertext c is indeed an encryption of m_b produced by E_CTR using these parameters. Consequently, from B's point of view, the whole probabilistic experiment is really a semantic-security attack game against E_CTR, in which he has, by our assumption, advantage δ. Hence, the probability that he wins in this case equals

    P(B_w | M_f) = 1/2 + δ.                                                  (2)
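For concreteness, here is a minimal Python sketch of the reduction from the section "Constructing A using B". The interfaces prp_query (one round of the PRP game) and adversary_B, as well as the block size BLOCK_BYTES, are hypothetical stand-ins introduced only for this illustration; they are not part of the lecture notes.

```python
import secrets

BLOCK_BYTES = 16  # assume blocks of 16 bytes, i.e. |X| = 2**128


def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two blocks of equal length."""
    return bytes(x ^ y for x, y in zip(a, b))


def adversary_A(prp_query, adversary_B):
    """PRP-adversary A built from the semantic-security adversary B (sketch).

    prp_query(x)           -- one round of the PRP game: returns y = f(x), where f is
                              either E(k, .) ("fake") or a random permutation ("rand").
    adversary_B.messages() -- returns (m0, m1) as lists of equal-length blocks.
    adversary_B.guess(c)   -- B's guess of the hidden bit, given the ciphertext c.
    """
    b = secrets.randbelow(2)                 # step 1: hidden bit of the simulated game
    m0, m1 = adversary_B.messages()          # step 2: B's two challenge messages
    n = len(m0)
    # steps 3-4: query the PRP challenger on x_i = [i]_X to obtain y_1, ..., y_n
    keystream = [prp_query(i.to_bytes(BLOCK_BYTES, "big")) for i in range(1, n + 1)]
    # step 5: c = m_b XOR keystream, block by block
    mb = m0 if b == 0 else m1
    c = [xor_blocks(z, y) for z, y in zip(mb, keystream)]
    # step 6: output "fake" iff B wins the simulated game
    return "fake" if adversary_B.guess(c) == b else "rand"
```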
Bounding P(B_l | M_r): the cheating game

This part is trickier. First, let us perform the following thought experiment: imagine that the PRP challenger is cheating: if m = rand, it skips the random choice of f ∈ Π(X). Instead, whenever the challenger receives a block x_i from A, it simply samples a block y_i uniformly at random from X and sends it back to A. Let us call this altered game the cheating game and denote its accompanying probability measure by P_ch.

Observe that in the cheating game, if m = rand, then y_1 || y_2 || ··· || y_n is a keystream produced according to the perfectly secure one-time pad (OTP) cipher. Hence, in this case, from B's point of view the whole process is the same as the semantic-security attack game against OTP. Since OTP is perfectly secure, any adversary, including B, has zero advantage against it. Hence,

    P_ch(B_w | M_r) = 1/2                                                    (3)

and thus

    P_ch(B_l | M_r) = 1 − P_ch(B_w | M_r) = 1/2.                             (4)

Cheating vs. non-cheating game

However, P_ch(B_l | M_r) ≠ P(B_l | M_r), since P_ch ≠ P. To see this, note that in the cheating game, if m = rand, it is possible for the keystream to contain two occurrences of the same block, i.e. y_i = y_j for some i ≠ j happens with positive probability. In the original game this cannot happen, since f is always chosen to be a permutation and the blocks x_i supplied by A are pairwise distinct (they form an increasing sequence of binary-encoded values). In essence, the cheating game can be viewed as a variant of the original game in which, in the case m = rand, the challenger does not sample a random permutation but an arbitrary (possibly non-injective) function of type X → X. Sampling a random function allows it, for each input block, to select the output block uniformly at random from all possible blocks (even those that have already appeared before), which is exactly what happens in the cheating game. The question of how this influences the probabilities of winning is addressed in what follows.

Bounding P(B_l | M_r)

Let C (short for "collision") be the event that there are 1 ≤ i < j ≤ n such that y_i = y_j, and let ¬C be the event that no such collision happens. By the law of total probability, we have

    P_ch(B_w | M_r) = P_ch(C | M_r) · P_ch(B_w | M_r ∧ C)
                    + P_ch(¬C | M_r) · P_ch(B_w | M_r ∧ ¬C).                 (5)

Noting that all the probabilities are non-negative, we can drop the first term on the right-hand side of (5) to get

    P_ch(B_w | M_r) ≥ P_ch(¬C | M_r) · P_ch(B_w | M_r ∧ ¬C).                 (6)

Now observe that if a collision does not happen in the cheating game, then this game evolves in exactly the same way as the original game. The original game is exactly the cheating game constrained by the condition that a collision never happens in the case m = rand. In particular, P_ch(D | M_r ∧ ¬C) = P(D | M_r) for any event D. (We do not prove this formally, but note that the probability of generating a concrete keystream y_1 || ... || y_n is the same under P_ch(· | M_r ∧ ¬C) and P(· | M_r); this can be shown directly from the definition of conditional probability. Since B's decisions depend only on the ciphertext he receives, and any difference in the ciphertexts he receives in the two games must be caused by a difference in the keystreams, the equality follows.) This can be plugged into (6) to get

    P_ch(B_w | M_r) ≥ P_ch(¬C | M_r) · P(B_w | M_r).

By rearranging and using (3) we get

    P(B_w | M_r) ≤ P_ch(B_w | M_r) / P_ch(¬C | M_r) = 1 / (2 · P_ch(¬C | M_r)).   (7)

Hence,

    P(B_l | M_r) = 1 − P(B_w | M_r)
                 ≥ 1 − 1 / (2 · P_ch(¬C | M_r))
                 = (2 · P_ch(¬C | M_r) − 1) / (2 · P_ch(¬C | M_r))
                 ≥ (2 · P_ch(¬C | M_r) − 1) / 2          (since P_ch(¬C | M_r) ≤ 1)
                 = P_ch(¬C | M_r) − 1/2
                 = (1 − P_ch(C | M_r)) − 1/2
                 = 1/2 − P_ch(C | M_r).

Thus, we have proved that

    P(B_l | M_r) ≥ 1/2 − P_ch(C | M_r).                                      (8)

To conclude our proof, it remains to bound the probability P_ch(C | M_r) of a collision happening in the cheating game.
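The parenthetical claim above, that conditioning the cheating game on "no collision" yields exactly the keystream distribution of the original game, can be checked empirically on a toy block space. The following self-contained Monte Carlo sketch (toy parameters of my choosing, not part of the lecture notes) compares the keystream distribution of the permutation-based challenger with that of the cheating challenger conditioned on ¬C; both come out approximately uniform over all collision-free keystreams.

```python
import random
from collections import Counter

# Toy parameters for illustration only.
X = list(range(8))   # toy block space with |X| = 8
N = 3                # keystream length in blocks
TRIALS = 200_000

def real_keystream(rng):
    # Original game with m = rand: a uniformly random permutation f evaluated on
    # the pairwise distinct inputs x_1, ..., x_N gives N blocks drawn uniformly
    # without replacement.
    return tuple(rng.sample(X, N))

def cheating_keystream(rng):
    # Cheating game with m = rand: every y_i is uniform and independent.
    return tuple(rng.choices(X, k=N))

rng = random.Random(0)
real = Counter(real_keystream(rng) for _ in range(TRIALS))

cheat_given_no_collision = Counter()
for _ in range(TRIALS):
    ks = cheating_keystream(rng)
    if len(set(ks)) == N:            # condition on the no-collision event ¬C
        cheat_given_no_collision[ks] += 1

# Both conditional distributions should be (approximately) uniform over all
# sequences of N pairwise distinct blocks.
support = 1
for i in range(N):
    support *= len(X) - i
print("number of collision-free keystreams:", support)
print("uniform reference probability:      ", 1 / support)
print("real game, max/min frequency:       ",
      real.most_common(1)[0][1] / TRIALS,
      real.most_common()[-1][1] / TRIALS)
total = sum(cheat_given_no_collision.values())
print("cheating game given no collision:   ",
      cheat_given_no_collision.most_common(1)[0][1] / total,
      cheat_given_no_collision.most_common()[-1][1] / total)
```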
Probability of collision

Recall that the cheating game having a collision (in the case m = rand) means that there are two distinct block indices 1 ≤ i < j ≤ n such that y_i = y_j. For a given pair of distinct indices i, j we have

    P_ch(y_i = y_j | M_r) = 1/|X|

(the block y_i can be fixed arbitrarily, but then y_j, which is sampled uniformly from X, must be sampled to exactly the same value as y_i). Using the union bound (https://en.wikipedia.org/wiki/Boole%27s_inequality) we get

    P_ch(C | M_r) ≤ Σ_{1 ≤ i < j ≤ n} P_ch(y_i = y_j | M_r) = (n(n−1)/2) · 1/|X| ≤ n^2/(2|X|).

Putting it all together

Combining this bound with (8) gives P(B_l | M_r) ≥ 1/2 − n^2/(2|X|). Plugging this and (2) into (1), we obtain

    P(A wins) ≥ 1/2 · (1/2 + δ) + 1/2 · (1/2 − n^2/(2|X|)) = 1/2 + δ/2 − n^2/(4|X|),

so A's advantage against E is at least δ/2 − n^2/(4|X|), as claimed in the setup. Since E is ε-secure, this advantage is at most ε, hence δ ≤ 2ε + n^2/(2|X|), which proves the theorem.
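As a numerical sanity check of the collision bound (not part of the proof), the following self-contained Python snippet compares the exact birthday-style probability that n independent uniform blocks contain a repeat with the union-bound estimate n^2/(2|X|). The parameters and function names are illustrative choices, not from the lecture notes.

```python
from fractions import Fraction

def exact_collision_prob(n: int, x_size: int) -> Fraction:
    """Exact P_ch(C | M_r): probability that n i.i.d. uniform blocks drawn from a
    set of size x_size are NOT pairwise distinct (the birthday problem)."""
    no_collision = Fraction(1)
    for i in range(n):
        no_collision *= Fraction(x_size - i, x_size)
    return 1 - no_collision

def union_bound(n: int, x_size: int) -> Fraction:
    """The estimate n^2 / (2|X|) used in the theorem."""
    return Fraction(n * n, 2 * x_size)

# Toy example: 2^10 possible blocks, messages of up to 32 blocks.
for n in (4, 16, 32):
    exact = exact_collision_prob(n, 2**10)
    bound = union_bound(n, 2**10)
    print(f"n={n:2d}: exact={float(exact):.5f}  n^2/(2|X|)={float(bound):.5f}")
```

For realistic parameters such as |X| = 2^128 the same comparison shows why the additive term n^2/(2|X|) in the theorem is negligible for any reasonable message length n.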