CODING, CRYPTOGRAPHY and CRYPTOGRAPHIC PROTOCOLS

Prof. Jozef Gruska, DrSc

CONTENTS
1. Basics of coding theory
2. Linear codes
3. Cyclic codes
4. Secret-key cryptosystems
5. Public-key cryptosystems, I. Key exchange, knapsack, RSA
6. Public-key cryptosystems, II. Other cryptosystems, security, PRG, hash functions
7. Digital signatures
8. Elliptic curves cryptography and factorization
9. Identification, authentication, secret sharing and e-commerce
10. Protocols to do seemingly impossible things and zero-knowledge protocols
11a. Steganography and watermarking
11b. From theory to practice in cryptography
12. Quantum cryptography

LITERATURE
• R. Hill: A first course in coding theory, Clarendon Press, 1985
• V. Pless: Introduction to the theory of error-correcting codes, John Wiley, 1998
• J. Gruska: Foundations of computing, Thomson International Computer Press, 1997
• A. Salomaa: Public-key cryptography, Springer, 1990
• D. R. Stinson: Cryptography: theory and practice, CRC Press, 1995
• W. Trappe, L. Washington: Introduction to cryptography with coding theory
• B. Schneier: Applied cryptography, John Wiley and Sons, 1996
• J. Gruska: Quantum computing, McGraw-Hill, 1999 (for additions and updates see http://www.mcgraw-hill.co.uk/gruska)
• S. Singh: The code book, Anchor Books, 1999
• D. Kahn: The codebreakers. The story of secret writing. Macmillan, 1996 (an entertaining and informative history of cryptography)

INTRODUCTION

• Transmission of classical information in time and space is nowadays very easy (through noiseless channels). It took centuries, and many ingenious developments and discoveries (writing, book printing, photography, movies, telegraph, telephone, radio transmission, TV, sound recording on records, tapes and discs), as well as the idea of digitising all forms of information, to make full use of this property of information. Coding theory develops methods to protect information against noise.

• Information is becoming an increasingly valuable commodity for both individuals and society. Cryptography develops methods to ensure the secrecy of information and the privacy of users.

• A very important property of information is that it is often very easy to make an unlimited number of copies of it. Steganography develops methods to hide important information in innocently-looking information (and that can be used to protect intellectual property).

HISTORY OF CRYPTOGRAPHY

The history of cryptography is the story of centuries-old battles between codemakers (ciphermakers) and codebreakers (cipherbreakers), an intellectual arms race that has had a dramatic impact on the course of history. The ongoing battle between codemakers and codebreakers has inspired a whole series of remarkable scientific breakthroughs. History is full of ciphers. They have decided the outcomes of battles and led to the deaths of kings and queens.

Security of communication and data and the privacy of users are of key importance for the information society. Cryptography, broadly understood, is an important tool for achieving such a goal.

CHAPTER 1: Basics of coding theory

ABSTRACT

Coding theory - the theory of error-correcting codes - is one of the most interesting and most applied parts of mathematics and informatics.
All real communication systems that work with digitally represented data, such as CD players, TV, fax machines, the internet, satellites and mobile phones, require the use of error-correcting codes, because all real channels are, to some extent, noisy due to interference caused by the environment.

• Coding theory problems are therefore among the most basic and most frequent problems of storage and transmission of information.
• Coding theory results allow the creation of reliable systems out of unreliable systems for storing and/or transmitting information.
• Coding theory methods are often elegant applications of very basic concepts and methods of (abstract) algebra.

This first chapter presents and illustrates the very basic problems, concepts, methods and results of coding theory.

Coding - basic concepts

Without coding theory and error-correcting codes there would be no deep-space travel and pictures, no satellite TV, no compact discs, no … no … no ….

Error-correcting codes are used to correct messages when they are transmitted through noisy channels.

CHANNEL is any physical medium through which information is transmitted. (Telephone lines and the atmosphere are examples of channels.)

IMPORTANCE of ERROR-CORRECTING CODES

BASIC IDEA The details of the techniques used to protect information against noise in practice are sometimes rather complicated, but the basic principles are easily understood. The key idea is that in order to protect a message against noise, we should encode the message by adding some redundant information to it. In such a case, even if the message is corrupted by noise, there will be enough redundancy in the encoded message to recover - to decode - the message completely.

EXAMPLE In the case of the encoding

0 → 000, 1 → 111,

a bit-error probability p < 1/2, and the majority-voting decoding

000, 001, 010, 100 → 000        111, 110, 101, 011 → 111,

the probability of an erroneous decoding (if there are 2 or 3 errors) is

3p^2(1 - p) + p^3 = 3p^2 - 2p^3 < p.

EXAMPLE: Coding of a path avoiding an enemy territory

Story Alice and Bob share an identical gridded map (Fig. 1). Only Alice knows the route through which Bob can reach her while avoiding the enemy territory. Alice wants to send Bob the following information about the safe route he should take.

Basic terminology

Block code - a code with all words of the same length.
Codewords - words of some code.

Hamming distance

The intuitive concept of “closeness” of two words is well formalized through the Hamming distance h(x, y) of words x, y. For two words x, y:

h(x, y) = the number of symbols in which the words x and y differ.

Example: h(10101, 01100) = 3, h(fourth, eighth) = 4.

Binary symmetric channel

Consider a transmission of binary symbols such that each symbol has probability of error p < 1/2. If n symbols are transmitted, then the probability of t errors is

C(n,t) p^t (1 - p)^(n-t),

where C(n,t) denotes the binomial coefficient. In the case of binary symmetric channels, the “nearest neighbour decoding strategy” is also the “maximum likelihood decoding strategy”: since p < 1/2, the codeword requiring the fewest bit flips to produce the received word is the one most likely to have been sent.

Example Consider C = {000, 111} and the nearest neighbour decoding strategy. The probability that the received word is decoded correctly

as 000 is (1 - p)^3 + 3p(1 - p)^2,
as 111 is (1 - p)^3 + 3p(1 - p)^2.

Therefore

P_err(C) = 1 - ((1 - p)^3 + 3p(1 - p)^2)

is the probability of erroneous decoding.

Example If p = 0.01, then P_err(C) = 0.000298, and only about one word in 3356 will reach the user with an error.
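These numbers are easy to check. Below is a minimal Python sketch (the function names are my own, chosen for illustration) that evaluates P_err(C) for the repetition code C = {000, 111} and cross-checks it with a Monte Carlo simulation of a binary symmetric channel with majority-voting decoding.

```python
import random

def p_err_repetition(p):
    """Probability that majority-vote decoding of the 3-bit
    repetition code fails, i.e. 2 or 3 of the 3 bits are flipped."""
    return 3 * p**2 * (1 - p) + p**3

def simulate(p, trials=10**6, seed=42):
    """Monte Carlo estimate: send 000 through a binary symmetric
    channel and decode by majority voting."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(3))
        if flips >= 2:          # majority of bits corrupted -> wrong decoding
            errors += 1
    return errors / trials

p = 0.01
print(p_err_repetition(p))      # 0.000298  (matches the example above)
print(1 / p_err_repetition(p))  # ~3356 words per erroneous word
print(simulate(p))              # close to 0.000298
```

The simulation also illustrates why the strategy requires p < 1/2: for larger p, the majority of the bits is more likely to be wrong than right.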
POWER of PARITY BITS

Example Let all 2^11 binary words of length 11 be codewords. Let the probability p of a bit error be 10^-8. Let bits be transmitted at the rate 10^7 bits per second.

The probability that a word is transmitted incorrectly is approximately

11p(1 - p)^10 ≈ 11/10^8.

Therefore about (11/10^8) · (10^7/11) = 0.1 of a word per second is transmitted incorrectly. One wrong word is transmitted every 10 seconds, 360 erroneous words every hour and 8640 erroneous words every day, without being detected!

Let now one parity bit be added. Any single error can then be detected! The probability of at least two errors is

1 - (1 - p)^12 - 12p(1 - p)^11 ≈ C(12,2) p^2 = 66/10^16.

Therefore approximately (66/10^16) · (10^7/12) ≈ 5.5 · 10^-9 words per second are transmitted with an undetectable error.

Corollary One undetected error occurs only about every 2000 days! (2000 ≈ 10^9/(5.5 · 86400).)

TWO-DIMENSIONAL PARITY CODE

The two-dimensional parity code arranges the data into a two-dimensional array and then attaches a parity bit to each row and to each column.

Example The binary string 10001011000100101111 is represented and encoded (with even parity) as follows:

1 0 0 0 1   0
0 1 1 0 0   0
0 1 0 0 1   0
0 1 1 1 1   0
1 1 0 1 1   0

(the last column contains the row parity bits and the last row the column parity bits).

Question How much better is two-dimensional encoding than one-dimensional encoding?

Notation and Examples

Notation: An (n,M,d)-code C is a code such that
• n - is the length of codewords.
• M - is the number of codewords.
• d - is the minimum distance in C.

Examples from deep space travels

Transmission of photographs from deep space:
• In 1965-69, Mariners 4-5 took the first photographs of another planet - 22 photos. Each photo was divided into 200 × 200 elementary squares - pixels. Each pixel was assigned 6 bits representing 64 levels of brightness. A Hadamard code was used. Transmission rate: 8.3 bits per second.
• In 1970-72, Mariners 6-8 took photographs in which each picture was broken into 700 × 832 squares. The Reed-Muller (32,64,16) code was used. The transmission rate was 16200 bits per second. (Much better pictures.)

HADAMARD CODE

In Mariner 5, 6-bit pixels were encoded using a 32-bit long Hadamard code that could correct up to 7 errors.

The Hadamard code has 64 codewords. 32 of them are represented by the 32 × 32 matrix H = {h_ij}, where 0 ≤ i, j ≤ 31,

h_ij = (-1)^(a_0 b_0 + a_1 b_1 + a_2 b_2 + a_3 b_3 + a_4 b_4),

and where i and j have the binary representations i = a_4 a_3 a_2 a_1 a_0 and j = b_4 b_3 b_2 b_1 b_0. The remaining 32 codewords were represented by the matrix -H. Decoding was quite simple.

CODE RATE

For a q-ary (n,M,d)-code we define the code rate, or information rate, R, by

R = (log_q M) / n.

The code rate represents the ratio of the number of input data symbols to the number of transmitted code symbols. The code rate (6/32 for the Hadamard code) is an important parameter for real implementations, because it shows what fraction of the bandwidth is being used to transmit actual data.

The ISBN-code

Each book till 1.1.2007 had an International Standard Book Number, a 10-digit codeword produced by the publisher with the structure

l p m w = x_1 … x_10,

where l identifies the language, p the publisher and m the book number, and w is a weighted check sum (for example 0 - 07 - 709503 - 0), such that

Σ_{i=1}^{10} (11 - i) x_i ≡ 0 (mod 11).

The publisher had to put X into the 10-th position if x_10 = 10. The ISBN code was designed to detect: (a) any single error; (b) any double error created by a transposition.

Single error detection Changing a single digit x_i by e ≠ 0 changes the check sum by (11 - i)e ≢ 0 (mod 11), so every single error is detected.

Transposition detection Let x_j and x_k be exchanged. The check sum changes by (k - j)(x_j - x_k), which is nonzero modulo the prime 11 whenever j ≠ k and x_j ≠ x_k, so the error is detected.

New ISBN

Starting 1.1.2007, instead of the 10-digit ISBN code a 13-digit ISBN code is being used. The new ISBN number can be obtained from the old one by preceding the old code with the three digits 978 (and recomputing the check digit). For details about the 13-digit ISBN see http://www.isbn-international.org/en/revision.html
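The ISBN-10 check-sum condition is easy to test in code. Here is a minimal Python sketch (function name mine, for illustration) that verifies the condition and demonstrates detection of a transposition, using the example number 0-07-709503-0 from the text.

```python
def isbn10_checksum_ok(isbn):
    """Check the ISBN-10 condition: sum_{i=1..10} (11 - i) * x_i == 0 (mod 11).
    The character 'X' stands for the value 10 (legal only in position 10)."""
    digits = [10 if c == 'X' else int(c) for c in isbn if c not in '- ']
    assert len(digits) == 10, "an ISBN-10 has exactly 10 digits"
    return sum((11 - i) * x for i, x in enumerate(digits, start=1)) % 11 == 0

print(isbn10_checksum_ok("0-07-709503-0"))   # True: the example from the text

# A transposition of two distinct digits is detected, because the check sum
# changes by (k - j)(x_j - x_k), which is never 0 modulo the prime 11:
print(isbn10_checksum_ok("0-07-790503-0"))   # False: digits 0 and 9 swapped
```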
Equivalence of codes

Definition Two q-ary codes are called equivalent if one can be obtained from the other by a combination of operations of the following types:
(a) a permutation of the positions of the code;
(b) a permutation of the symbols appearing in a fixed position.

Question: Let a code be displayed as an M × n matrix. To what do operations (a) and (b) correspond? (Operation (a) permutes the columns of the matrix; operation (b) relabels the symbols within one column.)

Claim: Distances between codewords are unchanged by operations (a), (b). Consequently, equivalent codes have the same parameters (n,M,d) (and correct the same number of errors).

The main coding theory problem

A good (n,M,d)-code has small n, large M and large d. The main coding theory problem is to optimize one of the parameters n, M, d for given values of the other two.

Notation: A_q(n,d) is the largest M such that there is a q-ary (n,M,d)-code.

Theorem (a) A_q(n,1) = q^n; (b) A_q(n,n) = q.

Proof
(a) is obvious.
(b) Let C be a q-ary (n,M,n)-code. Any two distinct codewords of C differ in all n positions. Hence the symbols in any fixed position of the M codewords have to be different ⇒ A_q(n,n) ≤ q. Since the q-ary repetition code is an (n,q,n)-code, we get A_q(n,n) ≥ q.

EXAMPLE

Example Proof that A_2(5,3) = 4.
(a) The code C_3 is a (5,4,3)-code, hence A_2(5,3) ≥ 4.
(b) Suppose C is a (5,M,3)-code with M ≥ 5.
• By the above claim on equivalence we can assume that 00000 ∈ C.
• C contains at most one codeword with at least four 1's (otherwise d(x,y) ≤ 2 for two such codewords x, y).
• Since 00000 ∈ C, there can be no codeword in C with one or two 1's.
• Since d = 3, C cannot contain three codewords with three 1's.
• Since M ≥ 4, C has to contain two codewords with three 1's (say 11100, 00111); the only possible codeword with four or five 1's is then 11011.
Hence C can contain at most the four codewords 00000, 11100, 00111, 11011 - a contradiction with M ≥ 5, so A_2(5,3) = 4.

Design of one code from another code

Theorem Suppose d is odd. Then a binary (n,M,d)-code exists iff a binary (n+1,M,d+1)-code exists.

Proof
Only-if case: Let C be a binary (n,M,d)-code. Let C′ be obtained from C by adding to each codeword a parity bit, so that every codeword of C′ has an even number of 1's. Since the parity of all codewords in C′ is even, d(x′,y′) is even for all x′, y′ ∈ C′. Hence d(C′) is even. Since d ≤ d(C′) ≤ d + 1 and d is odd, d(C′) = d + 1. Hence C′ is an (n+1,M,d+1)-code.
If case: Let D be an (n+1,M,d+1)-code. Choose codewords x, y of D such that d(x,y) = d + 1. Find a position in which x, y differ and delete this position from all codewords of D. The resulting code is an (n,M,d)-code.

A corollary

Corollary: If d is odd, then A_2(n,d) = A_2(n+1,d+1). If d is even, then A_2(n,d) = A_2(n-1,d-1).

Example A_2(5,3) = 4 ⇒ A_2(6,4) = 4.

(5,4,3)-code     (6,4,4)-code (by adding check bits)
0 0 0 0 0        0 0 0 0 0 0
0 1 1 0 1        0 1 1 0 1 1
1 0 1 1 0        1 0 1 1 0 1
1 1 0 1 1        1 1 0 1 1 0

A sphere and its contents

Notation F_q^n - the set of all words of length n over the alphabet {0,1,2,…,q-1}.

Definition For any word u ∈ F_q^n and any integer r ≥ 0, the sphere of radius r and centre u is denoted by

S(u,r) = {v ∈ F_q^n | d(u,v) ≤ r}.

Theorem A sphere of radius r in F_q^n, 0 ≤ r ≤ n, contains exactly

Σ_{i=0}^{r} C(n,i) (q - 1)^i

words.

General upper bounds

Theorem (The sphere-packing or Hamming bound) If C is a q-ary (n,M,2t+1)-code, then

M · Σ_{i=0}^{t} C(n,i) (q - 1)^i ≤ q^n.    (1)

This gives a general upper bound on A_q(n,d). A code for which equality holds in (1) is called perfect.

Example A (7,M,3)-code is perfect if

M · (C(7,0) + C(7,1)) = 2^7,

i.e. M = 16. An example of such a code:

C_4 = {0000000, 1111111, 1000101, 1100010, 0110001, 1011000, 0101100, 0010110, 0001011, 0111010, 0011101, 1001110, 0100111, 1010011, 1101001, 1110100}

Table of A_2(n,d) from 1981

For the current best results see http://www.win.tue.nl/math/dw/voorlincod.html

LOWER BOUND for A_q(n,d)

The following lower bound for A_q(n,d) is known as the Gilbert-Varshamov bound:

Theorem Given d ≤ n, there exists a q-ary (n,M,d)-code with

M ≥ q^n / Σ_{j=0}^{d-1} C(n,j) (q - 1)^j

and therefore

A_q(n,d) ≥ q^n / Σ_{j=0}^{d-1} C(n,j) (q - 1)^j.
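Both bounds are straightforward to evaluate. The sketch below (a minimal illustration; the function names are my own) computes the sphere-packing upper bound and the Gilbert-Varshamov lower bound on A_q(n,d), and confirms that the (7,16,3)-code C_4 above meets the sphere-packing bound exactly.

```python
from math import comb

def sphere_volume(n, r, q=2):
    """Number of words in a sphere of radius r in F_q^n:
    sum_{i=0..r} C(n,i) * (q-1)^i."""
    return sum(comb(n, i) * (q - 1)**i for i in range(r + 1))

def hamming_upper_bound(n, d, q=2):
    """Sphere-packing bound for odd d = 2t+1:
    A_q(n,d) <= q^n / sphere_volume(n, t)."""
    t = (d - 1) // 2
    return q**n // sphere_volume(n, t, q)

def gilbert_varshamov_lower_bound(n, d, q=2):
    """GV bound: A_q(n,d) >= q^n / sum_{j=0..d-1} C(n,j)(q-1)^j."""
    vol = sphere_volume(n, d - 1, q)
    return -(-q**n // vol)   # ceiling division

print(hamming_upper_bound(7, 3))            # 16 -> the (7,16,3)-code is perfect
print(gilbert_varshamov_lower_bound(5, 3))  # 2  -> a (5,2,3)-code must exist
print(hamming_upper_bound(5, 3))            # 5  (the true value A_2(5,3) is 4)
```

Note that the bounds need not be tight: for n = 5, d = 3 the sphere-packing bound gives 5, while the combinatorial proof above shows A_2(5,3) = 4.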
Error Detection

Error detection is a much more modest aim than error correction. Error detection is suitable in cases where the channel is so good that the probability of an error is small and, if an error is detected, the receiver can ask for the transmission to be repeated.

For example, two main requirements for many telegraphy codes used to be:
• any two codewords had to have distance at least 2;
• no codeword could be obtained from another codeword by a transposition of two adjacent letters.

Pictures of Saturn taken by Voyager

Pictures of Saturn taken by Voyager, in 1980, had 800 × 800 pixels with 256 levels of brightness (8 bits per pixel). Since the pictures were in color, each picture was transmitted three times, each time through a different color filter. The full color picture was therefore represented by 3 × 800 × 800 × 8 = 15 360 000 bits. To transmit the pictures, Voyager used the Golay code G_24.

General coding problem

Important problems of information theory are how to define formally such concepts as information and how to store or transmit information efficiently.

Let X be a random variable (source) which takes the value x with probability p(x). The entropy of X is defined by

S(X) = -Σ_x p(x) lg p(x)

and it is considered to be the information content of X. In the special case of a binary variable X which takes on the value 1 with probability p and the value 0 with probability 1 - p,

S(X) = H(p) = -p lg p - (1 - p) lg (1 - p).

Problem: What is the minimal number of bits needed to transmit n values of X?

Basic idea: encode the more probable outputs of X by shorter binary words.

Example (Morse code - 1838)

a .-    b -...  c -.-.  d -..   e .     f ..-.
g --.   h ....  i ..    j .---  k -.-   l .-..
m --    n -.    o ---   p .--.  q --.-  r .-.
s ...   t -     u ..-   v ...-  w .--   x -..-
y -.--  z --..

Shannon's noiseless coding theorem

Shannon's noiseless coding theorem says that in order to transmit n values of X, we need, and it is sufficient, to use nS(X) bits. More exactly, we cannot do better than the bound nS(X) says, and we can get as close to the bound nS(X) as desirable.

Example Let a source X produce the value 1 with probability p = 1/4 and the value 0 with probability 1 - p = 3/4. Assume we want to encode blocks of the outputs of X of length 4. By Shannon's theorem we need 4H(1/4) ≈ 3.245 bits per block (on average).

A simple and practical method known as Huffman coding requires in this case 3.273 bits per 4-bit message:

mess. code    mess. code     mess. code    mess. code
0000  10      0100  010      1000  011     1100  11101
0001  000     0101  11001    1001  11011   1101  111110
0010  001     0110  11010    1010  11100   1110  111101
0011  11000   0111  1111000  1011  111111  1111  1111001

Observe that this is a prefix code - no codeword is a prefix of another codeword.

Design of Huffman code

Given a sequence of n objects, x_1,…,x_n, with probabilities p_1 ≥ … ≥ p_n.

Stage 1 - shrinking of the sequence.
• Replace x_{n-1}, x_n with a new object y_{n-1} with probability p_{n-1} + p_n and rearrange the sequence so that one again has non-increasing probabilities.
• Keep doing the above step until the sequence shrinks to two objects.

Stage 2 - extending the code.
• Assign the codewords 0 and 1 to the two remaining objects.
• Going backwards through the shrinking steps, whenever an object y was created by merging two objects, give those two objects the codeword of y extended by 0 and by 1, respectively.
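As an illustration, here is a minimal Python sketch of the two-stage construction (my own illustrative implementation, not from the course): a priority queue repeatedly merges the two least probable objects (Stage 1), and the codewords of the merged groups are extended on the way back (Stage 2). It reproduces the 3.245 and 3.273 figures from the example above.

```python
import heapq
from math import log2

def huffman_code(probs):
    """Binary Huffman code for a dict {symbol: probability}.
    Stage 1: repeatedly merge the two least probable objects.
    Stage 2: prepend 0/1 to the codewords of the two merged groups."""
    codes = {sym: "" for sym in probs}
    # (probability, tie-breaker, symbols contained in this merged object)
    heap = [(p, i, [sym]) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, group1 = heapq.heappop(heap)   # two smallest probabilities
        p2, i, group2 = heapq.heappop(heap)
        for s in group1:
            codes[s] = "0" + codes[s]         # extend one group's codewords by 0
        for s in group2:
            codes[s] = "1" + codes[s]         # and the other group's by 1
        heapq.heappush(heap, (p1 + p2, i, group1 + group2))
    return codes

# Blocks of length 4 from a source with p(1) = 1/4, p(0) = 3/4:
p = 1 / 4
blocks = {}
for i in range(16):
    block = f"{i:04b}"
    ones = block.count("1")
    blocks[block] = p**ones * (1 - p)**(4 - ones)

code = huffman_code(blocks)
average = sum(blocks[b] * len(code[b]) for b in blocks)
entropy = -4 * (p * log2(p) + (1 - p) * log2(1 - p))
print(f"Shannon bound per block: {entropy:.3f}")   # 3.245
print(f"Huffman average length: {average:.3f}")    # 3.273
```

The codewords produced may differ from the table above - Huffman codes are not unique, since ties can be broken either way - but the average length of 3.273 bits is the same for every optimal code.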
A BIT OF HISTORY

The subject of error-correcting codes arose originally as a response to practical problems in the reliable communication of digitally encoded information.

The discipline was initiated in the paper:

Claude Shannon: A mathematical theory of communication, Bell Syst. Tech. Journal, V27, 1948, 379-423, 623-656.

Shannon's paper started the scientific discipline of information theory, and error-correcting codes are a part of it. Originally, information theory was a part of electrical engineering; nowadays, it is an important part of mathematics and also of informatics.

SHANNON's VIEW

In the introduction to his seminal paper “A mathematical theory of communication” Shannon wrote:

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.