Part I: Basics of coding theory and linear codes

CODING and CRYPTOGRAPHY (IV054)

Modern coding theory is a very beautiful and often very surprising mathematical theory that is much applied and broadly used for the transmission of digital information, without which modern telecommunication would be practically impossible. Almost everyone uses the outcomes of modern coding and decoding daily.

Modern cryptography is rich in clever uses of beautiful and often very surprising concepts and methods. It allows the outcomes of modern classical, and also, surprisingly, quantum tools to be used to make the transmission of information so safe that even a very powerful eavesdropper has next to zero chance of reading transmitted information not intended for them.
In spite of the fact that both the coding and cryptography areas already have many very efficient systems using only very small memories, new applications keep requiring the development of ever newer, faster, and less memory-demanding systems for both coding and cryptography.

IV054 - CRYPTO team; Teaching staff
■ Prof. Jozef Gruska DrSc - lecturer
■ RNDr. Matej Pivoluska PhD - tutorials and CRYPTO-team member
■ RNDr. Lukáš Boháč - head of CRYPTO-team
■ Mgr. Libor Cáha PhD - member of CRYPTO-team
■ Bc. Henrieta Michelová - member of CRYPTO-team
■ Bc. Roman Oravec - member of CRYPTO-team

Teaching loads: lecture - 2 hours; tutorial - 2 hours, non-obligatory
Languages: lecture - English; tutorials - 1 in English and 1 in Czech-Slovak
Prerequisites: basics of discrete mathematics and linear algebra. See the "Appendix" at http://www.fi.muni.cz/usr/gruska/crypto21

IV054 - Homeworks and exams
Homeworks: 5-6 sets of homeworks of 6-8 exercises, designed and evaluated by our CRYPTO-team, created mainly from some of the best former IV054 students.

Termination of the course - exams or "zapocty" (course credits)
Each student will get 5 questions. The number of questions a student has to respond to will depend on the number of points received for homeworks. A student automatically gets an A in case (s)he received a number of points from exercises >= 85% of MAX, the maximal number of points any student got from exercises. A student automatically gets a B, with an easy way to get an A, in case the number of points (s)he received is in the interval (75, 85)% of MAX, ...
Teaching materials
■ Detailed slides of all lectures. Each chapter will consist of (i) a short prologue, (ii) basic materials, and (iii) an Appendix - for more demanding students.
■ An appendix of fundamental discrete math and linear algebra - 45 pages.
■ Two lecture notes of solved examples (at least 100 in each one) and short (2-3 page) overviews for all chapters.
■ Posted solutions of homeworks.

Goals
1. To learn the beautiful and powerful basics of coding theory and of classical as well as quantum modern cryptography and steganography/watermarking, needed by all informaticians, in almost all areas of informatics and for the transmission and storing of information.
2. To verify, for ambitious students, their capability to work hard to be successful in very competitive informatics+mathematics environments.

Literature
■ J. Gruska: Foundations of computing, Thomson International Computer Press, 1997
■ V. Pless: Introduction to the theory of error correcting codes, John Wiley, 1998
■ A. De Vos: Reversible computing, Wiley-VCH Verlag, 2010, 249 p.
■ W. Trappe, L. Washington: Introduction to cryptography with coding theory
■ D. R. Stinson: Cryptography: theory and practice, CRC Press, 1995
■ A. Salomaa: Public-key cryptography, Springer, 1990
■ B. Schneier: Applied cryptography, John Wiley and Sons, 1996
■ J. Hoffstein, J. Pipher, J. Silverman: An introduction to mathematical cryptography (elliptic curves), Springer, 2008
■ I. J. Cox: Digital watermarking and steganography, Morgan Kaufmann Series in Multimedia Information and Systems, 2008
■ J. Gruska: Quantum computing, McGraw-Hill, 1999, 430 pages
■ M. A. Nielsen, I. L. Chuang: Quantum computation and quantum information, Cambridge University Press, 2000, 673 p.
■ D. Kahn: The codebreakers: the story of secret writing, Macmillan, 1996 (an alternative and informative history of cryptography)

CONTENTS
Beautiful and much applied coding theory - the efficiency and miniaturization of modern information transmission systems depend greatly on the quality and efficiency of the underlying encoding and decoding systems.
Basic encryption systems and decryption methods of classical, secret-key and public-key cryptography. Block and stream encryption and decryption systems and methods.

Digital signatures. Authentication protocols, privacy preservation and secret sharing methods.

Basics and applications of such primitives as cryptographic hash functions, pseudorandomness, and elliptic curves.

Fundamental crypto protocols and zero-knowledge protocols, as well as probabilistic proofs, as some highlights of the fundamentals of informatics.

Steganography and watermarking as key information hiding and discovery methods, for a huge variety of applications.
Fundamentals of quantum information transmission and cryptography; surprising and even shocking practical applications of quantum information transmission and cryptography. Top current cryptosystems for applications.

Comment: Concerning both lectures and homeworks, the overall requirements for students will be significantly smaller than in previous years.

Basics of coding theory and an introduction to linear codes

ROSETTA SPACECRAFT
■ In 1993 the Rosetta spacecraft project started in Europe.
■ In 2004 the Rosetta spacecraft was launched.
■ In August 2015 the Rosetta spacecraft got into orbit around the comet 67P (about 4.3 x 4.1 km in size, one of the 4000 known comets of the solar system) and sent to Earth a lot of photos of 67P.
■ In spite of the fact that the comet 67P is 720 million kilometers from the Earth, and there is a lot of noise affecting signals on the way, the encodings of the photos arrived in such a form that they could be decoded to get excellent photos of the comet.
■ All that was, to a large extent, due to the enormously high level coding theory had already reached in 1993. Since that time coding theory has made another enormous amount of progress, which has allowed, among other things, almost perfect mobile communication and transmission of music in time and space.

CHAPTER 1: BASICS of CODING THEORY

ABSTRACT
Coding theory - the theory of error correcting codes - is one of the most interesting and applied parts of informatics. The goals of coding theory are to develop systems and methods that allow us to detect/correct errors caused when information is transmitted through noisy channels.
■ All real communication systems that work with digitally represented data, such as CD players, TV, fax machines, the internet, satellites and mobile phones, require the use of error correcting codes, because all real channels are, to some extent, noisy - due to various interference/destruction caused by the environment.
■ Coding theory problems are therefore among the very basic and most frequent problems of the storage and transmission of information.

PROLOGUE - II: INFORMATION
INFORMATION is often an important and very valuable commodity. This lecture is about how to protect, or even hide, information against noise or even an unintended user, using mainly classical, but also quantum, tools.
CODING THEORY - BASIC CONCEPTS
Error-correcting codes are used to correct messages when they are (erroneously) transmitted through noisy channels.

Error correcting framework:
source → message w → Encoding → codeword C(w) → channel (noise) → received word C'(w) → Decoding → word w' → user

Example: the message is YES or NO. Encoding: YES → 00000, NO → 11111. The channel corrupts 00000 into 01001; Decoding: 01001 → 00000 → YES.

A code C over an alphabet Σ is a nonempty subset of Σ* (C ⊆ Σ*).
A q-nary code is a code over an alphabet of q symbols.
A binary code is a code over the alphabet {0, 1}.

Examples of codes:
C1 = {00, 01, 10, 11}
C2 = {000, 010, 101, 100}
C3 = {00000, 01101, 10111, 11011}

CHANNELS
A channel is any physical medium in which information is stored or through which information is transmitted.
(Telephone lines, optical fibres and also the atmosphere are examples of channels.)

NOISE may be caused by sunspots, lightning, meteor showers, random radio disturbances, poor typing, poor hearing, ...

TRANSMISSION GOALS
1. Encoding of information should be very fast.
2. Very similar messages should be encoded very differently.
3. Transmission of encoded messages should be very easy.
4. Decoding of received messages should be very easy.
5. Correction of errors introduced in the channel should be reasonably easy.
6. As large an amount of information as possible should be transferred reliably per time unit.

BASIC METHOD OF FIGHTING ERRORS: REDUNDANCY!!!
Example: 0 is encoded as 00000 and 1 is encoded as 11111.

CHANNELS - MAIN TYPES
Discrete channels and continuous channels are the main types of channels. An example of a continuous channel will be dealt with in Chapter 2. The main model of noise in discrete channels is:
■ the Shannon stochastic (probabilistic) noise model: for any output y and input x, the probability Pr(y|x) that the output is y in case the input is x is given.
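The basic redundancy method above (0 encoded as 00000, 1 as 11111) can be sketched in a few lines. This is an illustrative sketch only, not code from the course materials; with majority-vote decoding, the 5-fold repetition code corrects up to 2 flipped bits per codeword, as in the earlier YES/NO example:

```python
# 5-fold repetition code: the basic redundancy method from the example above.

def encode(bit: int) -> str:
    # Repeat the information bit five times: 0 -> "00000", 1 -> "11111".
    return str(bit) * 5

def decode(word: str) -> int:
    # Majority vote over the five received symbols; up to 2 errors are corrected.
    return 1 if word.count("1") > word.count("0") else 0

assert decode("01001") == 0   # the corrupted codeword from the YES/NO example
assert decode(encode(1)) == 1
```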
CHANNELS - MAIN TYPES

Discrete channels and continuous channels are the main types of channels. An example of a continuous channel will be dealt with in Chapter 2.

The main model of the noise in discrete channels is:

■ Shannon stochastic (probabilistic) noise model: for any output y and input x, the probability Pr(y|x) that the output is y in case the input is x is given, and in addition the probability of too many errors is low.

DISCRETE CHANNELS - MATHEMATICAL VIEWS

Formally, a discrete Shannon stochastic channel is described by a triple C = (Σ, Q, Pr), where

■ Σ is an input alphabet,
■ Q is an output alphabet,
■ Pr is a probability distribution on Σ × Q and, for each i ∈ Σ, o ∈ Q, Pr(i, o) is the probability that the output of the channel is o if the input is i.

IMPORTANT CHANNELS

■ Binary symmetric channel: maps, with a fixed probability p0, each binary input into the opposite one. Hence, Pr(0, 1) = Pr(1, 0) = p0 and Pr(0, 0) = Pr(1, 1) = 1 − p0.
■ Binary erasure channel: maps, with a fixed probability p0, binary inputs into {0, 1, e}, where e is the so-called erasure symbol, and Pr(0, 0) = Pr(1, 1) = p0, Pr(0, e) = Pr(1, e) = 1 − p0.
■ White noise Gaussian channel: models errors in deep space.

BASIC CHANNEL CODING PROBLEMS

Summary: The task of communication channel coding is to encode the information to be sent over the channel in such a way that, even in the presence of some channel noise, several (or a specific number of) errors can be detected and/or corrected.

There are two basic coding methods:

■ BEC (Backward Error Correction) coding allows the receiver only to detect errors. If an error is detected, then the sender is requested to re-transmit the message.
■ FEC (Forward Error Correction) coding allows the receiver to correct a certain amount of errors.
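The two discrete channels above are easy to simulate. The following Python sketch (the function names are my own, not from the slides) applies each channel to a list of bits:

```python
import random

def binary_symmetric_channel(bits, p0):
    """Flip each input bit independently with probability p0."""
    return [b ^ 1 if random.random() < p0 else b for b in bits]

def binary_erasure_channel(bits, p0):
    """Deliver each bit intact with probability p0; otherwise output
    the erasure symbol 'e' (with probability 1 - p0)."""
    return [b if random.random() < p0 else 'e' for b in bits]
```

For instance, binary_symmetric_channel([0, 1, 0, 1], 0.1) flips each bit with probability 0.1, matching Pr(0, 1) = Pr(1, 0) = p0 above.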
IMPORTANCE of ERROR-CORRECTING CODES for CRYPTOGRAPHY

In a good cryptosystem a change of a single bit of the cryptotext should change so many bits of the plaintext obtained from the cryptotext that the plaintext becomes incomprehensible. Methods to detect and correct errors when cryptotexts are transmitted are therefore much needed.

Many non-cryptographic applications also require error-correcting codes: for example, mobiles, CD players, ....

WHY WE NEED TO KEEP IMPROVING ERROR-CORRECTING CODES

When the error-correcting capabilities of some code are improved, that is, a better code is found, this has the following impacts:

■ For the same quality of the received information, the transmission system can operate in more severe conditions. For example:
1. It is possible to reduce the size of antennas or solar panels and the weight of batteries.
2. In space travel systems such savings can be measured in hundreds of thousands of dollars.
3. In mobile telephone systems, improving the code enables the operators to increase the potential number of users in each cell.
■ Another field of application of error-correcting codes is that of mass memories: computer hard drives, CD-ROMs, DVDs and so on.

REDUNDANCY - BASIC IDEA of ERROR CORRECTION

Details of the techniques used to protect information against noise in practice are sometimes rather complicated, but the basic principles are mostly easily understood.
The key idea is that, in order to protect a message against noise, we should encode the message by adding some redundant information to the message. This should be done in such a way that, even if the message is corrupted by noise, there will be enough redundancy in the encoded message to recover, or to decode, the message completely.

MAJORITY VOTING DECODING - BASIC IDEA

The basic idea of the so-called majority voting decoding principle, or maximal likelihood decoding principle, when a code C is used, is to decode a received message w' by the codeword w that is the closest codeword to w' in the whole set of the codewords of the given code C.

EXAMPLE

In case
(a) the encoding 0 → 000, 1 → 111 is used,
(b) the probability of a bit error is p < 1/2, and
(c) the following majority voting decoding is used: 000, 001, 010, 100 → 000 and 111, 110, 101, 011 → 111,
then the probability of an erroneous decoding (the case of 2 or 3 errors) is

3p^2(1 − p) + p^3 = 3p^2 − 2p^3,

which is smaller than p whenever p < 1/2.
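The triple-repetition code and its majority-vote decoder can be sketched as follows (an illustrative Python version; the helper names are mine):

```python
from collections import Counter

def encode(bit):
    """Repetition encoding: 0 -> 000, 1 -> 111."""
    return [bit] * 3

def decode(word):
    """Majority voting: decode to the bit occurring most often."""
    return Counter(word).most_common(1)[0][0]

def p_wrong(p):
    """Probability of erroneous decoding: exactly 2 or 3 bit errors."""
    return 3 * p**2 * (1 - p) + p**3   # = 3p^2 - 2p^3
```

For p = 0.1 this gives p_wrong(0.1) ≈ 0.028, an improvement over the raw bit-error probability 0.1.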

HAMMING DISTANCE

The intuitive concept of "closeness" of two words is well formalized through the Hamming distance h(x, y) of words x, y. For two words x, y:

h(x, y) = the number of symbols in which the words x and y differ.

Example: h(10101, 01100) = 3, h(fourth, eighth) = 4.

Properties of the Hamming distance:

1. h(x, y) = 0 ⟺ x = y
2. h(x, y) = h(y, x)
3. h(x, z) ≤ h(x, y) + h(y, z) (triangle inequality)

An important parameter of a code C is its minimal distance

h(C) = min{h(x, y) | x, y ∈ C, x ≠ y}.

Therefore, h(C) is the smallest number of errors that can change one codeword into another.

Basic error correcting theorem:

1. A code C can detect up to s errors if h(C) ≥ s + 1.
2. A code C can correct up to t errors if h(C) ≥ 2t + 1.

Proof: (1) Trivial. (2) Suppose h(C) ≥ 2t + 1. Let a codeword x be transmitted and a word y be received such that h(x, y) ≤ t. If x' ≠ x is any codeword, then h(y, x') ≥ t + 1, because otherwise h(y, x') < t + 1 and therefore

h(x, x') ≤ h(x, y) + h(y, x') < 2t + 1,

which contradicts the assumption h(C) ≥ 2t + 1.

BINARY SYMMETRIC CHANNEL

Consider a transmission of binary symbols such that each symbol has probability of error p < 1/2.

[Figure: the binary symmetric channel; 0 and 1 pass through unchanged with probability 1 − p and are flipped with probability p.]

If n symbols are transmitted, then the probability of t errors is

(n choose t) p^t (1 − p)^(n−t).
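The Hamming distance, the minimal distance of a code, and the resulting detection/correction capabilities translate directly into code (a short Python sketch of my own):

```python
def hamming_distance(x, y):
    """h(x, y): number of positions in which words x and y differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def min_distance(code):
    """h(C) = min{h(x, y) | x, y in C, x != y}."""
    words = sorted(code)
    return min(hamming_distance(x, y)
               for i, x in enumerate(words) for y in words[i + 1:])

def max_detected(code):
    """Largest s with h(C) >= s + 1."""
    return min_distance(code) - 1

def max_corrected(code):
    """Largest t with h(C) >= 2t + 1."""
    return (min_distance(code) - 1) // 2
```

For instance, hamming_distance('10101', '01100') == 3 and hamming_distance('fourth', 'eighth') == 4, as in the example above.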
In the case of binary symmetric channels, the "nearest neighbour decoding strategy" is also the "maximum likelihood decoding strategy".

SURPRISING POWER of PARITY BITS

Example: Let all 2^11 binary words of length 11 be codewords, let the probability of a bit error be p = 10^-8, and let bits be transmitted at the rate of 10^7 bits per second.

The probability that a word is transmitted incorrectly is approximately

11p(1 − p)^10 ≈ 11/10^8.

Therefore (10^7/11) · (11/10^8) = 0.1 of the words per second are transmitted incorrectly. Hence one wrong word is transmitted every 10 seconds, 360 erroneous words every hour and 8640 erroneous words every day without being detected!

Let now one parity bit be added. Any single error can be detected!!! The probability of at least two errors is:

1 − (1 − p)^12 − 12(1 − p)^11 p ≈ (12 choose 2)(1 − p)^10 p^2 ≈ 66/10^16.

Therefore, approximately (10^7/12) · (66/10^16) ≈ 5.5 · 10^-9 words per second are transmitted with an undetectable error.
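These figures can be reproduced numerically; the sketch below uses the same approximations as the slide (exactly one error for length 11, exactly two errors for length 12):

```python
from math import comb

p = 1e-8      # bit-error probability
rate = 1e7    # transmitted bits per second

# Without a parity bit: any single error in an 11-bit word goes unnoticed.
p_word_wrong = 11 * p * (1 - p)**10          # ~ 11/10^8
wrong_per_sec = (rate / 11) * p_word_wrong   # ~ 0.1 words per second

# With one parity bit (12-bit words) every single error is detected;
# undetected failures are dominated by the two-error case.
p_undetected = comb(12, 2) * p**2 * (1 - p)**10    # ~ 66/10^16
undetected_per_sec = (rate / 12) * p_undetected    # ~ 5.5e-9 words per second
days_per_undetected = 1 / (undetected_per_sec * 86400)   # ~ 2000 days
```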
Corollary: One undetected error occurs only once every roughly 2000 days! (2000 ≈ 10^9/(5.5 · 86400).)

NOTATIONS and EXAMPLES

Notation: An (n, M, d)-code C is a code such that

■ n is the length of codewords,
■ M is the number of codewords,
■ d is the minimum distance of two codewords in C.

Example:
C1 = {00, 01, 10, 11} is a (2,4,1)-code.
C2 = {000, 011, 101, 110} is a (3,4,2)-code.
C3 = {00000, 01101, 10110, 11011} is a (5,4,3)-code.
Comment: A good (n, M, d)-code has small n, large M and also large d.

EXAMPLES from DEEP SPACE TRAVELS

Examples (transmission of photographs from deep space):

■ In 1965-69 the Mariner 4-5 probes took the first photographs of another planet - 22 photos. Each photo was divided into 200 × 200 elementary squares - pixels. Each pixel was assigned 6 bits representing 64 levels of brightness, and the so-called Hadamard code was used. The transmission rate was 8.3 bits per second.
■ In 1970-72 Mariners 6-8 took photographs such that each picture was broken into 700 × 832 squares. The so-called Reed-Muller (32, 64, 16) code was used. The transmission rate was 16200 bits per second. (Much better quality pictures could be received.)

HADAMARD CODE

In Mariner 5, 6-bit pixels were encoded using the 32-bit long Hadamard code that could correct up to 7 errors.

The Hadamard code has 64 codewords. 32 of them are represented by the 32 × 32 matrix H = {h_ij}, where 0 ≤ i, j ≤ 31 and

h_ij = (−1)^(a0·b0 + a1·b1 + ... + a4·b4),

where i and j have the binary representations

i = a4 a3 a2 a1 a0,  j = b4 b3 b2 b1 b0.

The remaining 32 codewords are represented by the matrix −H. Decoding was quite simple.

CODES RATES

For a q-nary (n, M, d)-code C we define the code rate, or information rate, R_c, by

R_c = log_q M / n.

The code rate represents the ratio of the number of needed input data symbols to the number of transmitted code symbols. If a q-nary code has code rate R, then we say that it transmits R q-symbols per channel use; in the case of the binary alphabet, R is the number of bits per channel use.
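The 64 Hadamard codewords can be generated directly from the formula for h_ij. This Python sketch (mine, not from the slides) represents +1 as bit 0 and −1 as bit 1, and confirms the minimum distance of 16, i.e. correction of up to 7 errors:

```python
def hadamard_code():
    """64 codewords of length 32: the rows of H and of -H as 0/1 words."""
    def row(i):
        # h_ij = (-1)^(a0*b0 + ... + a4*b4) with i = a4...a0, j = b4...b0;
        # the exponent is the number of common 1-bits of i and j, mod 2
        return tuple(bin(i & j).count('1') % 2 for j in range(32))
    h = [row(i) for i in range(32)]
    return h + [tuple(1 - b for b in r) for r in h]   # rows of -H

code = hadamard_code()
d = min(sum(a != b for a, b in zip(x, y))
        for i, x in enumerate(code) for y in code[i + 1:])
# d == 16, so up to (d - 1) // 2 == 7 errors can be corrected,
# and the code rate is log2(64)/32 = 6/32
```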
The code rate (6/32 for the Hadamard code) is an important parameter for real implementations, because it shows what fraction of the communication bandwidth is being used to transmit actual data.

The ISBN-code I

Each book till 1.1.2007 had an International Standard Book Number, which was a 10-digit codeword produced by the publisher with the following structure:

l  p   m       w        = x1 ... x10
language - publisher - number - weighted check sum
0  07  709503  0

such that

Σ_{i=1}^{10} (11 - i) x_i ≡ 0 (mod 11).

The publisher has to put x10 = X if x10 is to be 10.

The ISBN code was designed to detect: (a) any single error; (b) any double error created by a transposition.
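The check condition above is easy to test directly. A minimal sketch (the helper name isbn10_valid is ours), using the book number shown on the slide:

```python
def isbn10_valid(isbn: str) -> bool:
    """Check the ISBN-10 condition: the sum of (11 - i) * x_i over the
    ten digits x_1..x_10 must be divisible by 11 ('X' stands for 10)."""
    digits = [10 if c == 'X' else int(c) for c in isbn if c not in '- ']
    if len(digits) != 10:
        return False
    return sum((11 - i) * x for i, x in enumerate(digits, start=1)) % 11 == 0

isbn10_valid("0-07-709503-0")   # True: the number from the slide
isbn10_valid("0-07-709603-0")   # False: a single-digit error is detected
isbn10_valid("0-07-790503-0")   # False: an adjacent transposition is detected
```

A transposition of adjacent digits x_i, x_{i+1} changes the weighted sum by x_{i+1} - x_i, which is non-zero mod 11 whenever the digits differ - this is why (b) holds.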
EQUIVALENCE of CODES

Definition Two q-ary codes are called equivalent if one can be obtained from the other by a combination of operations of the following type:
(a) a permutation of the positions of the code;
(b) a permutation of the symbols appearing in a fixed position.

Question: Let a code be displayed as an M x n matrix. To what do operations (a) and (b) correspond?

Claim: Distances between codewords are unchanged by operations (a) and (b). Consequently, equivalent codes have the same parameters (n, M, d) (and correct the same number of errors).
Examples of equivalent codes: (1) a pair of equivalent binary codes and (2) a pair of equivalent ternary codes were displayed here as matrices; the matrices are not legible in this copy.

Lemma Any q-ary (n, M, d)-code over an alphabet {0, 1, ..., q - 1} is equivalent to an (n, M, d)-code which contains the all-zero codeword 00...0.
Proof Trivial.
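The Claim can be checked mechanically on a toy code (the 4-word binary code below is our own illustration, not one from the slides): both operations leave the multiset of pairwise Hamming distances unchanged.

```python
from itertools import combinations

def hamming(u, v):
    """Hamming distance between two words of equal length."""
    return sum(a != b for a, b in zip(u, v))

def all_distances(code):
    """Sorted multiset of all pairwise distances in the code."""
    return sorted(hamming(u, v) for u, v in combinations(code, 2))

code = ["0011", "1010", "0110", "1101"]

# (a) permute the positions of the code (here: reverse their order)
perm_positions = [w[::-1] for w in code]

# (b) permute the symbols appearing in a fixed position
#     (here: swap 0 <-> 1 in position 0)
flip = {"0": "1", "1": "0"}
perm_symbols = [flip[w[0]] + w[1:] for w in code]

# both operations preserve all pairwise distances
all_distances(code) == all_distances(perm_positions) == all_distances(perm_symbols)
```

Operation (a) only reorders the coordinates being compared, and operation (b) is a bijection on symbols in one column, so it preserves per-position agreement - hence every distance is unchanged.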
THE MAIN CODING THEORY PROBLEM

A good (n, M, d)-code should have a small n, a large M and a large d. The main coding theory problem is to optimize one of the parameters n, M, d for given values of the other two.

Notation: A_q(n, d) is the largest M such that there is a q-nary (n, M, d)-code.

Theorem
(a) A_q(n, 1) = q^n;
(b) A_q(n, n) = q.

Proof
(a) The first claim is obvious.
(b) Let C be a q-nary (n, M, n)-code. Any two distinct codewords of C have to differ in all n positions. Hence the symbols in any fixed position of the M codewords have to be different. Therefore A_q(n, n) ≤ q.
Since the q-nary repetition code is an (n, q, n)-code, we get A_q(n, n) ≥ q, and so A_q(n, n) = q.

A SPHERE and its VOLUME

Notation: F_q^n is the set of all words of length n over the alphabet {0, 1, 2, ..., q - 1}.

Definition For any word u ∈ F_q^n and any integer r ≥ 0, the sphere of radius r and centre u is denoted by

S(u, r) = {v ∈ F_q^n | h(u, v) ≤ r}.

Theorem A sphere of radius r in F_q^n, 0 ≤ r ≤ n, contains

C(n,0) + C(n,1)(q - 1) + C(n,2)(q - 1)^2 + ··· + C(n,r)(q - 1)^r

words.
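The sphere-volume formula above can be confirmed against a brute-force count over all of F_q^n (a minimal sketch; the function names are ours):

```python
from itertools import product
from math import comb

def sphere_volume(n, q, r):
    """Number of words in a sphere of radius r in F_q^n:
    sum of C(n, i) * (q - 1)^i for i = 0..r."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def sphere_volume_brute(n, q, r, centre):
    """Count the words of F_q^n within Hamming distance r of `centre`."""
    return sum(1 for v in product(range(q), repeat=n)
               if sum(a != b for a, b in zip(centre, v)) <= r)

sphere_volume(5, 3, 2)                          # C(5,0)+C(5,1)*2+C(5,2)*4 = 51
sphere_volume_brute(5, 3, 2, (0, 0, 0, 0, 0))   # 51
```

The i-th term counts the words differing from the centre in exactly i positions: C(n, i) choices of positions, and q - 1 wrong symbols for each chosen position.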
BASIC PROPERTIES of LINEAR CODES

Notation: Let w(x) (the weight of x) denote the number of non-zero entries of x.

Lemma If x, y ∈ F_q^n, then h(x, y) = w(x - y).
Proof x - y has non-zero entries in exactly those positions where x and y differ.

Theorem Let C be a linear code and let the weight of C, notation w(C), be the smallest of the weights of the non-zero codewords of C. Then h(C) = w(C).
Proof There are x, y ∈ C such that h(C) = h(x, y). Hence h(C) = w(x - y) ≥ w(C). On the other hand, for some x ∈ C, w(C) = w(x) = h(x, 0) ≥ h(C).

Consequence
■ If C is a non-linear code with m codewords, then in order to determine h(C) one has to make, in general, C(m,2) = Θ(m²) comparisons in the worst case.
■ If C is a linear code with m codewords, then in order to determine h(C), m - 1 comparisons are enough.
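The Consequence can be illustrated on a small, hypothetical binary linear code (the 2-row basis below is our own example): Θ(m²) pairwise distance computations versus m - 1 weight computations give the same minimum distance.

```python
from itertools import combinations, product

def hamming(u, v):
    """Hamming distance between two words of equal length."""
    return sum(a != b for a, b in zip(u, v))

def weight(x):
    """Number of non-zero entries of x."""
    return sum(c != 0 for c in x)

# A hypothetical small binary linear code: the GF(2)-span of two basis vectors.
basis = [(1, 0, 1, 1, 0), (0, 1, 0, 1, 1)]
code = {tuple(sum(b * g[i] for b, g in zip(coeffs, basis)) % 2
              for i in range(5))
        for coeffs in product([0, 1], repeat=2)}

# Generic method: compare all pairs - C(m, 2) distance computations.
d_pairs = min(hamming(u, v) for u, v in combinations(code, 2))
# Linear-code shortcut: m - 1 weight computations suffice, since h(C) = w(C).
d_weight = min(weight(x) for x in code if any(x))

d_pairs == d_weight   # True
```

For a linear code the differences x - y are themselves codewords, which is exactly why scanning the non-zero weights is enough.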
BASIC PROPERTIES of LINEAR CODES II

If C is a linear [n, k]-code, then it has many bases Γ consisting of k codewords, and each codeword of C is a linear combination of the codewords from any Γ.

Example The code
C4 = {0000000, 1111111, 1000101, 1100010, 0110001, 1011000, 0101100, 0010110, 0001011, 0111010, 0011101, 1001110, 0100111, 1010011, 1101001, 1110100}
has, as one of its bases, the set {1111111, 1000101, 1100010, 0110001}.

How many different bases does a linear code have?
Theorem A binary linear code of dimension k has

(1/k!) · Π_{i=0}^{k-1} (2^k - 2^i)

bases.

If a code C has 2^200 codewords, then there is no way to write down and/or store all its codewords. WHY? However, if C is a linear code with 2^200 codewords - that is, a linear code of dimension 200 - then to specify/store C fully we need only store 200 codewords - those of one of its bases.
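The counting formula above can be sketched and brute-force checked for a tiny case (the 3-bit basis below is a made-up example): the product counts ordered linearly independent k-tuples, and dividing by k! forgets the ordering.

```python
from itertools import combinations, product
from math import factorial

def num_bases(k):
    """Number of bases of a binary linear code of dimension k:
    (product over i = 0..k-1 of (2^k - 2^i)) / k!."""
    ordered = 1
    for i in range(k):
        # the (i+1)-th basis vector must avoid the 2^i-element span so far
        ordered *= 2 ** k - 2 ** i
    return ordered // factorial(k)

def span(vectors, n):
    """GF(2)-span of the given vectors of length n."""
    return {tuple(sum(c * v[i] for c, v in zip(coeffs, vectors)) % 2
                  for i in range(n))
            for coeffs in product([0, 1], repeat=len(vectors))}

# Brute-force check for k = 2: a hypothetical 4-codeword code and the
# number of 2-subsets of its nonzero codewords that span it.
code = span([(1, 0, 1), (0, 1, 1)], 3)
nonzero = [c for c in code if any(c)]
count = sum(1 for pair in combinations(nonzero, 2)
            if span(list(pair), 3) == code)

num_bases(2) == count   # both equal 3
```

For the [7, 4]-code C4 from the slides, num_bases(4) gives (15 · 14 · 12 · 8) / 24 = 840 bases.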
ADVANTAGES and DISADVANTAGES of LINEAR CODES I

Advantages are big:
1. The minimal distance h(C) is easy to compute if C is a linear code.
2. Linear codes have simple specifications. To specify a non-linear code, usually all codewords have to be listed; to specify a linear [n, k]-code it is enough to list k codewords (of a basis).

Example One of the generator matrices of the binary code

C2 = {000, 011, 101, 110}

is the matrix

(0 1 1)
(1 0 1)

and one of the generator matrices of the code C4 is

(1 1 1 1 1 1 1)
(1 0 0 0 1 0 1)
(1 1 0 0 0 1 0)
(0 1 1 0 0 0 1)
3. There are simple encoding/decoding procedures for linear codes.

ADVANTAGES and DISADVANTAGES of LINEAR CODES II

Disadvantages of linear codes are small:
1. Linear q-ary codes are not defined unless q is a power of a prime.
2. The restriction to linear codes might be a restriction to weaker codes than sometimes desired.

EQUIVALENCE of LINEAR CODES I

Definition Two linear codes over GF(q) are called equivalent if one can be obtained from the other by the following operations:
(a) permutation of the words or positions of the code;
(b) multiplication of the symbols appearing in a fixed position by a non-zero scalar.
Theorem Two k x n matrices generate equivalent linear [n, k]-codes over F_q if one matrix can be obtained from the other by a sequence of the following operations:
(a) permutation of the rows;
(b) multiplication of a row by a non-zero scalar;
(c) addition of one row to another;
(d) permutation of the columns;
(e) multiplication of a column by a non-zero scalar.
Proof Operations (a)-(c) just replace one basis by another; the last two operations convert a generator matrix to one of an equivalent code.

EQUIVALENCE of LINEAR CODES II

Theorem Let G be a generator matrix of an [n, k]-code. The rows of G are then linearly independent. By operations (a)-(e) the matrix G can be transformed into the form [I_k | A], where I_k is the k x k identity matrix and A is a k x (n - k) matrix.

Example The generator matrix of the code C4 is transformed into this form:

(1 1 1 1 1 1 1)       (1 0 0 0 1 0 1)
(1 0 0 0 1 0 1)  -->  (0 1 0 0 1 1 1)
(1 1 0 0 0 1 0)       (0 0 1 0 1 1 0)
(0 1 1 0 0 0 1)       (0 0 0 1 0 1 1)
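The transformation in the example can be sketched as Gaussian elimination over GF(2). The sketch below (function name ours) applies only operations (a) and (c) - row swaps and row additions - which happen to suffice for the matrix of C4; in general the column operations (d)-(e) may also be needed.

```python
def standard_form(G):
    """Transform a binary generator matrix into [I_k | A] form by
    Gaussian elimination over GF(2), using row swaps (operation (a))
    and row additions (operation (c)). Assumes a pivot can always be
    found without column swaps, as for the matrix of C4."""
    G = [row[:] for row in G]
    k = len(G)
    for r in range(k):
        # find a row at or below r with a 1 in column r, and swap it up
        pivot = next(i for i in range(r, k) if G[i][r] == 1)
        G[r], G[pivot] = G[pivot], G[r]
        # clear column r in every other row by adding row r (mod 2)
        for i in range(k):
            if i != r and G[i][r] == 1:
                G[i] = [(a + b) % 2 for a, b in zip(G[i], G[r])]
    return G

G = [[1, 1, 1, 1, 1, 1, 1],
     [1, 0, 0, 0, 1, 0, 1],
     [1, 1, 0, 0, 0, 1, 0],
     [0, 1, 1, 0, 0, 0, 1]]

standard_form(G)
# [[1, 0, 0, 0, 1, 0, 1],
#  [0, 1, 0, 0, 1, 1, 1],
#  [0, 0, 1, 0, 1, 1, 0],
#  [0, 0, 0, 1, 0, 1, 1]]
```

Since only row operations are used here, the resulting matrix generates exactly the same code C4, not merely an equivalent one.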
ENCODING with LINEAR CODES

Encoding is a vector x matrix multiplication.

Let C be a linear [n, k]-code over F_q with a generator k x n matrix G.

Theorem C has q^k codewords.
Proof This follows from the fact that each codeword of C can be expressed uniquely as a linear combination of the basis codewords/vectors.

Corollary The code C can be used to encode uniquely q^k messages. (Let us identify messages with elements of F_q^k.)
Encoding of a message u = (u1, ..., uk) using the generator matrix G:

u · G = Σ_{i=1}^{k} u_i r_i,   where r1, ..., rk are the rows of G.
Example Let C be a [7, 4]-code with the generator matrix

    (1 0 0 0 1 0 1)
G = (0 1 0 0 1 1 1)
    (0 0 1 0 1 1 0)
    (0 0 0 1 0 1 1)

A message (u1, u2, u3, u4) is encoded as: ???
For example: 0 0 0 0 is encoded as? 1 0 0 0 is encoded as? 1 1 1 0 is encoded as?
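The encoding u · G over GF(2) can be sketched in a few lines (function name ours) and used to check the encodings asked about above:

```python
def encode(u, G):
    """Encode message u as u . G over GF(2): the mod-2 sum of the rows
    of G selected by the nonzero coordinates of u."""
    n = len(G[0])
    return tuple(sum(u_i * row[j] for u_i, row in zip(u, G)) % 2
                 for j in range(n))

G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]

encode((0, 0, 0, 0), G)  # (0, 0, 0, 0, 0, 0, 0)
encode((1, 0, 0, 0), G)  # (1, 0, 0, 0, 1, 0, 1)
encode((1, 1, 1, 0), G)  # (1, 1, 1, 0, 1, 0, 0)
```

Because G has the form [I_4 | A], the first four symbols of each codeword reproduce the message itself and the last three are parity symbols.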
Example Let C be a [7,4]-code with the generator matrix

G = 1 0 0 0 1 0 1
    0 1 0 0 1 1 1
    0 0 1 0 1 1 0
    0 0 0 1 0 1 1

A message (u_1, u_2, u_3, u_4) is encoded as (u_1, u_2, u_3, u_4) . G. For example:

0 0 0 0 is encoded as 0000000
1 0 0 0 is encoded as 1000101
1 1 1 0 is encoded as 1110100

UNIQUENESS of ENCODING with linear codes

Theorem If G is a generator matrix, with rows w_1, ..., w_k, of a binary linear code C of length n and dimension k, then v = uG ranges over all 2^k codewords of C as u ranges over all 2^k binary words of length k. Therefore

C = {uG | u in {0,1}^k}.

Moreover, u_1 G = u_2 G if and only if u_1 = u_2.
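As a sanity check, a short sketch (not part of the slides) that carries out the encoding u.G over GF(2) for the [7,4]-code of the example; `encode` is a hypothetical helper name:

```python
# Sketch: encoding u -> u.G over GF(2) for the [7,4]-code example.
from itertools import product

G = [  # generator matrix from the example
    [1, 0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 1, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 0, 1, 1],
]

def encode(u, G):
    """Encode message u as the GF(2) linear combination u_1*r_1 + ... + u_k*r_k of the rows of G."""
    n = len(G[0])
    return [sum(u[i] * G[i][j] for i in range(len(u))) % 2 for j in range(n)]

print(encode([0, 0, 0, 0], G))  # [0, 0, 0, 0, 0, 0, 0]
print(encode([1, 0, 0, 0], G))  # [1, 0, 0, 0, 1, 0, 1]
print(encode([1, 1, 1, 0], G))  # [1, 1, 1, 0, 1, 0, 0]

# All distinct codewords, confirming |C| = q^k = 2^4 = 16:
codewords = {tuple(encode(list(u), G)) for u in product([0, 1], repeat=4)}
print(len(codewords))  # 16
```

The set comprehension at the end also illustrates the uniqueness theorem: 16 distinct messages yield 16 distinct codewords.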
APPENDIX II.

EFFICIENT TRANSMISSION of INFORMATION

Important problems of information theory are how to define formally such concepts as information and how to store or transmit information efficiently.

Let X be a random variable (source) which takes any value x with probability p(x). The entropy of X is defined by

S(X) = - sum_x p(x) lg p(x)

and it is considered to be the information content of X.

For a binary variable X which takes on the value 1 with probability p and the value 0 with probability 1 - p, the information content of X is

S(X) = H(p) = -p lg p - (1 - p) lg(1 - p)

Problem: What is the minimal number of bits needed to transmit n values of X?

Basic idea: Encode more (less) probable outputs of X by shorter (longer) binary words.

Example (Morse code, 1838)

a .-    b -...  c -.-.  d -..   e .     f ..-.  g --.   h ....  i ..
j .---  k -.-   l .-..  m --    n -.    o ---   p .--.  q --.-  r .-.
s ...   t -     u ..-   v ...-  w .--   x -..-  y -.--  z --..

Notation: lg (ln) [log] will be used for binary (natural) [decimal] logarithms.
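The definitions above can be checked numerically. A minimal sketch (not from the slides; `entropy` and `H` are names chosen here):

```python
# Sketch: Shannon entropy S(X) and the binary entropy function H(p).
from math import log2

def entropy(ps):
    """S(X) = -sum_x p(x) lg p(x), skipping zero-probability values."""
    return -sum(p * log2(p) for p in ps if p > 0)

def H(p):
    """Binary entropy H(p) = -p lg p - (1-p) lg(1-p)."""
    return entropy([p, 1 - p])

print(H(0.5))             # 1.0  (a fair coin flip carries one full bit)
print(round(H(0.25), 4))  # 0.8113  (a biased source carries less than a bit)
```

A skewed source (p = 1/4) thus needs fewer than one bit per value on average, which is exactly what good source codes exploit.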
SHANNON'S NOISELESS CODING THEOREM

Shannon's noiseless coding theorem says that in order to transmit n values of X, we need, and it is sufficient, to use nS(X) bits.

More exactly, we cannot do better than the bound nS(X) says, and we can reach the bound nS(X) as closely as desired.

Example: Let a source X produce the value 1 with probability p = 1/4 and the value 0 with probability 1 - p = 3/4. Assume we want to encode blocks of the outputs of X of length 4. By Shannon's theorem we need 4H(1/4) = 3.245 bits per block (on average).

A simple and practical method, known as the Huffman code, requires in this case 3.273 bits per 4-bit message:

mess. code      mess. code      mess. code      mess. code
0000  10        0100  010       1000  011       1100  11101
0001  000       0101  11001     1001  11011     1101  111110
0010  001       0110  11010     1010  11100     1110  111101
0011  11000     0111  1111000   1011  111111    1111  1111001

Observe that this is a prefix code - no codeword is a prefix of another codeword.
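Both claims about the table, the prefix property and the 3.273-bit average, can be verified mechanically. A sketch (not from the slides):

```python
# Sketch: check the Huffman table above - prefix property and the
# average of 3.273 bits per 4-bit message for p = 1/4.
code = {
    "0000": "10",    "0100": "010",     "1000": "011",    "1100": "11101",
    "0001": "000",   "0101": "11001",   "1001": "11011",  "1101": "111110",
    "0010": "001",   "0110": "11010",   "1010": "11100",  "1110": "111101",
    "0011": "11000", "0111": "1111000", "1011": "111111", "1111": "1111001",
}

# Prefix property: no codeword is a proper prefix of another.
words = list(code.values())
assert not any(a != b and b.startswith(a) for a in words for b in words)

# Average codeword length, weighting each 4-bit message by
# (1/4)^(#ones) * (3/4)^(#zeros).
p = 1 / 4
avg = sum((p ** m.count("1")) * ((1 - p) ** m.count("0")) * len(c)
          for m, c in code.items())
print(round(avg, 3))  # 3.273
```

The exact value is 838/256 = 3.2734..., just above the Shannon bound of 3.245 bits.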
DESIGN of HUFFMAN CODE I

Given a sequence of n objects x_1, ..., x_n with probabilities p_1 >= ... >= p_n.

Stage 1 - shrinking of the sequence.
■ Replace x_{n-1}, x_n with a new object y_{n-1} with probability p_{n-1} + p_n, and rearrange the sequence so that one has again non-increasing probabilities.
■ Keep doing the above step till the sequence shrinks to two objects.

[Figure: the example sequence of probabilities .50 .15 .12 .10 .04 .04 .03 .02 is shrunk step by step down to the two objects .50 .50]

DESIGN of HUFFMAN CODE II

Stage 2 - extending the code. Apply again and again the following method:

If C = {c_1, ..., c_r} is a prefix optimal code for a source S_r, then C' = {c'_1, ..., c'_{r+1}} is an optimal code for S_{r+1}, where

c'_i = c_i for 1 <= i <= r - 1,
c'_r = c_r 1,
c'_{r+1} = c_r 0.

For the example above, Stage 2 yields the codewords

0.50 - 1      0.15 - 011    0.12 - 001    0.10 - 000
0.04 - 01011  0.04 - 01010  0.03 - 01001  0.02 - 01000

(with the intermediate codewords 0.50 - 0, 0.28 - 01, 0.22 - 00, 0.13 - 010, 0.08 - 0101, 0.05 - 0100).
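Stage 1 can be sketched with a priority queue: repeatedly merge the two least probable objects; every merge pushes its members one level deeper in the code tree, which yields the codeword lengths directly. A sketch under these assumptions (not the slides' own formulation; `huffman_lengths` is a name chosen here):

```python
# Sketch: Huffman codeword lengths via repeated merging of the two
# least probable objects (Stage 1 as a priority queue).
import heapq

def huffman_lengths(probs):
    """Return the Huffman codeword length for each input probability."""
    depth = [0] * len(probs)
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:   # members of the merged object go one level deeper
            depth[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return depth

probs = [.50, .15, .12, .10, .04, .04, .03, .02]   # the example above
print(huffman_lengths(probs))  # [1, 3, 3, 3, 5, 5, 5, 5]
```

The lengths match the codewords listed above (1, 011, 001, 000, 01011, 01010, 01001, 01000).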
A BIT OF HISTORY I

The subject of error-correcting codes arose originally as a response to practical problems in the reliable communication of digitally encoded information.

The discipline was initiated in the paper

Claude Shannon: A mathematical theory of communication, Bell Syst. Tech. Journal, V27, 1948, 379-423, 623-656.

Shannon's paper started the scientific discipline information theory, and error-correcting codes are a part of it. Originally, information theory was a part of electrical engineering. Nowadays, it is an important part of mathematics and also of informatics.

APPENDIX II - ENTROPY - basics - I

The concept of ENTROPY is one of the most basic and important in modern science, especially in physics, mathematics and information theory.

So-called physical entropy is a measure of the unavailable energy in a closed thermodynamic system (that is usually considered to be a measure of the system's disorder). Entropy of an object is a measure of the amount of energy in the object which is unable to do work. Entropy is also a measure of the number of possible arrangements the atoms of a system can have.

Information entropy

So-called information entropy is a measure of uncertainty and randomness.

Example If we have a process (a random variable) X producing the values 0 and 1, both with probability 1/2, then we are completely uncertain what the next value produced by the process will be. On the other hand, if we have a process (random variable) Y producing the value 0 with probability 1/4 and the value 1 with probability 3/4, then we are more certain that the next value of the process will be 1 than 0.

History Rudolf Clausius coined the term entropy in 1865.
A BIT OF HISTORY II - SHANNON'S VIEW

In the introduction to his seminal paper "A mathematical theory of communication" Shannon wrote:

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.

APPENDIX III. SOFT DECODING

HARD VERSUS SOFT DECODING I

At the beginning of this chapter the process encoding-channel transmission-decoding was illustrated as follows:

[Figure: source - message w - encoding - codeword C(w) - noisy channel - codeword C'(w) - decoding - w - user]

In that process a binary message is at first encoded into a binary codeword, then transmitted through a noisy channel, and, finally, the decoder receives, for decoding, a potentially erroneous binary message and makes an error correction.

This is a simplified view of the whole process. In practice the whole process looks quite different.

HARD versus SOFT DECODING II

Here is a more realistic view of the whole encoding-transmission-decoding process:

[Figure: message w - digital/digital encoder - digital/analogue encoder - noisy channel - analogue/digital decoder - digital/digital decoder]

that is
■ a binary message is at first transferred to a binary codeword;
■ the binary codeword is then transferred to an analogue signal;
■ the analogue signal is then transmitted through a noisy channel;
■ the received analogue signal is then transferred to a binary form that can be used for decoding; and, finally,
■ decoding takes place.

In case the noisy analogue signal is transferred to a binary signal before decoding, we talk about hard decoding.

In case the output of the analogue-to-digital decoding is a pair (p_b, b), where p_b is the probability that the output is the bit b (or a weight of such a binary output, often given by a number from an interval (-V_max, V_max)), we talk about soft decoding.

HARD versus SOFT DECODING - III.
In order to deal with such a more general model of transmission and soft decoding, it is common to use, instead of the binary symbols 0 and 1, the so-called antipodal binary symbols +1 and -1, represented electronically by the voltages +1 and -1.

[Figure: a transmission channel with analogue antipodal signals - the histogram of received values shows two noisy peaks around -1 and +1]

A very important case in practice, especially for space communication, is the so-called additive white Gaussian noise (AWGN), and a channel with such noise is called a Gaussian channel.

HARD versus SOFT DECODING - COMMENTS

When the signal received by the decoder comes from a device capable of producing estimations of an analogue nature on the binary transmitted data, the error-correction capability of the decoder can be greatly improved.

Since the decoder has in such a case information about the reliability of the received data, decoding on the basis of finding the codeword with minimal Hamming distance does not have to be optimal, and the optimal decoding may depend on the type of noise involved.

For example, in the important practical case of white Gaussian noise, maximum likelihood decoding searches for the codeword with minimal Euclidean distance.
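The difference between the two approaches can be seen on a tiny illustration (not from the slides) with the 3-bit repetition code and antipodal symbols, bit 0 -> +1, bit 1 -> -1; the function names are chosen here:

```python
# Illustration: hard vs soft decoding of the 3-bit repetition code
# with antipodal symbols (bit 0 -> +1, bit 1 -> -1).

def hard_decode(received):
    """Threshold each analogue sample first, then take a majority vote."""
    bits = [0 if r > 0 else 1 for r in received]
    return max(set(bits), key=bits.count)

def soft_decode(received):
    """Pick the bit whose antipodal codeword has minimal squared Euclidean distance."""
    antipodal = {0: (1, 1, 1), 1: (-1, -1, -1)}
    return min(antipodal,
               key=lambda b: sum((r - c) ** 2 for r, c in zip(received, antipodal[b])))

r = (0.9, -0.2, -0.3)   # a noisy reception of +1 +1 +1 (bit 0)
print(hard_decode(r))   # 1 - two weak samples fall below threshold, majority vote fails
print(soft_decode(r))   # 0 - the confident +0.9 sample outweighs two weak negatives
```

The hard decoder throws away the reliability information (0.9 is a confident sample, -0.2 and -0.3 are not) and decodes incorrectly, while the soft decoder recovers the transmitted bit.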
BASIC FAMILIES of CODES

Two basic families of codes are:

Block codes, also called algebraic codes, which are appropriate for encoding blocks of data of the same length, independent of one another. Their encoders often have a huge number of internal states, and decoding algorithms are based on techniques specific for each code.

Stream codes, also called convolutional codes, which are used to protect continuous flows of data. Their encoders often have only a small number of internal states, and their decoders can then use a complete representation of the states by means of so-called trellises, iterative approaches via several simple decoders, and an exchange of information of a probabilistic nature.

Hard decoding is used mainly for block codes and soft decoding for stream codes. However, the distinctions between these two families of codes are tending to blur.

STORY of MORSE TELEGRAPH - I.

■ In 1825 William Sturgeon invented the electromagnet and showed that, using electricity, one can make a bell ring far away.
■ The first telegraph was designed by Charles Wheatstone, who demonstrated it at a distance of 2.4 km.
■ After the US Congress approved 30,000 $ on 3.3.1843 for building a telegraph connection between Washington and Baltimore, the line was built fast, and already on 24.5.1844 the first telegraph message was sent: "What Hath God Wrought".

■ The era of the Morse telegraph ended on 26.1.2006, when the main telegraph company in the US, Western Union, announced the cancelation of all telegraph services.

IV054 1. Basics of coding theory and linear codes 73/77

STORY of MORSE TELEGRAPH - II.

In his telegraphs Morse used the following two-character audio alphabet:

■ TIT or dot — a short tone lasting four hundredths of a second;
■ TAT or dash — a long tone lasting twelve hundredths of a second.
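The two tone lengths above can be sketched in a few lines of Python. This is only an illustration of the slide's timing; the function name is mine, and it ignores the gaps between tones, which the slide does not specify.

```python
# Durations of Morse's two audio symbols, per the slide:
# dot (TIT) = 4/100 s, dash (TAT) = 12/100 s.
DOT, DASH = 0.04, 0.12

def tone_duration(signal: str) -> float:
    """Total tone time of a dot/dash string such as '.-'
    (inter-tone gaps are not modeled)."""
    return sum(DOT if s == '.' else DASH for s in signal)

# 'A' in Morse code is dot-dash: 0.04 s + 0.12 s
print(round(tone_duration('.-'), 2))  # 0.16
```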
Morse could have called these tones 0 and 1.

The binary elements 0 and 1 were first called bits by J. W. Tukey in the 1940s.

IV054 1. Basics of coding theory and linear codes 74/77

The ISBN-code I

Each book till 1.1.2007 had an International Standard Book Number, which was a 10-digit codeword produced by the publisher with the following structure:

l - p - m - w = x_1 ... x_10
language - publisher - number - weighted check sum
0 - 07 - 709503 - x_10

such that

∑_{i=1}^{10} (11 - i) x_i ≡ 0 (mod 11)

The publisher has to put x_10 = X if x_10 is to be 10.
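The check-digit rule above can be sketched as follows. The function name is illustrative; the example number 0-07-709503 is the one from the slide, with its check digit computed from the formula rather than copied from the slide.

```python
def isbn10_check_digit(first9: str) -> str:
    """Compute x_10 so that sum((11 - i) * x_i) ≡ 0 (mod 11),
    i = 1..10; the value 10 is written as 'X'."""
    s = sum((11 - i) * int(d) for i, d in enumerate(first9, start=1))
    x10 = (-s) % 11  # the weight of x_10 is (11 - 10) = 1
    return 'X' if x10 == 10 else str(x10)

# The slide's example, language 0, publisher 07, number 709503:
print(isbn10_check_digit('007709503'))  # 0
```

Here the weighted sum of the first nine digits is 176 = 16 · 11 ≡ 0 (mod 11), so the check digit is 0.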
The ISBN code was designed to detect:
(a) any single error;
(b) any double error created by a transposition.

Single error detection

Let X = x_1 ... x_10 be a correct code and let Y = x_1 ... x_{j-1} y_j x_{j+1} ... x_10 with y_j = x_j + a, a ≠ 0.

In such a case:

∑_{i=1}^{10} (11 - i) y_i = ∑_{i=1}^{10} (11 - i) x_i + (11 - j) a ≢ 0 (mod 11)

IV054 1. Basics of coding theory and linear codes 75/77

The ISBN-code II
Transposition detection

Let x_j and x_k be exchanged. Then

∑_{i=1}^{10} (11 - i) y_i = ∑_{i=1}^{10} (11 - i) x_i + (k - j) x_j + (j - k) x_k ≡ (k - j)(x_j - x_k) ≢ 0 (mod 11)

if k ≠ j and x_j ≠ x_k.

IV054 1. Basics of coding theory and linear codes 76/77

New ISBN code

Starting 1.1.2007, instead of the 10-digit ISBN code a 13-digit ISBN code is being used. The new ISBN number can be obtained from the old one by preceding the old code with the three digits 978 (and recomputing the check digit).

For details about the 13-digit ISBN see https://en.wikipedia.org/wiki/International_Standard_Book_Number

IV054 1. Basics of coding theory and linear codes 77/77
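Both detection properties of the 10-digit, modulus-11 check described earlier can be verified concretely. This is a small sketch, with illustrative function names; the valid codeword is the slide's example with its computed check digit.

```python
def isbn10_checksum_ok(code: str) -> bool:
    """True iff sum((11 - i) * x_i) ≡ 0 (mod 11) over the 10 digits;
    an 'X' in the last position stands for the value 10."""
    digits = [10 if c == 'X' else int(c) for c in code]
    return sum((11 - i) * x for i, x in enumerate(digits, start=1)) % 11 == 0

valid = '0077095030'       # 0-07-709503 with check digit 0
assert isbn10_checksum_ok(valid)

# (a) a single-digit error is detected:
corrupted = '0977095030'   # x_2 changed from 0 to 9
assert not isbn10_checksum_ok(corrupted)

# (b) a transposition of two distinct digits is detected:
swapped = '7007095030'     # x_1 and x_3 exchanged
assert not isbn10_checksum_ok(swapped)
```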