In computer science, a block code is a type of channel coding. It adds redundancy to a message so that, at the receiver, the message can be decoded with a minimal (theoretically zero) probability of error, provided that the information rate (the amount of transported information in bits per second) does not exceed the channel capacity.
The main characterization of a block code is that it is a fixed-length channel code (unlike source coding schemes such as Huffman coding, and unlike channel coding methods such as convolutional encoding). Typically, a block code takes a k-digit information word and transforms it into an n-digit codeword.
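As a minimal sketch of the k-digit-in, n-digit-out idea, the following uses a single even-parity check bit (a deliberately simple block code, chosen here only for illustration; the function name is hypothetical):

```python
def encode(info_bits):
    # Append a single even-parity bit: a k-digit information word
    # becomes an n = k + 1 digit codeword.
    return info_bits + [sum(info_bits) % 2]

print(encode([1, 0, 1, 1]))  # [1, 0, 1, 1, 1]
```

Every input of the same length k maps to a codeword of the same length n, which is exactly the fixed-length property that distinguishes block codes from variable-length schemes such as Huffman coding.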
Block coding was the primary type of channel coding used in earlier mobile communication systems.
A block code is a code which encodes strings formed from an alphabet set S into codewords by encoding each letter of S separately. Let (m1, m2, ..., mk) be a sequence of natural numbers, each less than |S|. If the alphabet is written as S = {s0, s1, ..., s(|S|-1)} and a particular word W is written as W = s_m1 s_m2 ... s_mk, then the codeword corresponding to W, namely C(W), is

    C(W) = C(s_m1) C(s_m2) ... C(s_mk).
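The letter-by-letter encoding rule above can be sketched as follows; the alphabet S and the codeword table C are hypothetical, chosen arbitrarily for illustration:

```python
# A toy block code over the alphabet S = {'a', 'b', 'c'}.
# Each letter gets a fixed 2-bit codeword.
C = {'a': '00', 'b': '01', 'c': '10'}

def encode_word(word):
    # Encode each letter separately and concatenate:
    # C(W) = C(s_m1) C(s_m2) ... C(s_mk)
    return ''.join(C[letter] for letter in word)

print(encode_word('cab'))  # '100001'
```

Because every letter's codeword has the same length, a k-letter word always produces a codeword of exactly 2k bits.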
The trade-off between efficiency (a large information rate) and correction capability can also be seen as the following problem: given a fixed codeword length and a fixed correction capability (represented by the minimum Hamming distance d), maximize the total number of codewords. For example, the Hamming distance between 1011101 and 1001001 is 2. A(n,d) denotes the maximum number of codewords for a given codeword length n and minimum Hamming distance d.
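A lower bound on A(n,d) can be found with a simple greedy "lexicode" construction, sketched below (the function names are assumptions for this sketch, not standard API). The greedy scan only guarantees a lower bound, but for small parameters it can attain the true value; for n = 5 and d = 3 it finds 4 codewords, matching the known value A(5,3) = 4:

```python
from itertools import product

def hamming(u, v):
    # Hamming distance: number of positions in which u and v differ.
    return sum(a != b for a, b in zip(u, v))

def greedy_code(n, d):
    # Scan all n-bit words in lexicographic order, keeping each word
    # that is at distance >= d from every word kept so far.
    # The size of the result is a lower bound on A(n, d).
    code = []
    for w in product((0, 1), repeat=n):
        if all(hamming(w, c) >= d for c in code):
            code.append(w)
    return code

print(len(greedy_code(5, 3)))  # 4
```

For n = 7 and d = 3 the same greedy scan produces 16 codewords, i.e. it recovers a code with the parameters of the [7,4] Hamming code.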
When C is a binary block code consisting of A codewords of length n bits, the information rate of C is defined as

    R = log2(A) / n.

In particular, if the first k bits of a codeword are independent information bits, then A = 2^k and the information rate is

    R = k / n.
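The two forms of the rate formula can be checked directly; this is a minimal sketch, with the function name chosen for illustration:

```python
from math import log2

def information_rate(num_codewords, n):
    # R = log2(A) / n for a binary block code with A codewords of length n.
    return log2(num_codewords) / n

# A systematic code with k independent information bits has A = 2**k
# codewords, so the rate reduces to k / n.
print(information_rate(2**4, 7))  # 4/7, for a code with k = 4, n = 7
```

The log2(A)/n form is the more general one: it applies even when the codewords are not systematic and no k information bits can be singled out.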
Block codes are tied to the sphere packing problem, which has received some attention over the years. In two dimensions it is easy to visualize: take a bunch of pennies flat on the table and push them together, and the result is a hexagonal pattern like a bee's nest. But block codes rely on more dimensions, which cannot easily be visualized. The powerful Golay code used in deep space communications uses 24 dimensions. If used as a binary code (which it usually is), the dimensions refer to the length of the codeword as defined above.
The theory of coding uses the N-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop, or, in three dimensions, how many marbles can be packed into a globe? Other considerations enter the choice of a code. For example, hexagonal packing into the constraint of a rectangular box will leave empty space at the corners. As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions the packing uses all the space, and these codes are the so-called perfect codes. There are very few of these codes.
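Whether a code "uses all the space" can be tested with the sphere-packing (Hamming) bound: a binary code with 2^k codewords of length n correcting t errors is perfect exactly when the radius-t Hamming spheres around the codewords fill the whole space. A minimal sketch of that check (the function name is an assumption for this sketch):

```python
from math import comb

def hamming_bound_is_tight(n, k, t):
    # Perfect code condition: 2**k * sum_{i=0}^{t} C(n, i) == 2**n,
    # i.e. the radius-t spheres around the 2**k codewords tile {0,1}**n.
    sphere_size = sum(comb(n, i) for i in range(t + 1))
    return 2**k * sphere_size == 2**n

print(hamming_bound_is_tight(7, 4, 1))    # True: the [7,4] Hamming code is perfect
print(hamming_bound_is_tight(23, 12, 3))  # True: the binary [23,12] Golay code is perfect
print(hamming_bound_is_tight(8, 4, 1))    # False: length 8 leaves empty space
```

Note that the perfect binary Golay code has length 23; the 24-dimensional code mentioned above is its extended version, which trades perfection for an extra parity bit.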
Another item which is often overlooked is the number of neighbors a single codeword may have. Again, let us use pennies as an example. First we pack the pennies in a rectangular grid. Each penny will have four near neighbors (and four at the corners, which are farther away). In a hexagonal packing, each penny will have six near neighbors. As the dimensions increase, the number of near neighbors increases very rapidly.
The result is that the number of ways for noise to make the receiver choose a neighbor (and hence make an error) grows as well. This is a fundamental limitation of block codes, and indeed of all codes. It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough that the total error probability actually suffers.
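The neighbor counts can be made concrete for a small code. The sketch below builds the [7,4] Hamming code from a standard-form generator matrix (the particular matrix is one common choice, assumed here for illustration) and counts how many codewords sit at the minimum distance from a given codeword:

```python
from itertools import product

# Generator matrix of a [7,4] Hamming code in standard form [I | P].
G = [(1, 0, 0, 0, 0, 1, 1),
     (0, 1, 0, 0, 1, 0, 1),
     (0, 0, 1, 0, 1, 1, 0),
     (0, 0, 0, 1, 1, 1, 1)]

def encode(m):
    # Codeword c = m * G over GF(2).
    return tuple(sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7))

def dist(u, v):
    return sum(a != b for a, b in zip(u, v))

code = [encode(m) for m in product((0, 1), repeat=4)]
dmin = min(dist(u, v) for u in code for v in code if u != v)
neighbors = sum(1 for c in code[1:] if dist(c, code[0]) == dmin)
print(dmin, neighbors)  # 3 7
```

Even in only 7 dimensions, each codeword already has 7 nearest neighbors at distance 3; this count grows quickly as the codeword length increases, which is the effect described above.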