Channel encoding: Error correction


The introduction of redundancy bits to a package of data increases the data length and with it the number of possible combinations. Consider a 6-bit package consisting of 4 bits of useful data and 2 redundancy bits.


The 4 bits of useful data allow 2⁴ = 16 different valid messages. At the receiving end, however, a set of 2⁶ = 64 different messages may be received, of which only a sub-set contains the 16 valid messages. This sub-set is called a code and the valid messages are called code words or code vectors (vectors for short). When a message is received that does not correspond to any of the valid codes, the receiver finds a valid code word ‘nearest’ to the received message, on the assumption that the nearest is the most likely correct message, in the same way as the corrupted word in the sentence ‘the fishing boat saiked in the morning’ is easily deduced to be ‘sailed’, being the nearest valid replacement. This technique is known as forward error correction (FEC).
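The nearest-code-word idea can be sketched in a few lines. The code book below is a hypothetical toy sub-set of 6-bit code words, chosen only for illustration; ‘nearest’ is measured by Hamming distance (the number of bit positions in which two words differ).

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

# A hypothetical toy code: four valid 6-bit code words
# (a sub-set of the 64 possible received messages).
codebook = ["000000", "010101", "101010", "111111"]

def nearest_code_word(received: str) -> str:
    """Forward error correction: pick the valid word nearest the received one."""
    return min(codebook, key=lambda word: hamming_distance(word, received))

# A single-bit corruption of "000000" is pulled back to it:
print(nearest_code_word("000001"))  # prints 000000
```

If two valid code words are equally near, this sketch simply returns the first; real codes are designed so that the expected number of channel errors keeps the received message closer to one code word than to any other.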

In practice, real codes contain very long strings of binary bits with thousands of valid code words, requiring carefully designed coding techniques to produce code words which are well structured into sets and sub-sets. The process of correction is then carried out by advanced and sophisticated mathematics. There are two coding techniques: block coding and convolutional coding. In block coding, such as Hamming or Reed–Solomon codes, a block of k data digits is encoded by a code generator into a code word of n digits, where n is larger than k. The number of redundancy bits is therefore (n − k). The ratio k/n represents the efficiency of the code and is normally known as the code rate. In convolutional codes, the coded sequence from the encoder depends not only on the sequence of the incoming block of k bits, but also on the sequence of data bits that preceded it. Unlike block codes, the code word of a convolutional code is not unique to the incoming k bits, but depends on earlier data as well.
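As a concrete instance of block coding, the classic Hamming (7, 4) code encodes k = 4 data bits into an n = 7 bit code word, giving n − k = 3 redundancy bits and a code rate of k/n = 4/7. The sketch below uses the standard parity equations for this code:

```python
def hamming74_encode(data: list[int]) -> list[int]:
    """Encode 4 data bits into a 7-bit Hamming (7, 4) code word."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4   # parity check covering bit positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity check covering bit positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity check covering bit positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

code_word = hamming74_encode([1, 0, 1, 1])
k, n = 4, len(code_word)
print(code_word)                 # prints [0, 1, 1, 0, 0, 1, 1]
print(f"code rate = {k}/{n}")    # prints code rate = 4/7
```

Note that the code generator here depends only on the current block of k bits: the same 4 data bits always produce the same 7-bit code word, which is precisely the property that convolutional codes do not share.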

For convolutional codes, k and n are usually small, giving small code rates such as 1/2, 2/3, 3/4 and 7/8.

Convolutional codes invariably outperform block codes, especially for correcting random and burst errors. One of the more efficient algorithms for decoding convolutional codes was devised by Viterbi with especially good results in correcting random channel errors. DTV uses both block coding (Hamming and Reed–Solomon) and convolutional coding.
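The memory property of convolutional codes can be illustrated with a minimal rate-1/2 encoder sketch. The constraint-length-3 encoder with generator polynomials 7 and 5 (octal) is a common textbook example, not the specific encoder of any DTV standard: each input bit produces two output bits, and both depend on the current bit and the two bits that preceded it.

```python
def conv_encode(bits: list[int]) -> list[int]:
    """Rate-1/2 convolutional encoder, constraint length 3, generators (7, 5) octal."""
    s1 = s2 = 0                  # shift-register state: the two previous input bits
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # generator 111 (octal 7)
        out.append(b ^ s2)       # generator 101 (octal 5)
        s1, s2 = b, s1           # shift the new bit into the register
    return out

encoded = conv_encode([1, 0, 1, 1])
print(encoded)  # prints [1, 1, 1, 0, 0, 0, 0, 1] -- twice as many bits: rate 1/2
```

Because the output depends on the register state, the same input bit is encoded differently depending on what came before it; a Viterbi decoder exploits exactly this structure, searching the trellis of possible state sequences for the most likely transmitted path.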
