The following problem is from a problem set of MIT OCW's 6.004 Computation Structures. The problem is on page 10 in the section "Error Detection and Correction".

C. A baseball coach liked to record the umpire's call for each pitch (one of "strike", "ball", or "other"). Worried that bit errors might corrupt the records archive, he proposes using the following 5-bit binary encoding for each of the three possible entries: Strike 11111, Ball 01101, Other 00000.

Using this encoding, what is the largest number of bit errors that can be detected when examining a particular record? What is the largest number of bit errors that can be corrected?

First of all, the two questions above seem to be worded somewhat imprecisely.

The solution given by MIT OCW is:

[image: MIT OCW's given solution to this part of the problem]

I take this to mean that single-bit errors can be detected, but that no errors can be corrected, no matter how many bits are affected.

This seems to follow from the standard results that detecting E bit errors requires a minimum Hamming distance (HD) of at least E + 1, and correcting E bit errors requires a minimum HD of at least 2E + 1.

In the specific case of our pitch encoding, the pairwise Hamming distances are 3 between 00000 and 01101, 5 between 00000 and 11111, and 2 between 01101 and 11111. Thus the minimum HD is 2, and applying the bounds above tells us we can detect E = 1 bit error and correct E = 0 bit errors.
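
For concreteness, here is a small Python sketch (my own, not from the problem set) that recomputes these pairwise distances and applies the two bounds:

```python
from itertools import combinations

# A minimal sketch (mine, not from the problem set): compute the pairwise
# Hamming distances of the three codewords and apply the two bounds.
codewords = {"strike": "11111", "ball": "01101", "other": "00000"}

def hamming(a, b):
    """Number of bit positions in which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

distances = {(n1, n2): hamming(w1, w2)
             for (n1, w1), (n2, w2) in combinations(codewords.items(), 2)}
print(distances)
# {('strike', 'ball'): 2, ('strike', 'other'): 5, ('ball', 'other'): 3}

d_min = min(distances.values())                     # 2
print("detectable bit errors:", d_min - 1)          # 1  (min HD >= E + 1)
print("correctable bit errors:", (d_min - 1) // 2)  # 0  (min HD >= 2E + 1)
```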

Consider the following depiction of all 32 possible 5-bit strings:

[image: a grid of the 32 possible 5-bit strings, with the three codewords highlighted in pink and some other strings highlighted in green]

The three pink strings are our encodings for pitches. It seems that we would be able to both detect and correct the green bit strings if we assume they are single-bit errors.
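
To make the picture concrete, here is another sketch (again my own, not taken from the figure) that lists every string at Hamming distance 1 from each codeword, which is presumably what the green strings depict:

```python
# A minimal sketch (mine, not from the figure): list the strings at Hamming
# distance 1 from each codeword.
codewords = {"strike": "11111", "ball": "01101", "other": "00000"}

def neighbors(word):
    """All strings obtained from `word` by flipping exactly one bit."""
    flip = {"0": "1", "1": "0"}
    return {word[:i] + flip[word[i]] + word[i + 1:] for i in range(len(word))}

for name, word in codewords.items():
    print(name, sorted(neighbors(word)))
# strike ['01111', '10111', '11011', '11101', '11110']
# ball   ['00101', '01001', '01100', '01111', '11101']
# other  ['00001', '00010', '00100', '01000', '10000']

# 01111 and 11101 appear in both the "strike" and "ball" lists: they lie at
# distance 1 from two different codewords.
print(sorted(neighbors("11111") & neighbors("01101")))  # ['01111', '11101']
```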

Is this last statement true? And if so, why is the result of the original problem seemingly that we can't correct any errors?
