In coding theory the problem of decoding focuses on error vectors. In the simplest situation the code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparing a received word with the code words yields a set of error vectors; in deciding on the original code word, usually the one whose error vector has minimum Hamming weight is chosen. In this note some remarks are made on the positions of the 1-entries in the error vector, which may enable unique decoding when two or more code words have the same Hamming distance to the received message word, thus turning error detection into error correction. The essentially new aspect is that code words, message words and error vectors are put in one-to-one correspondence with graphs.
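As an illustration of the decoding rule described above, here is a minimal sketch of minimum-Hamming-distance decoding for a toy binary code. The function names and the example codes are hypothetical, not taken from the note; they only show the ambiguous case that the note addresses.

```python
def hamming_distance(u, v):
    """Number of coordinates in which two (0,1)-vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def decode(received, code):
    """Return the code words nearest to `received`, together with the
    corresponding error vectors (received XOR code word). A unique
    nearest code word corrects the error; a tie only detects it."""
    best = min(hamming_distance(received, c) for c in code)
    nearest = [c for c in code if hamming_distance(received, c) == best]
    errors = [tuple(r ^ c for r, c in zip(received, w)) for w in nearest]
    return nearest, errors

# Unique nearest code word: the error is corrected.
print(decode((1, 0, 1), [(0, 0, 0), (1, 1, 1)]))
# -> ([(1, 1, 1)], [(0, 1, 0)])

# Two code words at the same Hamming distance: the error is only detected.
print(decode((1, 0), [(0, 0), (1, 1)]))
# -> ([(0, 0), (1, 1)], [(1, 0), (0, 1)])
```

The tie in the second call is precisely the situation the note considers: both code words explain the received word equally well by weight alone, so extra structure on the error vectors (in the note, their correspondence with graphs) is needed to single one out.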
Publisher: Department of Applied Mathematics, University of Twente