I am teaching myself some computer science and ran into a puzzle while studying CRCs. I know how to calculate the CRC value of a message, but I wonder how the receiver verifies it when the parameters require RefIn.
Here is an example.
The message I want to check is 00010011.
The polynomial is 10011.
Init: 0x00
RefIn: True
RefOut: True
XorOut: 0x00
After calculation, the CRC value is 0100.
So the actual message to be delivered is 000100110100.
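For concreteness, here is a minimal bit-by-bit sketch in Python of the reflected calculation that reproduces this value (crc4_reflected is just my own name for it):

    # Reflected CRC-4: poly 10011 (x^4 + x + 1), Init = 0x00,
    # RefIn = RefOut = True, XorOut = 0x00.
    def crc4_reflected(data: bytes) -> int:
        crc = 0x0                      # Init
        for byte in data:
            for i in range(8):         # RefIn: feed each byte LSB first
                bit = ((byte >> i) ^ crc) & 1
                crc >>= 1
                if bit:
                    crc ^= 0xC         # low 4 bits of 10011, bit-reversed: 0011 -> 1100
        return crc                     # register is already reflected; XorOut = 0

    print(f"{crc4_reflected(bytes([0b00010011])):04b}")   # prints 0100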
The receiver needs to verify the message against the CRC. As I understand it, there are two ways to do this. First, it can recompute the CRC of the data by binary division and compare the calculated value with the received one. Second, since the sender appends the CRC to the data, the receiver can divide the whole frame (data plus CRC) by the polynomial and should get a zero remainder.
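For a plain CRC (no reflection, Init = 0) I understand these two checks like this (a rough Python sketch; mod2_remainder is my own helper):

    # Mod-2 long division over bit strings; poly includes its leading 1.
    def mod2_remainder(bits: str, poly: str) -> str:
        work = list(bits)
        for i in range(len(work) - len(poly) + 1):
            if work[i] == "1":         # XOR the divisor in under every leading 1
                for j, p in enumerate(poly):
                    work[i + j] = "0" if work[i + j] == p else "1"
        return "".join(work[-(len(poly) - 1):])   # last len(poly)-1 bits are the remainder

    poly = "10011"
    data = "00010011"
    crc = mod2_remainder(data + "0000", poly)      # append 4 zero bits, then divide
    print(crc)                                     # 0000 (plain CRC of this data)
    print(mod2_remainder(data + crc, poly))        # 0000: whole frame divides evenly

With this plain scheme both checks pass, so I suspect my trouble comes from the reflection steps.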
I have some questions here.
Probably because of RefIn and RefOut, I cannot verify it directly with polynomial division: the remainder is non-zero.
If I try to apply the same rules to verify the message plus CRC, 000100110100, it is only 12 bits, which is not enough for two whole bytes, so I cannot apply RefIn byte by byte. I tried padding with zeros on the MSB side as well as on the LSB side, but the division still does not give an all-zero remainder.
How does the receiver verify the CRC?


You know the message on the wire ends with the CRC. Take the received message without the CRC and compute its CRC using precisely the same algorithm the transmitter used.
Then compare the CRC you just computed with the one you received.
If they differ, the message is corrupt: it has errors introduced by the communication channel between sender and receiver.
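In code that check might look something like this, reusing the same reflected CRC-4 parameters from your question (a sketch; verify_frame is my own name):

    # Recompute the CRC over the data with the exact same algorithm the
    # transmitter used (RefIn/RefOut, Init = 0, XorOut = 0), then compare.
    def crc4_reflected(data: bytes) -> int:
        crc = 0x0
        for byte in data:
            for i in range(8):             # RefIn: bytes fed LSB first
                bit = ((byte >> i) ^ crc) & 1
                crc >>= 1
                if bit:
                    crc ^= 0xC             # reflected form of poly 10011
        return crc

    def verify_frame(data: bytes, received_crc: int) -> bool:
        return crc4_reflected(data) == received_crc

    print(verify_frame(bytes([0b00010011]), 0b0100))   # True: no detected errors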
For what it's worth, commonly used internet protocols have this sort of error detection built in. CRC is most useful over old-school synchronous serial communication links. And in the 21st century you might be wise to use some sort of forward error correction (FEC) code instead of a dirt-simple go/no-go check like a CRC.
You can read more about FEC.