I have a script that performs some operations on bits. It is actually an attempt to display the operations performed in round 3 of the MD4 hash algorithm, but I don't think that detail is very important.
The problem is that, in some cases, operations like XOR or shifting bits to the left give me signed (negative) results.
For example, here is the initial version of the code. In iteration 9 I get F as a binary number, but with a minus sign in front, which does not seem correct or even possible.
let A = 0x44496174;
console.log(`A = ${A.toString(2)}, ${A}`);
let B = 0x65737461;
console.log(`B = ${B.toString(2)}, ${B}`);
let C = 0x72655243;
console.log(`C = ${C.toString(2)}, ${C}`);
let D = 0x32303234;
console.log(`D = ${D.toString(2)}, ${D}`);
for (let i = 0; i < 16; i++) {
  const AA = A;
  const BB = B;
  const CC = C;
  const DD = D;
  let F;
  if (i % 4 === 0) {
    F = B ^ C ^ D;
  } else if (i % 4 === 1) {
    F = A ^ B ^ C;
  } else if (i % 4 === 2) {
    F = D ^ A ^ B;
  } else {
    F = C ^ D ^ A;
  }
  const P = (A + F) >>> 0;
  const P2 = (P + i) >>> 0;
  let result;
  if (i % 4 === 0) {
    result = (P2 << 3) >>> 0;
  } else if (i % 4 === 1) {
    result = (P2 << 7) >>> 0;
  } else if (i % 4 === 2) {
    result = (P2 << 11) >>> 0;
  } else {
    result = (P2 << 19) >>> 0;
  }
  A = DD;
  D = CC;
  C = BB;
  B = result;
  console.log(`Iteration ${i + 1}:`);
  console.log(`F = ${F.toString(2)}, ${F}`);
  console.log(`P = ${P.toString(2)}, ${P}`);
  console.log(`P2 = ${P2.toString(2)}, ${P2}`);
  console.log(`Result = ${result.toString(2)}, ${result}`);
  console.log(`A = D binary = ${A.toString(2)}, A decimal ${A}`);
  console.log(`B = A(F) binary = ${B.toString(2)}, B decimal ${B}`);
  console.log(`C = B binary = ${C.toString(2)}, C decimal ${C}`);
  console.log(`D = C binary = ${D.toString(2)}, D decimal ${D}`);
  console.log("*************************************************");
}
Iteration 9:
F = -1101100100010011001110110000000, -1820958080
P = 10010001101111001110110100100000, 2445077792
P2 = 10010001101111001110110100101000, 2445077800
Result = 10001101111001110110100101000000, 2380753216
A = D binary = 1110111010110100101001010000000, A decimal 2002408064
B = A(F) binary = 10001101111001110110100101000000, B decimal 2380753216
C = B binary = 11000001001110000000000000000000, C decimal 3241672704
D = C binary = 100101000101000011000000000000, D decimal 622080000
According to the algorithm, at iteration 9 F is calculated with the formula F = B ^ C ^ D, where A, B, C and D are taken from the previous iteration.
So we have
A = 11111110010001101000101010100000
B = 11000001001110000000000000000000
C = 100101000101000011000000000000
D = 1110111010110100101001010000000
According to the Windows calculator, 11000001001110000000000000000000 XOR 100101000101000011000000000000 XOR 1110111010110100101001010000000 = 10010011011101100110001010000000, but my code gives me -1101100100010011001110110000000 (-1820958080 in decimal).
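One thing I noticed while digging into this (my own observation, so please correct me if I'm wrong): toString(2) on a negative number prints a minus sign followed by the magnitude, not the two's-complement bit pattern, and reinterpreting my F as unsigned gives exactly the bits the calculator shows:

```javascript
// toString(2) on a negative prints "-" + magnitude, not two's complement:
console.log((-2).toString(2)); // "-10"

// Reinterpreting the signed result as unsigned with >>> 0
// recovers the same 32-bit pattern the calculator shows:
const F = -1820958080;              // value my script printed
const unsignedF = F >>> 0;          // 2474009216
console.log(unsignedF.toString(2)); // "10010011011101100110001010000000"
```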
Reading on the internet, I saw that it is recommended to apply an unsigned right shift by zero bits, i.e. >>> 0, so I modified the code like this:
if (i % 4 === 0) {
  F = (B ^ C ^ D) >>> 0;
} else if (i % 4 === 1) {
  F = (A ^ B ^ C) >>> 0;
} else if (i % 4 === 2) {
  F = (D ^ A ^ B) >>> 0;
} else {
  F = (C ^ D ^ A) >>> 0;
}
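To double-check my understanding of >>> 0 (my own reading, so I may be wrong): it shifts by zero bits, but in doing so it reinterprets the 32-bit two's-complement pattern as an unsigned integer:

```javascript
// >>> 0 keeps the same 32 bits but reads them as unsigned:
console.log((-1) >>> 0);          // 4294967295 (all 32 bits set)
console.log((-2147483648) >>> 0); // 2147483648
console.log(1234 >>> 0);          // 1234 (non-negative values are unchanged)
```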
Apparently, that worked, but now I have another problem. The last calculation in the algorithm involves a left shift by 3, 7, 11 or 19 bits. Here, too, negative results sometimes appear (with a minus sign in front), but this time >>> 0 no longer fixes it.
For example, iteration 3 involves an 11-bit left shift of the variable P2. So we have
P2 << 11 = 10010101111111011000110101100111 << 11 = 01001010111111101100011010110011100000000000
according to the Windows calculator, but the result in the script is 11101100011010110011100000000000, which is not the result I expected.
How could I fix this so that the calculations are done correctly?