I'm trying to deserialize a byte array in an Android app (compiled on macOS with an M1 chip). The bytes come over the network from a C++ application running on Ubuntu. In the C++ application, I checked that it's using little endian:
bool isBigEndian()
{
    uint16_t word = 1;                      // 0x0001
    uint8_t *first_byte = (uint8_t *)&word; // points to the first byte of word
    return !(*first_byte);                  // true if the first byte is zero
}

// Check:
if (isBigEndian())
    printf("Big endian\n");
else
    printf("Little endian\n");
The above C++ code prints "Little endian".
On the Kotlin (Android) side, I also checked and it's using little endian too, but the long value converted from the byte array is not correct.
import java.nio.ByteBuffer
import java.nio.ByteOrder

fun isLittleEndian(): Boolean {
    return ByteOrder.nativeOrder() == ByteOrder.LITTLE_ENDIAN
}

/**
 * Represent the 8 bytes of this [Long] as a byte array
 */
fun Long.toBytes(): ByteArray {
    return ByteBuffer.allocate(Long.SIZE_BYTES).putLong(this).array()
}

fun ByteArray.toLong(): Long {
    return ByteBuffer.wrap(this).long
}
fun test() {
    val longNumber = 1000L
    val isLittleEndian = isLittleEndian() // true
    val bytes = longNumber.toBytes()      // bytes: [0, 0, 0, 0, 0, 0, 3, -24] => Big Endian?
}
The C++ app serializes the long number 1000 into [-24, 3, 0, 0, 0, 0, 0, 0] (correct little-endian ordering), while the Kotlin code converts the same long number into [0, 0, 0, 0, 0, 0, 3, -24] (big-endian ordering).
When converting the bytes from the C++ app using Kotlin, I get the strange value -1728537831980138496 instead of 1000.
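To illustrate where that number appears to come from, here is a small standalone sketch (using the same ByteBuffer API as above) that reads the bytes sent by the C++ app without changing the buffer's default order:

import java.nio.ByteBuffer

fun main() {
    // The little-endian bytes produced by the C++ app for the value 1000
    val fromCpp = byteArrayOf(-24, 3, 0, 0, 0, 0, 0, 0)

    // Reading them with the buffer's default order interprets them in the wrong byte order
    val misread = ByteBuffer.wrap(fromCpp).long
    println(misread) // prints -1728537831980138496
}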
Could you help me check whether I made a mistake handling the endianness?
The allocate and wrap methods will both always return a BIG_ENDIAN buffer. You have to call order() to change the endianness to LITTLE_ENDIAN. For example:
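Something like the following should work; it's a minimal sketch that keeps the question's function names and assumes the sender always writes 8-byte little-endian longs:

import java.nio.ByteBuffer
import java.nio.ByteOrder

// Serialize a Long as 8 little-endian bytes
fun Long.toBytes(): ByteArray {
    return ByteBuffer.allocate(Long.SIZE_BYTES)
        .order(ByteOrder.LITTLE_ENDIAN)
        .putLong(this)
        .array()
}

// Deserialize 8 little-endian bytes back into a Long
fun ByteArray.toLong(): Long {
    return ByteBuffer.wrap(this)
        .order(ByteOrder.LITTLE_ENDIAN)
        .long
}

fun main() {
    val bytes = 1000L.toBytes()
    println(bytes.contentToString())
    println(bytes.toLong())
}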
Output:
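[-24, 3, 0, 0, 0, 0, 0, 0]
1000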