As pointed out by Coriiander, most (if not all) of the snippets here will be optimized away at compile time, so the generated binaries won't check endianness at run time.
It has been pointed out that a given executable shouldn't run under two different byte orders, but I have no idea whether that is always the case, and checking at compile time seems like a hack to me. So I wrote this function:
#include <stdint.h>
#include <stdlib.h>

int *_BE = 0;

int is_big_endian() {
    if (_BE == 0) {
        /* Read an uninitialized 16-bit value, keep 7 "random" bits of its
           low byte, and force the high byte to 0x01. */
        uint16_t *teste = (uint16_t *)malloc(sizeof(uint16_t));
        *teste = (*teste & 0x01FE) | 0x0100;
        /* On a big-endian machine the first byte is the high byte (0x01);
           on a little-endian machine it is the low byte, which is even. */
        uint8_t teste2 = ((uint8_t *)teste)[0];
        free(teste);
        /* Cache the result so the check only runs once. */
        _BE = (int *)malloc(sizeof(int));
        *_BE = (0x01 == teste2);
    }
    return *_BE;
}
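A minimal usage sketch, assuming the function above is in scope (the output strings are mine, not part of the original):

#include <stdio.h>

int main(void) {
    printf("This machine is %s-endian.\n",
           is_big_endian() ? "big" : "little");
    return 0;
}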
MinGW wasn't able to optimize this code away, even though it does optimize away the other snippets here. I believe that is because I leave the "random" value that was in the low byte of the allocated memory as it was (at least 7 of its bits), so the compiler can't know what that value is and doesn't optimize the function away.
I've also coded the function so that the check is performed only once, and the result is stored for subsequent calls.
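For comparison, here is a minimal sketch of the same caching idea using a function-local static instead of a heap-allocated flag; this is my own rewrite of the function above, not a separate technique, and it avoids the extra malloc for the cache while keeping the same uninitialized-memory trick:

#include <stdint.h>
#include <stdlib.h>

int is_big_endian_static(void) {
    static int cached = -1;          /* -1 means "not checked yet" */
    if (cached == -1) {
        uint16_t *p = (uint16_t *)malloc(sizeof(uint16_t));
        *p = (*p & 0x01FE) | 0x0100; /* same masking trick as above */
        cached = (((uint8_t *)p)[0] == 0x01);
        free(p);
    }
    return cached;
}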