In computing we store data in bytes (8 bits), and each byte has its own memory location. An integer value might be stored as 8 bits, 16 bits, 32 bits or 64 bits (8 bytes). So in which order do we store the bytes? Well, that depends on the computer system we use. On Intel x86 we store the least significant byte in the first (lowest) memory location and the most significant byte in the last, a format known as little-endian, whereas mainframes often use a big-endian format, with the most significant byte first. The way we interpret the order of the bytes is thus important: given a byte array, do we treat the first byte as the least significant byte or as the most significant one? In the following we convert a hex value into a byte array, and then determine the unsigned integer value for both the little-endian and big-endian interpretations:
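As a minimal sketch of the difference, here is the same four-byte array interpreted with each byte order using Python's built-in int.from_bytes() (the byte values here are just an illustrative choice):

import sys

# The same four bytes: 0x01 sits in the first (lowest) memory location
data = bytes([0x01, 0x00, 0x00, 0x00])

# Little-endian: first byte is least significant -> value is 1
print(int.from_bytes(data, byteorder='little'))

# Big-endian: first byte is most significant -> value is 0x01000000 = 16777216
print(int.from_bytes(data, byteorder='big'))

# The host machine's own convention is also available:
print(sys.byteorder)   # 'little' on Intel x86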
Hex to integer (Big-endian and Little-endian)
Coding
The coding is here:
import binascii
import sys

a = "ff"
size = 32

if (len(sys.argv) > 1):
    a = str(sys.argv[1])
if (len(sys.argv) > 2):
    size = int(sys.argv[2])

def pad(mystr):
    # Pad the hex string with zeroes to 2*size hex characters (size bytes)
    padding_size = 2 * size
    mystr = mystr + "0" * (padding_size - len(mystr))
    return mystr

a = pad(a[:2 * size])

a1 = int.from_bytes(binascii.unhexlify(a), byteorder='little')
a2 = int.from_bytes(binascii.unhexlify(a), byteorder='big')

print(f"Value: {a}\nNumber of bytes: {size}\nLittle-endian: {a1}\nBig-endian: {a2}")
A sample run:
Value: ff00000000000000000000000000000000000000000000000000000000000000
Number of bytes: 32
Little-endian: 255
Big-endian: 115339776388732929035197660848497720713218148788040405586178452820382218977280
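The conversion can also be run in reverse with Python's int.to_bytes(), encoding an integer back into a padded byte array for either byte order. A short sketch, using the same 32-byte size as the program above:

value = 255

# Encode the integer into 32 bytes for each byte order
b_little = value.to_bytes(32, byteorder='little')
b_big = value.to_bytes(32, byteorder='big')

# Little-endian places 0xff in the first byte, big-endian in the last
print(b_little.hex())   # ff followed by 31 zero bytes
print(b_big.hex())      # 31 zero bytes followed by ff

This shows why the little-endian value above is just 255: only the first byte of the array is non-zero, and little-endian treats that byte as the least significant.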