hash – explanation of the Ketama hash

(Initially, I posted this on stackoverflow, but I thought it would be better here)

I'm trying to understand the Ketama hashing code used for consistent hashing.

link and excerpt below:

public static Long md5HashingAlg(String key) {
    MessageDigest md5 = null;
    try {
        md5 = MessageDigest.getInstance("MD5");
    } catch (NoSuchAlgorithmException e) {
        log.error("++++ no md5 algorithm found");
        throw new IllegalStateException("++++ no md5 algorithm found");
    }
    md5.reset();
    md5.update(key.getBytes());
    byte[] bKey = md5.digest();
    long res = ((long) (bKey[3] & 0xFF) << 24)
            | ((long) (bKey[2] & 0xFF) << 16)
            | ((long) (bKey[1] & 0xFF) << 8)
            | (long) (bKey[0] & 0xFF);
    return res;
}

I think I understand what the code does, but I do not understand why it is done this way. In particular, I wonder why:

  1. the code discards 12 of the 16 bytes of the MD5 digest and uses only the first four (bKey[0] through bKey[3]).

  2. the code "reverses" those bytes, meaning that, of the four bytes from point 1, the least significant one now becomes the most significant (at least, that is what I understand the & 0xFFs and left shifts to be doing; see the sketch after this list).
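For what it's worth, my current reading of those shifts is that they simply assemble bKey[0] through bKey[3] into a 32-bit value in little-endian order (bKey[0] as the low byte, bKey[3] as the high byte). Below is a minimal sketch of what I mean; the class name, method names, and the test key are mine, not from the original code:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class KetamaHashSketch {

    // The same bit twiddling as in the excerpt above: take the first four
    // MD5 bytes and assemble them with bKey[3] as the high byte.
    static long shiftVersion(byte[] bKey) {
        return ((long) (bKey[3] & 0xFF) << 24)
                | ((long) (bKey[2] & 0xFF) << 16)
                | ((long) (bKey[1] & 0xFF) << 8)
                | (long) (bKey[0] & 0xFF);
    }

    // My reading of what that amounts to: interpret bKey[0..3] as a
    // little-endian unsigned 32-bit integer.
    static long littleEndianVersion(byte[] bKey) {
        int asInt = ByteBuffer.wrap(bKey, 0, 4)
                .order(ByteOrder.LITTLE_ENDIAN)
                .getInt();
        return asInt & 0xffffffffL; // keep it unsigned when widening to long
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest("some key".getBytes()); // "some key" is just an example
        System.out.println(shiftVersion(digest));
        System.out.println(littleEndianVersion(digest));
    }
}

If my reading is right, both methods print the same number for any key, so the shifts are just a by-hand little-endian conversion of the first four digest bytes.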

I also came across another piece of code that uses the same logic as above, but additionally applies & 0xffffffffL to the result to "truncate it to 32 bits".

link and excerpt below:

case KETAMA_HASH:
    byte[] bKey = computeMd5(k);
    rv = ((long) (bKey[3] & 0xFF) << 24)
            | ((long) (bKey[2] & 0xFF) << 16)
            | ((long) (bKey[1] & 0xFF) << 8)
            | (bKey[0] & 0xFF);
    break;
default:
    assert false;
}

return rv & 0xffffffffL; /* Truncate to 32 bits */
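In case it helps anyone answer, my guess about that final & 0xffffffffL is that it forces the result into the unsigned 32-bit range; I assume that matters when rv comes from one of the other cases in the switch that produce a signed 32-bit hash (that part is my assumption). A tiny sketch of the effect, with a made-up example value:

public class TruncateSketch {
    public static void main(String[] args) {
        // Hypothetical: some other branch of the switch produced a negative int hash.
        int signedHash = -1;                  // made-up example value
        long rv = signedHash;                 // sign-extends: all upper 32 bits set
        System.out.println(rv);               // prints -1
        System.out.println(rv & 0xffffffffL); // prints 4294967295, i.e. back in [0, 2^32)
    }
}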

Could anyone help me understand the reasoning behind selecting these bytes and reordering them in this way?