ram – Does badblocks first write the pattern to the entire disk, then read it back and compare it with what it wrote?

I remember that a long time ago I encountered a memory error in an IEEE 488 card where the fault was not in the memory chips themselves but in the addressing logic that controlled them. The usual tests, which first write the patterns 0x00, 0xff, 0x55 and 0xaa to the entire memory under test and then read it back to check for the correct values, did not detect the error.

If an addressing defect consistently redirects both the write and the subsequent read to the same wrong location, you still read back the expected value as long as the test pattern does not vary. That is why the -t random variant of badblocks is the most important one.

Addressing logic errors tend to repeat with a period that is a power of two, depending on the capacity of the memory chips used, so the test needs patterns that do not repeat with such periods. In my particular case, I proved the presence of the addressing error by writing the pattern 0, 1, 0, 1, 2, 1, 0, 1, 2, 3, 2, 1, 0, 1, ..., current_bound-1, current_bound, current_bound-1, current_bound-2, ..., 1, 0, 1, 2, ..., counting up and down with an ever-growing current_bound. This pattern could be implemented very easily using only processor registers and required no additional memory that itself had to be assumed to work correctly. In addition, my program could shift such a pattern by a given amount or apply it in reverse order. Later it was also used to test the system's main memory.
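The up-and-down counting pattern can be sketched as a short generator. This is my own reconstruction from the description above, not the original register-level program; `up_down_pattern` and `max_bound` are names I chose:

```python
def up_down_pattern(max_bound):
    """Yield 0, 1, 0, 1, 2, 1, 0, 1, 2, 3, 2, 1, 0, ...:
    count up to a bound, back down to 0, then raise the bound
    by one. Only two loop variables are needed, mirroring a
    register-only implementation with no extra memory."""
    yield 0
    for bound in range(1, max_bound + 1):
        for v in range(1, bound + 1):       # ascend 1 .. bound
            yield v
        for v in range(bound - 1, -1, -1):  # descend bound-1 .. 0
            yield v
```

Because consecutive values always differ by exactly 1 and the up-down period keeps growing, the sequence never settles into a repetition of any fixed power-of-two length.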

Now I am having a problem with a USB stick. Of course, I could just throw it in the trash, but I would rather know: Is there an addressing error? Does the stick contain less memory than it claims to?

I found the Unix utility badblocks, which can use a random pattern. This seems useful for checking for such addressing errors, provided the random pattern is first written to the entire memory under test and then read back, regenerating parts of the pattern during the read phase for comparison, since the device holds far more data than my system has RAM.

Therefore my question is: Does badblocks with the option -t random start by writing the random pattern to the entire memory under test and then re-read what it wrote, regenerating parts of the random pattern during the read phase in order to compare against what it reads back?
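What I mean by "regenerating parts of the random pattern" can be sketched like this. It is my own illustration, not badblocks internals; the block size, seed, and function name are arbitrary choices:

```python
import random

BLOCK_SIZE = 4096   # assumed block size, adjust to the device
SEED = 20240101     # any fixed seed makes the stream reproducible

def write_then_verify(dev_path, num_blocks):
    """Write pseudo-random data over the whole device first, then
    re-read it and compare, regenerating the identical stream from
    the seed instead of holding it all in RAM. Returns the list of
    mismatching block numbers."""
    bad = []
    rng = random.Random(SEED)
    with open(dev_path, "r+b", buffering=0) as dev:
        for _ in range(num_blocks):                 # write phase
            dev.write(rng.randbytes(BLOCK_SIZE))    # Python 3.9+
        dev.seek(0)
        rng = random.Random(SEED)                   # restart the stream
        for n in range(num_blocks):                 # read/verify phase
            if dev.read(BLOCK_SIZE) != rng.randbytes(BLOCK_SIZE):
                bad.append(n)
    return bad
```

On a flash drive that silently wraps addresses, the verify phase of such a scheme would report mismatches in the wrapped region, whereas a single fixed pattern would not.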

It is likely that badblocks writes blocks in ascending order of block number, as this is the easiest to implement. When writing in ascending order, it sometimes happens that several address lines for the blocks change state at once; for example, when going from address 0x0ff to 0x100, nine address lines change state, and such simultaneous changes are more frequent for low address lines than for high ones. One could suspect that the failure probability of the addressing logic is higher when many address lines change at once. So instead of proceeding in strictly ascending block-number order, it would seem advisable to make as many address-line changes per step as possible, e.g. writing to the block numbers in the order 0, !0, 1, !1, 2, !2, ..., where ! denotes the inversion of all address bits used. Does anyone know of a test like badblocks that also varies the order of the block numbers written, so that changes of many address lines at once occur more often than with writing in strictly ascending order?
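The block order 0, !0, 1, !1, ... is easy to generate when the number of blocks is a power of two. This sketch is my own illustration of the idea, not an existing tool:

```python
def toggle_order(num_blocks):
    """Yield block numbers in the order 0, !0, 1, !1, 2, !2, ...,
    where ! inverts all address bits in use, so every second step
    flips every address line at once. num_blocks must be a power
    of two so the inversion stays in range."""
    mask = num_blocks - 1
    assert num_blocks > 0 and num_blocks & mask == 0
    for n in range(num_blocks // 2):
        yield n            # small step: low address lines toggle
        yield n ^ mask     # big step: all address lines toggle
```

For num_blocks = 8 this yields 0, 7, 1, 6, 2, 5, 3, 4, visiting every block exactly once while forcing a full address-line flip on every second access.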

Of course, a random pattern that can be regenerated is not truly random, but it should at least be random enough to avoid repetitions with a power-of-two period matching the usual capacities of memory chips.

Of course, it would also be desirable to add a test which is not random, but which guarantees freedom from repetitions of length 2^n for a specified range of n values.