Why is stability considered a desirable feature of a sorting algorithm?

The usual argument for stability in a sorting algorithm typically involves an example in which a list is sorted according to two criteria. For example:

1,4,5,7,2,6,8,9,15,65,24,27
sort by parity (even numbers first), then by value
2,4,6,8,24,1,5,7,9,15,27,65

The argument is that by choosing a stable sorting algorithm, you can sort this list twice (by value, then by parity) and get the ordering you want.

I could not disagree more with this line of reasoning, though. First, the sorts are performed in reverse order (value first, then parity, even though parity is the primary criterion), which is not intuitive. Second, doing it this way calls sort() twice.

Let's look at some documentation now. Take C's qsort(3) and JavaScript's Array.prototype.sort. As far as I can tell, both implement unstable sorting algorithms …

If two members compare equally, their order in the sorted array is not defined.

and

If compareFunction(a, b) returns 0, leave a and b unchanged with respect to each other, but sorted with respect to all different elements. Note: the ECMAScript standard does not guarantee this behavior, and thus not all browsers (e.g., Mozilla versions dating back to at least 2003) respect this.

… and both take a function as an argument. I believe this function is called a comparator: a function that takes two values A and B and returns -1, 0, or 1 depending on whether A is considered "less than", "equal to", or "greater than" B, based on arbitrary criteria chosen by the implementer.

That said, what I have found is that no matter what I feed to the respective sort functions, whether I implement them myself or use the ones from the standard library, stability has absolutely no effect on the result when the sort function is used correctly.

Let's take C's qsort as an example. qsort typically implements quicksort, which is known to be unstable.

If two members compare equally, their order in the sorted array is not defined.

To clarify, this does not mean that the implementation is unstable per se. It means that stability is not guaranteed, so relying on such semantics is a very bad idea. Which is close enough.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define INT(p) (*((int *) (p)))

#define ISEVEN(p) (INT(p) % 2 == 0)

void
randomize(int *list, size_t len)
{
    for (size_t i = 0; i < len; ++i)
        list[i] = rand() % (len * 10);
}

void
printlist(int *list, size_t len)
{
    for (size_t i = 0; i < len; ++i)
        printf("%i, ", list[i]);

    putchar('\n');
}

int
by_even(void const *a, void const *b)
{
    return (ISEVEN(a) && !ISEVEN(b)) ? (-1) : (ISEVEN(b) && !ISEVEN(a));
}

int
by_value(void const *a, void const *b)
{
    return (INT(a) < INT(b)) ? (-1) : (INT(a) > INT(b));
}

int
by_even_and_value(void const *a, void const *b)
{
    return by_even(a, b) != 0 ? by_even(a, b) : by_value(a, b);
}

int
main(void)
{
    static size_t const listsz = 20;
    int list[listsz];

    srand(time(NULL));
    randomize(list, listsz);
    printlist(list, listsz);
    qsort(list, listsz, sizeof list[0], by_even_and_value);
    printlist(list, listsz);

    return 0;
}

Here's the result:

$ cc qsort.c
$ ./a.out
100, 111, 12, 122, 96, 50, 52, 96, 173, 125, 135, 173, 78, 144, 108, 60, 75, 116, 24, 180,
12, 24, 50, 52, 60, 78, 96, 96, 100, 108, 116, 122, 144, 180, 75, 111, 125, 135, 173, 173,

So putting all the sort criteria into the comparator and sorting once gave me exactly the sorted list I wanted. It took only one sort, and the criteria could be given in their natural order (parity first, value second).

Since this makes stability seem irrelevant to the result, why bother with the stability of a sorting algorithm at all?