I've written a small function that randomly generates a list of values adding up to a given total, in this case 100; written for no particular reason. After aggregating many such lists and averaging across the indices, I found that the averages showed a trend.
Here, 1000 lists of length 10, where the values of each list sum to 100, have been generated, and the average value at each index has been calculated.
$$ l_1 = (50.35, 25.11, 12.33, 6.3, 2.92, 1.47, 0.75, 0.38, 0.18, 0.08) $$
The pattern I've found is that the average value at a given index is always about half of the previous one, and the first average is about half the total sum of the list.
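In other words (writing $T$ for the target total and $i$ for the zero-based index, my own notation), the observed rule seems to be:

$$ \bar{l}[i] \approx \frac{T}{2^{\,i+1}} $$

which matches the data above: $100/2 = 50$ for the first index, $100/4 = 25$ for the second, and so on.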
For example, here is the same calculation on 1000 lists of length 4, each with a total of 50.
$$ l_2 = (24.774, 12.683, 6.265, 3.223) $$
To be complete, here is a Python SSCCE that generates the lists and calculates the averages:
```python
import random

def genlist(n, total):
    r = []
    for i in range(n):
        if total == 0:
            r.append(0)
            continue
        v = random.randint(0, total)
        r.append(v)
        total -= v
    return r

list_of_lists = [genlist(4, 50) for i in range(1000)]
transpose_lol = zip(*list_of_lists)
avg_index_value = [float(sum(x)) / len(x) for x in transpose_lol]
print(avg_index_value)
```
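To make the halving more visible, here is a small variant (my own sanity check, not part of the original script) that computes the ratio of each consecutive pair of averages; each ratio hovers near 0.5:

```python
import random

def genlist(n, total):
    # Draw each value uniformly from whatever total remains.
    # (randint(0, 0) returns 0, so no special case is needed.)
    r = []
    for _ in range(n):
        v = random.randint(0, total)
        r.append(v)
        total -= v
    return r

random.seed(0)  # reproducible run
lists = [genlist(4, 50) for _ in range(10000)]
avgs = [sum(col) / len(col) for col in zip(*lists)]
ratios = [avgs[i + 1] / avgs[i] for i in range(len(avgs) - 1)]
print(avgs)
print(ratios)  # each ratio is close to 0.5
```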
Why does this pattern emerge?
I can see it roughly, in the sense that the "bins" that are the indices of the list are filled from an ever-shrinking pool. Still, that intuition suggests the random value at the first index is always "big", when it is just as likely to be small. So I keep telling myself that the averages across the indices should all be roughly equal.