The Bitcoin protocol periodically adjusts the network difficulty so that new blocks are mined every 10 minutes on average. However, looking back over the past 10 years, the observed average block interval is 567.35 seconds. I calculated this by subtracting the timestamp of block #0 from the timestamp of block #573,795 and dividing by 573,794. I know that block timestamps are not perfectly accurate, but that error should be negligible.
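For concreteness, the arithmetic above can be sketched as follows. The genesis timestamp is the real one from block #0; the tip timestamp is an illustrative value chosen to be consistent with the 567.35 s figure, not the actual timestamp of block #573,795:

```python
# Mean block interval from two block timestamps (Unix seconds).
GENESIS_TS = 1231006505  # timestamp of block #0 (2009-01-03), a known constant
TIP_TS = 1556548531      # illustrative timestamp for the tip block (assumed)
N_INTERVALS = 573794     # number of intervals between the two blocks

mean_interval = (TIP_TS - GENESIS_TS) / N_INTERVALS
print(f"average block interval: {mean_interval:.2f} s")  # ~567.35 s
```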

Bitcoin mining can be modeled as a Poisson process under some simplifying assumptions. If my calculations are right, the 95% confidence interval of a Poisson process with λ = 1.0575 (the historical average number of blocks mined per 600 seconds) and n = 573,794 (the number of historical intervals) puts an upper bound of 568.1 seconds on the *true* average block interval, a bound far below the 600-second interval targeted by the network difficulty.
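My attempt to reproduce that bound (a sketch, not necessarily the original method): for n i.i.d. exponential interarrival times, the standard error of the sample mean is mean/√n. One standard error above 567.35 s gives ≈568.1 s, matching the figure quoted; the usual two-sided 95% bound (z = 1.96) is slightly wider, ≈568.8 s. Either way the bound sits far below 600 s:

```python
import math

mean = 567.35              # observed average interval (s)
n = 573794                 # number of historical intervals
se = mean / math.sqrt(n)   # SE of the mean for exponential interarrival times

upper_1se = mean + se          # one standard error above the mean: ~568.10 s
upper_95 = mean + 1.96 * se    # 95% upper confidence bound: ~568.82 s
print(f"+1 SE: {upper_1se:.2f} s, 95%: {upper_95:.2f} s")
```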

I know at least one reason for the difference: the 2016-block difficulty adjustment interval. This was discussed and answered in a related question here. But is that the *only* reason historical block intervals are shorter than the mathematically predicted ones? If we observed a long period of steady decline in the network hash rate, would we see the opposite effect, with observed block intervals consistently exceeding 600 seconds?
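To see why both effects follow from retargeting, here is a toy deterministic model (my own sketch, ignoring block-time randomness and the well-known off-by-one in the retarget window): difficulty is reset every 2016 blocks based on the previous period's pace, so while hash rate grows by a factor (1+g) per period, each period runs about (1+g) times faster than targeted and the steady-state interval is ≈ 600/(1+g). With declining hash rate (g < 0), the same model yields intervals above 600 s:

```python
def avg_interval(growth_per_period, periods=100, target=600.0, window=2016):
    """Average block interval under a toy retargeting model.

    growth_per_period: fractional hash-rate change per retarget period
    (e.g. 0.05 = +5% per 2016 blocks). Assumes hash rate is constant
    within each period and changes stepwise at every retarget.
    """
    hashrate, difficulty = 1.0, 1.0  # calibrated so interval = target initially
    total_time, total_blocks = 0.0, 0
    for _ in range(periods):
        interval = target * difficulty / hashrate  # per-block time this period
        total_time += interval * window
        total_blocks += window
        difficulty *= target / interval            # retarget to the observed pace
        hashrate *= 1.0 + growth_per_period        # hash rate moves before next period
    return total_time / total_blocks

print(avg_interval(0.05))    # growing hash rate: average falls below 600 s
print(avg_interval(-0.05))   # shrinking hash rate: average rises above 600 s
```

In this model the retarget always looks one period backward, so the network is perpetually "behind" a trending hash rate, in either direction.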

The final question is this: for future block intervals, is it better to assume the target interval of 600 seconds, or an interval slightly lower or higher than that? If so, by how much? This has implications, for example, for the design of predictive time-series models in finance.