Execution analysis – Are all hypothetical machine models used to calculate the running time of an algorithm the same?

In the RAM model, which is very close to our real computers, standard arithmetic operations take the same amount of time regardless of the values of their operands, as long as those values fit within a machine word. This gives a good estimate of the running time and of the algorithm's behavior. If an operation takes some constant time $c$, you can carry that constant through the asymptotic analysis or simply drop it, because such an analysis only looks at how the cost grows with the length of the input.
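To make the unit-cost convention concrete, here is a minimal Python sketch (the function name and the 64-bit word size are assumptions of mine, not part of any standard definition) that sums an array while charging one unit per addition, independently of the operand values:

```python
def array_sum_cost(values, word_bits=64):
    """Sum an array while tallying unit-cost RAM operations.

    Each addition counts as one unit regardless of operand values,
    provided every intermediate result still fits in a machine word.
    """
    total, cost = 0, 0
    for v in values:
        total += v
        cost += 1  # one unit per addition in the RAM model
        assert total < 2 ** word_bits, "result no longer fits in one word"
    return total, cost

print(array_sum_cost([3, 20_000, 7]))          # cost is 3
print(array_sum_cost([2**40, 2**40, 2**40]))   # still 3: values fit in 64 bits
```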

For fine-grained calculations and small inputs it is wise to check whether $\mathcal{O}(5n^2)$ is actually better than $\mathcal{O}(100n)$. Asymptotically, the constants are dominated by the growth of the input length, which is why there is no difference between a constant of 3 and one of 20,000; using 1 as the unit of time therefore simplifies the calculations.
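A quick, purely illustrative check of where those two hypothetical cost functions cross over:

```python
# Smallest n at which 100*n beats 5*n**2, i.e. where the better
# asymptotic bound actually starts to win on concrete inputs.
n = 1
while 5 * n**2 <= 100 * n:
    n += 1
print(n)  # 21: for n <= 20 the "worse" 5n^2 bound is at least as cheap
```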

It is up to the analyst to decide how fine-grained the analysis should be, but also which model to use, and the differences between models are enormous. The gap between the RAM model and the Turing machine model is polynomial. Between the RAM and the real RAM (the BSS machine) the gap is vast: instead of word-sized integers, real or arbitrary-precision numbers can be stored in a single cell, with each operation on them taking a single unit of time. This creates a huge gap in computing power. The choice of model is therefore crucial, and the models are far from being equivalent.
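That gap can be sketched with repeated squaring: a unit-cost RAM (or a BSS-style machine storing arbitrary numbers in one cell) charges one step per multiplication, while a bit-level model such as a Turing machine must at least read every bit of the operands. The bit-length charge below is a crude stand-in of my own for that lower bound, not a faithful Turing machine simulation:

```python
def repeated_squaring_cost(x, k):
    """Square x k times and tally the cost under two accounting rules."""
    unit_cost = bit_cost = 0
    for _ in range(k):
        bit_cost += x.bit_length()  # a bit-level model must at least read x
        x = x * x
        unit_cost += 1              # one unit per multiplication on a unit-cost RAM
    return unit_cost, bit_cost

# 20 unit-cost multiplications versus well over a million bits merely read.
print(repeated_squaring_cost(3, 20))
```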

The convention is to use the smallest plausible model.
If you are sorting integers in the transdichotomous model, the size of the numbers does not matter; in the RAM model, the unit-cost measure applies as long as they fit within a machine word, and a higher cost applies otherwise, as sketched below.
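A sketch of that distinction (the helper name, the 64-bit word size, and the one-unit-per-word charge are all assumptions here, not a standard definition):

```python
def compare_cost(a, b, word_bits=64):
    """Cost of comparing two non-negative integers on a word RAM.

    A single unit if both operands fit in one machine word; otherwise
    one unit per word that has to be examined.
    """
    bits = max(a.bit_length(), b.bit_length(), 1)
    return -(-bits // word_bits)  # ceiling division: number of words needed

print(compare_cost(3, 20_000))           # 1: both fit in a 64-bit word
print(compare_cost(2**200, 2**200 + 1))  # 4: four 64-bit words to inspect
```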