Mergesort implements what is called the "divide and conquer" paradigm.

It divides an array into subarrays until they reach a length of one, then it starts calling merge().
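A minimal sketch of this in Python (the names `merge_sort` and `merge` are my own, not from any library):

```python
def merge(left, right):
    # Combine two already-sorted lists into one sorted list.
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def merge_sort(a):
    # Base case: a list of length 0 or 1 is already sorted.
    if len(a) <= 1:
        return a
    mid = len(a) // 2  # bisect: the two halves have x and n - x = n/2 elements
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))
```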

The resulting subarrays, ordered by when and how they were split off, can be organized into something like a binary tree.

There are at most log(n) levels in the tree, and each element is affected by a constant number of operations at each level.

Therefore, the total work is n times log(n), so, neglecting constants, you have an O(n log n) algorithm (time complexity).

We know that mergesort traditionally divides the array in two.

That is, if you have an array of length n and it is split into parts of length x and n − x, then x = n − x, i.e. x = n/2.

But if, for example, you decided to trisect the array instead, you would have to call merge() twice after the recursive calls return inside a single call to mergesort().

This means that even though you have fewer levels, you call merge() correspondingly more often, so splitting an array into an arbitrary number of equal parts leaves the asymptotic performance unchanged.
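For concreteness, here is a sketch of the three-way variant (my own naming, not a standard implementation): each call to `merge_sort_3` makes two `merge()` calls, one to combine the first two parts and one to fold in the third.

```python
def merge(left, right):
    # Combine two already-sorted lists into one sorted list.
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def merge_sort_3(a):
    if len(a) <= 1:
        return a
    # Split into three roughly equal parts; max(1, ...) guards short lists
    # so the recursion always shrinks.
    t = max(1, len(a) // 3)
    p1, p2, p3 = a[:t], a[t:2 * t], a[2 * t:]
    # Two merge() calls per merge_sort_3() call instead of one.
    return merge(merge(merge_sort_3(p1), merge_sort_3(p2)), merge_sort_3(p3))
```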

I was wondering how performance would be affected if I split the array into 2 parts, but not exactly in half.

Let's say I call mergesort() on the 2 parts of an array of length n, one with x elements, the other with n − x.
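One way to picture this setup: a variant that always splits at a fixed fraction r of the length rather than at the midpoint (again my own sketch, with the ratio r as a hypothetical parameter).

```python
def merge(left, right):
    # Combine two already-sorted lists into one sorted list.
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def merge_sort_frac(a, r=0.3):
    # Split at x ~= r * n instead of n/2, so the recursive calls get
    # x and n - x elements; clamping keeps both parts non-empty.
    if len(a) <= 1:
        return a
    x = min(max(int(len(a) * r), 1), len(a) - 1)
    return merge(merge_sort_frac(a[:x], r), merge_sort_frac(a[x:], r))
```

The result is still sorted for any 0 < r < 1; only the shape of the recursion tree (and thus the constant-factor cost) changes.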

One call runs in O(x log x), the other in O((n − x) log(n − x)).

If I make the naive assumption that I can estimate the combined cost of the 2 calls by simply adding them, I get:

x·log(x) + (n − x)·log(n − x)

If you fix n at some constant value and graph this as a function of x, you can see that the graph *always has a minimum at x = n/2*.
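This is easy to check numerically; the short script below (my own, using base-2 logs and the convention that t·log(t) = 0 at t = 0) evaluates the sum for a fixed n and finds where it is smallest.

```python
import math

def cost(x, n):
    # x*log2(x) + (n - x)*log2(n - x), with t*log2(t) taken as 0 when t = 0.
    def tlogt(t):
        return t * math.log2(t) if t > 0 else 0.0
    return tlogt(x) + tlogt(n - x)

n = 64
costs = {x: cost(x, n) for x in range(1, n)}
best = min(costs, key=costs.get)  # the x with the smallest summed cost
```

For n = 64 the minimum falls at x = 32, i.e. exactly at the bisection point, and the sum grows as x moves away from n/2 in either direction.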

This made me think that the 2 mergesort calls together perform best when they bisect the array exactly, and worse otherwise.

1) Is it okay to add the 2 asymptotic costs like this to get a rough estimate of performance?

2) Is bisecting the arrays in mergesort optimal in terms of performance?