## time complexity – Is there a name for the most efficient class of algorithms for a particular task?

This would be analogous to the Kolmogorov complexity of a string, except that in this case I am interested in the algorithm that solves a given problem in the fewest steps.

One should therefore be able to show that no other algorithm has a lower order of complexity than the algorithm in question.

I'm asking because I'm working on a document that uses this concept, and I was surprised to realize that I didn't know any name for it, although I admit I may be embarrassed if such a name exists and I simply don't know it.

## algorithms – Number of subsets

Given a number n, count the subsets of {1, 2, …, n} with the condition that any two elements taken from a subset have an absolute difference greater than 1.
I came across this problem, but I have no idea how to solve it, not even recursively. Apparently it can be done with dynamic programming.

Input: 5
Output: 12

The subsets are: {1}, {2}, {3}, {4}, {5}, {1,3}, {1,4}, {1,5}, {2,4}, {2,5}, {3,5}, {1,3,5}.
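One standard dynamic-programming approach, sketched below in Java (the class and method names are mine): counting subsets where any two elements differ by more than 1 is the same as counting subsets with no two consecutive numbers, which satisfies a Fibonacci-style recurrence.

```java
public class SubsetCount {
    // f[i] = number of valid subsets of {1..i}, including the empty set:
    // either i is excluded (f[i-1] ways), or i is included, which forbids
    // i-1 and leaves f[i-2] ways. So f[i] = f[i-1] + f[i-2],
    // with f[0] = 1 and f[1] = 2.
    public static long count(int n) {
        if (n == 0) return 0;
        if (n == 1) return 1;
        long prev2 = 1, prev1 = 2; // f[0], f[1]
        for (int i = 2; i <= n; i++) {
            long cur = prev1 + prev2;
            prev2 = prev1;
            prev1 = cur;
        }
        return prev1 - 1; // exclude the empty set
    }
}
```

For n = 5 this gives 13 − 1 = 12, matching the listing above.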

## algorithms – Converting a function with a single parameter to a function with several parameters

I have recently been solving algorithm problems, and a pattern I have observed in some of them is the following:

Given a string or a list, perform an aggregation operation over its elements, where at each element we apply a certain recurrence to solve the problem.

An example of one of these problems is shown below.

Problem: Given n integers, return the total number of binary search trees that can be formed using the n integers.

To solve this problem, I define a recurrence relation as follows:

```
f(n) = 1                              // if n = 0
f(n) = ∑ f(i) * f(n-i-1)              // where 0 <= i <= n-1
```

It works and I get the correct answer but I want to change the function a bit.

Instead of expressing the function in terms of `f(n)`, I want to express it in terms of `f(n, i)` so that I can remove the summation. However, I cannot do it properly.

Code

My code for solving the problem by defining the recurrence in terms of `f(n)` is as follows (I know it can be optimized with DP, but that is not what I am trying to do here):

```java
public int f(int n) {
    if (n == 0)
        return 1;

    int result = 0;
    for (int i = 0; i < n; i++)
        result += f(i) * f(n - i - 1);
    return result;
}
```

I want to remove the for loop and express the function in terms of `f(n, i)` instead of `f(n)`.

Question

1. How do I convert the above recurrence from `f(n)` to `f(n, i)` and remove the summation?
• Here, `n` is the size of the list of items and `i` is the index of the item in the list that we choose as the root of the tree.
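One way to fold the loop into a second parameter (a sketch of my own, not necessarily the intended transformation): let `f(n, i)` denote the partial sum of the original summation starting at term `i`, so the loop becomes a tail of the recursion.

```java
public class CatalanTwoParam {
    // f(n, i) accumulates the terms of the original summation from index i
    // upward: f(n, i) = f(i, 0) * f(n - i - 1, 0) + f(n, i + 1),
    // with f(0, i) = 1 (base case of the original f) and f(n, n) = 0
    // (summation exhausted). The original f(n) is recovered as f(n, 0).
    public static int f(int n, int i) {
        if (n == 0) return 1;
        if (i == n) return 0;
        return f(i, 0) * f(n - i - 1, 0) + f(n, i + 1);
    }
}
```

With this encoding, `f(3, 0)` equals the original `f(3)` (the Catalan number 5), and no explicit loop remains.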

## algorithms – Median of the difference matrix

Given an array $$A = (a_i)$$ with $$n$$ elements, find the median of the
(implicit) matrix $$B = (b_{ij})$$, where $$b_{ij} = |a_i - a_j|$$.

The obvious solution would be to construct B explicitly and run a deterministic linear-time median-selection algorithm on it, which gives a complexity of $$O(n^2)$$. Is there a way (perhaps a divide-and-conquer approach combined with linear-time median selection) to get a time complexity of $$O(n \log n)$$?
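One common approach (a sketch, with names of my choosing; not necessarily the divide-and-conquer the question envisions) is to binary-search on the answer: after sorting A, the number of pairs with difference at most x can be counted in O(n) with two pointers, giving O(n log n + n log(max−min)) overall, which is near, but not strictly, O(n log n) for integer inputs.

```java
import java.util.Arrays;

public class MedianDiff {
    // Counts pairs (i < j) with a[j] - a[i] <= x; a must be sorted.
    static long countAtMost(int[] a, long x) {
        long cnt = 0;
        int i = 0;
        for (int j = 0; j < a.length; j++) {
            while (a[j] - a[i] > x) i++;
            cnt += j - i; // all pairs (i..j-1, j) qualify
        }
        return cnt;
    }

    // k-th smallest (1-based) value among |a_i - a_j| over pairs i < j:
    // binary search for the smallest x with countAtMost(x) >= k.
    public static long kthDifference(int[] a, long k) {
        int[] s = a.clone();
        Arrays.sort(s);
        long lo = 0, hi = (long) s[s.length - 1] - s[0];
        while (lo < hi) {
            long mid = lo + (hi - lo) / 2;
            if (countAtMost(s, mid) >= k) hi = mid;
            else lo = mid + 1;
        }
        return lo;
    }
}
```

The median of the off-diagonal entries is then `kthDifference(a, m)` with m chosen as the middle rank among the n(n−1)/2 unordered pairs.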

## algorithms – How to shrink N sets into N subsets, so that we can determine which set a point comes from by checking its nearest neighbor in those subsets?

Question 1:

Given N sets of points $$S_1, \dots, S_n$$ (no intersection between $$S_i$$ and $$S_j$$ when $$i \ne j$$),

I want to find subsets of $$S_1, \dots, S_n$$ (call them $$T_1, \dots, T_n$$ respectively)

so that for any point in $$S_k$$, its nearest neighbor in $$\bigcup_{i=1}^{n} T_i$$ lies in $$T_k$$;

obviously $$T_k = S_k$$ $$(k = 1, \dots, n)$$ would suffice.

Now, how do we find the subsets $$T_1, \dots, T_n$$ so that $$|\bigcup_{i=1}^{n} T_i|$$ is minimal?

Question 2:

Given N sets of points $$S_1, \dots, S_n$$ (no intersection between $$S_i$$ and $$S_j$$ when $$i \ne j$$),

I want to find sets of points $$T_1, \dots, T_n$$ (no intersection between $$T_i$$ and $$T_j$$ when $$i \ne j$$; $$T_k$$ may or may not be a subset of $$S_k$$)

so that for any point in $$S_k$$, its nearest neighbor in $$\bigcup_{i=1}^{n} T_i$$ lies in $$T_k$$;

obviously $$T_k = S_k$$ $$(k = 1, \dots, n)$$ would suffice.

Now, how do we find the sets $$T_1, \dots, T_n$$ so that $$|\bigcup_{i=1}^{n} T_i|$$ is minimal?

(The distance can be any Minkowski distance, such as the Manhattan, Euclidean, or Chebyshev distance.)
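Whatever construction is used, the required property can at least be verified by brute force. A minimal checker, assuming Euclidean distance and plain array representations (all names mine):

```java
public class NNProperty {
    static double dist(double[] p, double[] q) {
        double s = 0;
        for (int d = 0; d < p.length; d++) {
            double diff = p[d] - q[d];
            s += diff * diff;
        }
        return Math.sqrt(s);
    }

    // S[k][j] is the j-th point of set S_k; likewise for T.
    // Checks that for every point p in S[k], the nearest neighbor of p
    // in the union of all T[i] lies in T[k]. (Ties resolve to the first
    // set scanned, so exact ties may spuriously pass or fail.)
    public static boolean holds(double[][][] S, double[][][] T) {
        for (int k = 0; k < S.length; k++)
            for (double[] p : S[k]) {
                double best = Double.POSITIVE_INFINITY;
                int bestSet = -1;
                for (int i = 0; i < T.length; i++)
                    for (double[] q : T[i]) {
                        double d = dist(p, q);
                        if (d < best) { best = d; bestSet = i; }
                    }
                if (bestSet != k) return false;
            }
        return true;
    }
}
```

This runs in O(|S| · |T|) distance evaluations, so it is only a validation tool, not a construction.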

## algorithms – Combining merge sort with insertion sort – time complexity

I am learning algorithms from the CLRS book on my own, without any help. It includes an exercise that combines merge sort {$$O(n \log n)$$} with insertion sort {$$O(n^2)$$}. It states that when the merge-sort subarrays reach a certain size k, it is better to use insertion sort on those subarrays instead of merge sort. The reason given is that the constant factors in insertion sort make it fast for small n. Can anyone explain this?

It asks us to show that the n/k sublists, each of length k, can be sorted by insertion sort in O(nk) worst-case time. I found somewhere that the solution to this was O($$(n/k) \cdot k^2$$) = O(nk). How do we get the O($$(n/k) \cdot k^2$$) part? (Insertion sort costs O($$k^2$$) per sublist, and there are n/k sublists.)
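The hybrid the exercise describes can be sketched as follows (the cutoff K = 16 is an illustrative assumption, not CLRS's value; in practice it is tuned empirically):

```java
import java.util.Arrays;

public class HybridSort {
    static final int K = 16; // below this size, insertion sort's small
                             // constant factors beat merge sort's overhead

    public static void sort(int[] a) { sort(a, 0, a.length - 1); }

    static void sort(int[] a, int lo, int hi) {
        if (hi - lo + 1 <= K) { // small subarray: switch to insertion sort
            insertionSort(a, lo, hi);
            return;
        }
        int mid = lo + (hi - lo) / 2;
        sort(a, lo, mid);
        sort(a, mid + 1, hi);
        merge(a, lo, mid, hi);
    }

    static void insertionSort(int[] a, int lo, int hi) {
        for (int i = lo + 1; i <= hi; i++) {
            int key = a[i], j = i - 1;
            while (j >= lo && a[j] > key) a[j + 1] = a[j--];
            a[j + 1] = key;
        }
    }

    static void merge(int[] a, int lo, int mid, int hi) {
        int[] buf = Arrays.copyOfRange(a, lo, hi + 1);
        int i = 0, j = mid - lo + 1, k = lo;
        while (i <= mid - lo && j <= hi - lo)
            a[k++] = (buf[i] <= buf[j]) ? buf[i++] : buf[j++];
        while (i <= mid - lo) a[k++] = buf[i++];
        while (j <= hi - lo) a[k++] = buf[j++];
    }
}
```

The recursion stops at subarrays of size ≤ K, so insertion sort handles roughly n/K runs of length K, which is the O(nk) term from the exercise.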

Thank you!

## algorithms – Sliding puzzle with several solutions

I am trying to write an algorithm that produces a solution to a modified n-by-n sliding puzzle (assuming that a final state is reachable from the given starting state). The modification is as follows: tiles can belong to sets with other tiles and can occupy any of the positions of their set, and the sets are disjoint. The image below shows an example whose right side is in one of the many final states. A tile labeled with x numbers can sit in any of the x positions it is labeled with. For example:

on the left we have a 3×3 of $$\{1\}, \{2\}, \{3\}, \{4\}, \{5\}, \{6\}, \{7\}, \{8\},$$ and on the
right we have a 3×3 of $$\{1,4,6\}, \{2,8\}, \{3\}, \{5\}, \{7\}.$$

My current thinking is to treat each tile as being in its own set and to exhaustively check whether the initial state is solvable: in this example, relabeling the tiles in positions $$1$$ (top-left corner), $$2, 4, 6,$$ and $$8$$ as $$\{1\}, \{2\}, \{4\}, \{6\}, \{8\}$$ respectively, then $$\{4\}, \{2\}, \{6\}, \{1\}, \{8\}$$, etc., and running an existing algorithm for the original puzzle if a solvable state is found.

## optimization – Constraint satisfaction in a resource allocation optimization problem using evolutionary algorithms

I'm working on an allocation problem where a resource $$R$$ must be allocated to $$n$$ users, each with demand $$d_i$$. The problem has two objectives: objective 1, maximize user utility, given by $$f_1(e_i) = \log(1 + e_i/d_i)$$; and objective 2, minimize $$f_2(e_i) = e_i^2 / c_i$$, where $$e_i$$ is the allocation to the $$i$$th user and $$d_i$$, $$c_i$$ are constants. The problem has two constraints, namely: $$e_i \le d_i$$ (each allocation is at most the demand) and $$\sum_{i=1}^{n} e_i = R$$ (the allocations sum to the total available resource). I am using an evolutionary algorithm for this problem. But once the evolutionary run completes, the algorithm leaves some resource unallocated while there is still unmet user demand. I'm curious to know whether this is an expected result or whether it is due to an algorithmic or implementation flaw. I would appreciate any advice.
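Leftover resource alongside unmet demand often points to a missing repair step rather than an inherent property: evolutionary operators rarely preserve an equality constraint on their own. One common remedy is to redistribute the slack after each generation, proportionally to remaining headroom. A minimal sketch (nothing here is specific to any particular implementation; the names and tolerance are my assumptions):

```java
public class AllocationRepair {
    // Redistributes the unallocated slack R - sum(e) to users with unmet
    // demand, proportionally to their headroom d_i - e_i, without
    // violating e_i <= d_i. When slack <= total headroom, one pass
    // absorbs it exactly; the loop only guards the capped (infeasible) case.
    public static void repair(double[] e, double[] d, double R) {
        for (int pass = 0; pass < 100; pass++) {
            double used = 0, headroom = 0;
            for (int i = 0; i < e.length; i++) {
                used += e[i];
                headroom += d[i] - e[i];
            }
            double slack = R - used;
            if (slack <= 1e-12 || headroom <= 1e-12) return;
            for (int i = 0; i < e.length; i++) {
                double give = slack * (d[i] - e[i]) / headroom;
                e[i] = Math.min(d[i], e[i] + give);
            }
        }
    }
}
```

If even after repair the equality cannot hold, the instance is infeasible ($$\sum d_i < R$$), which is worth checking before blaming the algorithm.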

## algorithms – If I have an MST and add an edge to create a cycle, will removing the heaviest edge from that cycle result in an MST?

Let's say I have an MST, $$T$$. I take an edge not in $$T$$, change its weight, and add it to $$T$$ to create a cycle. Will removing the heaviest edge from this cycle result in an MST?

MST stands for minimum spanning tree of a graph. I came across these two posts:

and both concern exactly my case, $$w_{old} > w$$ and $$e \notin T$$. They both say that removing the heaviest edge will guarantee an MST, but I don't see how to prove it. The cycle property simply states that IF you have an MST, it cannot contain an edge which is the heaviest edge of a cycle of the original graph $$G$$; it does NOT say that IF you have a spanning tree that contains no edge which happens to be the heaviest edge of a cycle in the original graph $$G$$, then it is an MST.

To make the question more explicit in relation to the problem it was trying to solve, I will copy part of the first link:

> If its weight has been reduced, add it to the original MST. This will create a cycle. Scan the cycle, looking for the heaviest edge (this could select the original edge again). Remove this edge.

I don't understand why this guarantees that we find an MST. Sure, we get a spanning tree, but why does removing this heaviest edge result in a MINIMUM spanning tree?
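As an aside, the "scan the cycle" step itself is simple to implement: the cycle created by adding edge (u, v) consists of that edge plus the unique tree path from u to v, so it suffices to find the heaviest edge on that path. A sketch (representation and names are mine; weights are assumed nonnegative):

```java
import java.util.ArrayList;
import java.util.List;

public class CycleScan {
    // edges[i] = {a, b, w}: the tree's edges, weights >= 0.
    // Returns the heaviest edge weight on the tree path from u to v,
    // i.e. the heaviest tree edge on the cycle closed by adding (u, v).
    public static int maxOnPath(int n, int[][] edges, int u, int v) {
        List<List<int[]>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) {
            adj.get(e[0]).add(new int[]{e[1], e[2]});
            adj.get(e[1]).add(new int[]{e[0], e[2]});
        }
        return dfs(adj, u, v, -1, 0);
    }

    // DFS down the unique tree path, tracking the largest weight seen.
    static int dfs(List<List<int[]>> adj, int cur, int target,
                   int parent, int maxSoFar) {
        if (cur == target) return maxSoFar;
        for (int[] next : adj.get(cur)) {
            if (next[0] == parent) continue;
            int r = dfs(adj, next[0], target, cur,
                        Math.max(maxSoFar, next[1]));
            if (r >= 0) return r;
        }
        return -1; // target not reachable through this branch
    }
}
```

Comparing this maximum against the new edge's weight tells you which edge the procedure removes; the open question above is why the resulting tree is minimum.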

## algorithms – Is it possible to keep the weights of the left and right subtrees at each node of BST which has duplicate values?

Is it possible to keep the weights of the left and right subtrees at each node of a BST that has duplicate values?

I need to be able to delete a node completely (no matter how many times its key is present).

Currently, in my code, I keep a count variable in each node that records the number of times its key is present in the tree.

When inserting, I can increment the left or right subtree weight at each node along the path, depending on whether my value is smaller or larger. But how do I adjust the weights when I delete a node (since I may delete a node with count > 1)?
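One workable scheme, sketched below under my own naming assumptions (each node stores its key's multiplicity plus the total multiplicities of each subtree): look up the node's count first, subtract that full count from the weights along the search path, then unlink with standard BST deletion, repeating the adjustment inside the two-child case for the successor's move.

```java
public class WeightedBST {
    static class Node {
        int key, count = 1;            // count: multiplicity of this key
        long leftWeight, rightWeight;  // total multiplicities per subtree
        Node left, right;
        Node(int key) { this.key = key; }
    }

    Node root;

    public void insert(int key) { root = insert(root, key); }

    static Node insert(Node n, int key) {
        if (n == null) return new Node(key);
        if (key < n.key) { n.leftWeight++; n.left = insert(n.left, key); }
        else if (key > n.key) { n.rightWeight++; n.right = insert(n.right, key); }
        else n.count++;
        return n;
    }

    // Removes ALL occurrences of key: find its count, subtract it from the
    // weights along the search path, then unlink the node.
    public void deleteAll(int key) {
        int c = countOf(root, key);
        if (c == 0) return;
        adjust(root, key, c);
        root = unlink(root, key);
    }

    static int countOf(Node n, int key) {
        if (n == null) return 0;
        if (key < n.key) return countOf(n.left, key);
        if (key > n.key) return countOf(n.right, key);
        return n.count;
    }

    // Subtracts c from the subtree weights on the path to key,
    // stopping at the node that holds key itself.
    static void adjust(Node n, int key, int c) {
        if (n == null || n.key == key) return;
        if (key < n.key) { n.leftWeight -= c; adjust(n.left, key, c); }
        else { n.rightWeight -= c; adjust(n.right, key, c); }
    }

    static Node unlink(Node n, int key) {
        if (n == null) return null;
        if (key < n.key) n.left = unlink(n.left, key);
        else if (key > n.key) n.right = unlink(n.right, key);
        else {
            if (n.left == null) return n.right;
            if (n.right == null) return n.left;
            Node s = n.right;                 // successor: leftmost of right
            while (s.left != null) s = s.left;
            n.key = s.key;
            n.count = s.count;
            n.rightWeight -= s.count;         // successor leaves the right side
            adjust(n.right, s.key, s.count);  // fix weights down to it
            n.right = unlink(n.right, s.key); // successor has no left child
        }
        return n;
    }
}
```

The key point is that all weight adjustments happen before the structural unlink, so the multiplicity being removed is still known at every ancestor.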