## algorithm – reducing time complexity

Here is the link to the problem, and here is my code:

``````c
#include <stdio.h>

int main()
{
    int i, j, k, t, n, c = 0;
    int num[105] = {0};

    scanf("%d", &t);

    /* For every j up to 45360, count its divisors c and record the
       smallest j that has exactly c divisors. */
    for (j = 1; j <= 45360; j++)
    {
        c = 0;
        for (k = 1; k <= j; k++)
        {
            if (j % k == 0)
            {
                c++;
            }
        }
        if (num[c] == 0)
        {
            num[c] = j;
        }
    }

    for (i = 0; i < t; i++)
    {
        scanf("%d", &n);
        printf("%d\n", num[n]);
    }
    return 0;
}
``````

The code gives me TLE (time limit exceeded). Is there a better algorithm or procedure for this problem? Or is there some number theory I can apply to solve it? I also suspect my approach may not even be correct, in addition to being too slow.
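One common way to avoid the per-number trial division in the C code above is a sieve-style pass: instead of counting the divisors of each `j` separately (which is O(N²) overall), iterate over each `k` and increment a counter for every multiple of `k`, which is O(N log N) overall. A minimal sketch in Python, assuming the same limit of 45360 and the same "smallest number with exactly n divisors" table:

```python
LIMIT = 45360  # same upper bound as in the C code

# divisors[j] will hold the number of divisors of j
divisors = [0] * (LIMIT + 1)
for k in range(1, LIMIT + 1):
    for multiple in range(k, LIMIT + 1, k):
        divisors[multiple] += 1  # k divides `multiple`

# num[c] = smallest j whose divisor count is exactly c
num = [0] * 105
for j in range(1, LIMIT + 1):
    c = divisors[j]
    if c < 105 and num[c] == 0:
        num[c] = j

print(num[2], num[3], num[4])  # 2 4 6
```

The inner sieve loop runs LIMIT/k times for each k, and the sum of LIMIT/k over all k is O(N log N), which is the whole speedup.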

## python – Is this code's time complexity quasi-linear?

Explanation of the algorithm: given an unsorted list, I want to find the indexes of its values in another, sorted list. Note that all values are unique and both lists contain the same values, just in a different order. For example:

``````python
# O(log n)
def binary_search(data, value):
    left = 0
    right = len(data) - 1
    while left <= right:
        middle = (left + right) // 2  # integer division
        if value < data[middle]:
            right = middle - 1
        elif value > data[middle]:
            left = middle + 1
        else:
            return middle
    raise ValueError('Value is not in the list')

# O(n log n): n binary searches of O(log n) each
def find_indexes(data1, data2):
    return [binary_search(data2, value) for value in data1]

if __name__ == '__main__':
    data1 = [9, 1, 8, 2]
    data2 = [1, 2, 8, 9]
    print(find_indexes(data1, data2))
    # >> [3, 0, 2, 1]
``````

Can anyone please confirm that the function `find_indexes` has quasi-linear time complexity?

Note that this is not a real problem and I am not trying to improve this algorithm. I am just trying to illustrate the basic shape of a quasi-linear algorithm.
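As a point of contrast only (not an improvement the question is asking for): the same index lookup can be done in plain O(n) with a hash map, which makes the quasi-linear cost of the binary-search version easier to see. A sketch, using a hypothetical `find_indexes_linear` name:

```python
def find_indexes_linear(data1, data2):
    # Build a value -> index map in O(n), then do n O(1) lookups.
    pos = {value: i for i, value in enumerate(data2)}
    return [pos[value] for value in data1]

print(find_indexes_linear([9, 1, 8, 2], [1, 2, 8, 9]))  # [3, 0, 2, 1]
```

The n binary searches of O(log n) each are exactly what makes the original version O(n log n) rather than O(n).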

## asymptotic – What is the big-O time complexity of this specific web-crawler implementation?

I've built a simple "web crawler" and I want to know the time complexity of its central processing logic.

Here is a diagram of the architecture:

https://github.com/integralist/go-web-crawler

Specifically, the part of the algorithm that interests me is the `Crawler`, which:

1. sets the size of the worker pool
2. pushes tasks into a channel
3. processes tasks concurrently within the pool

In the crawler code, we:

• accept a list of N elements
• each item in the list has a nested list of items
• we look at each element and decide whether to process it or not

Note that the parser and mapper parts of the code share the same underlying design, but how each 'task' is processed differs slightly; so although I can imagine their time complexities differing in the details, the principle is always the same: we iterate over all the elements and decide what to do with each one.

What is the big-O time complexity of this?

At first glance it might seem to be just `O(n)`, since we visit each item in the list as well as each item in the nested lists.

Is that right? Or am I missing something obvious?

I do not think it's `O(n log n)`, because nothing reduces the number of iterations over the nested lists. The same goes for `O(n * n)`, since a nested list does not necessarily have the same length as the parent list. I also do not think it's `O(2^n)`, since the nested lists do not grow exponentially (they just have an unknown number of elements).
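One concrete way to see it: the total work is the sum of the nested list lengths, often written O(n·m) for n top-level items with nested lists of average length m (or O(V + E) in graph terms, which is the natural vocabulary for a crawler). A small Python sketch with hypothetical data, just counting the "decide whether to process" steps:

```python
def count_operations(items):
    # items: a list of n elements, each itself a nested list
    ops = 0
    for item in items:       # n iterations over the outer list
        for sub in item:     # len(item) iterations for this element
            ops += 1         # the "decide whether to process" step
    return ops               # total = sum of all nested lengths

# three top-level items with nested lists of lengths 2, 4, and 1
print(count_operations([[0, 1], [0, 1, 2, 3], [0]]))  # 7
```

The worker pool changes wall-clock time by a constant factor (the pool size), not the asymptotic count of operations.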

## time complexity – Relative efficiency of n tasks in 1 loop versus 1 task each in n loops?

Let's say I have 3 simple tasks: find the min, the max, and the average of an array of numbers.

A modular approach would be to write a function for each, thus iterating over the array three times. However, this seems wasteful when all the information could be collected in a single pass.

I understand that both approaches take `3n` operations, but I wonder whether one approach is better than the other in general, and why.
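Both shapes are Θ(n); the single pass only changes the constant factor (one traversal instead of three, and better cache behavior on large arrays), not the asymptotic class. A sketch of the two shapes side by side:

```python
def stats_three_passes(xs):
    # three separate Θ(n) traversals: roughly 3n operations in total
    return min(xs), max(xs), sum(xs) / len(xs)

def stats_one_pass(xs):
    # one Θ(n) traversal collecting all three results at once
    lo = hi = xs[0]
    total = 0
    for x in xs:
        if x < lo:
            lo = x
        if x > hi:
            hi = x
        total += x
    return lo, hi, total / len(xs)

print(stats_three_passes([3, 1, 4, 1, 5]))  # (1, 5, 2.8)
print(stats_one_pass([3, 1, 4, 1, 5]))      # (1, 5, 2.8)
```

Which one is "better" in practice is a readability-versus-constant-factor trade-off; asymptotically they are identical.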

## Decidability of time complexity

Let $$t : \mathbb{N} \rightarrow \mathbb{N}$$ be a time-constructible function with $$t(n) \geq n + 100$$. Show that there is no TM $$T$$ that, given the Gödel number of another TM $$M$$, decides whether $$M$$ is time-bounded by $$t$$. Can someone help me with this? I have no idea where to start.
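One standard line of attack is a reduction from the halting problem. This is only a proof sketch; the simulation-overhead bookkeeping still has to be checked carefully:

```latex
% Sketch: suppose a TM $T$ decides, from $\langle M \rangle$, whether
% $M$ is time-bounded by $t$. Given an instance $\langle M, w \rangle$
% of the halting problem, construct a TM $M'$ that, on input $x$:
\begin{enumerate}
  \item simulates $M$ on $w$ for $|x|$ steps;
  \item if $M$ has not halted within $|x|$ steps, halts immediately;
  \item otherwise, deliberately runs for more than $t(|x|)$ steps
        (possible to arrange because $t$ is time-constructible).
\end{enumerate}
% If $M$ never halts on $w$, every run of $M'$ takes the quick branch,
% so (modulo the overhead of simulating $|x|$ steps, which the full
% proof must account for) $M'$ is time-bounded by $t$. If $M$ halts
% on $w$, then $M'$ exceeds $t(|x|)$ on all sufficiently long inputs.
% Hence $T(\langle M' \rangle)$ would decide whether $M$ halts on $w$
% --- a contradiction.
```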

## Understanding dynamic frequency warping (not dynamic time warping)

I am implementing a paper that requires dynamic frequency warping (DFW) as a component. The authors describe the algorithm very briefly and cite this article: "Voice transformation using the PSOLA technique" (page 9/13 of the PDF, section 3.3). I have read it but I cannot understand its mechanics, and searching on Google has brought almost no progress. As I understand it, DFW takes two spectra (A and B) as input and computes a warping matrix converting A -> B. Can someone give me a clearer intuition?

## Turing Machines – Space and time complexity of \$L = \{a^n b^{n^2} \mid n \geq 1\}\$

Consider the following language:
$$L = \{a^n b^{n^2} \mid n \geq 1\},$$

To determine the time and space complexity on a multi-tape TM, we can use two work tapes: the first to count $$n$$, and the second to write $$n$$ blocks of $$n$$ symbols. Because of how we use the second tape, the space complexity should be $$\Theta(n^2)$$, and I would say the same about time. I thought this was fine, but the given solution is $$T_M(x) = |x| + n + 2$$, where $$x$$ is, supposedly, the input string, hence $$\Theta(|x|)$$. That also sounds right to me, so is my reasoning completely wrong, or is it just a different way to express the same thing?

Could we have reasoned differently and said, for example, that for every $$a$$ we write a symbol on the first tape, and then count the $$b$$'s by sweeping back and forth over those symbols $$n$$ times? In that case the space complexity should be just $$\Theta(n)$$, while the time complexity should remain unchanged. And what would change with a single-tape TM?
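For reference, the condition the TM has to verify is simple to state; a tiny Python checker pins it down (this models only the accepted language, not the TM's tape usage or step count):

```python
def in_L(s):
    # Accepts strings of the form a^n b^(n*n) with n >= 1.
    n = 0
    i = 0
    while i < len(s) and s[i] == 'a':
        n += 1
        i += 1
    # Everything after the a's must be exactly n*n b's.
    return n >= 1 and s[i:] == 'b' * (n * n)

print(in_L('ab'))      # True:  n = 1, one b
print(in_L('aabbbb'))  # True:  n = 2, four b's
print(in_L('aabbb'))   # False: n = 2 needs four b's
```

Note that since $$|x| = n + n^2$$ for strings in the language, $$\Theta(|x|)$$ and $$\Theta(n^2)$$ describe the same growth, which is likely why both answers "sound right".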

## algorithms – Ranking points in the 2D plane: time complexity?

I am reading about an algorithm that ranks all points in a 2D plane, and I do not understand the corresponding time-complexity formula. It has four steps:

1. Compute the median of the x coordinates of the points and divide the plane into a left and a right half.
2. Recurse on step 1; when only 1 point remains, rank(that point) = 0.
3. Sort the points by y coordinate, separately for the left and right halves.
4. Update the ranks of the right half.

I understand the idea of these steps, and step 3 has complexity $$O(n \log n)$$, but the time-complexity formula in my book is

$$T(n) = 2T(n/2) + \Theta(n),$$

Why is the last term not $$\Theta(n \lg n)$$? My current idea is that $$T(n) = \Theta(n \lg^2 n)$$, by the master theorem.
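The usual resolution is that the y-sorting is not redone from scratch at each level: as in mergesort, each level receives its halves already sorted by y and only merges them, which costs $$\Theta(n)$$, giving $$T(n) = 2T(n/2) + \Theta(n) = \Theta(n \lg n)$$. A sketch of that mergesort-style pattern, using a hypothetical `dominance_ranks` helper and assuming all coordinates are distinct (rank of p = number of points smaller than p in both coordinates):

```python
def dominance_ranks(points):
    # rank[p] = number of points q with q.x < p.x and q.y < p.y
    pts = sorted(points)            # one O(n log n) sort by x, done once
    rank = {p: 0 for p in pts}

    def solve(chunk):
        # chunk is sorted by x; returns the same points sorted by y
        if len(chunk) <= 1:
            return chunk
        mid = len(chunk) // 2
        left = solve(chunk[:mid])    # all left points have smaller x
        right = solve(chunk[mid:])
        # Theta(n) merge by y; count left points below each right point.
        merged, i, j = [], 0, 0
        while i < len(left) or j < len(right):
            if j == len(right) or (i < len(left) and left[i][1] < right[j][1]):
                merged.append(left[i])
                i += 1
            else:
                rank[right[j]] += i  # i left points beat it in x AND y
                merged.append(right[j])
                j += 1
        return merged

    solve(pts)
    return rank

print(dominance_ranks([(1, 1), (2, 3), (3, 2), (4, 4)]))
```

Only the initial sort by x is a separate $$O(n \log n)$$ step; inside the recursion the combine work is linear, which is why the recurrence's last term is $$\Theta(n)$$ and not $$\Theta(n \lg n)$$.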

## Complexity theory – Without using the time hierarchy theorem, is there another way to prove P ≠ EXP?


## Resources more complicated than time and space complexity

The Wikipedia page on computational resources states that many different resources have been defined:

In computational complexity theory, a computational resource is a resource used by some computational models in the solution of computational problems.

The simplest computational resources are computation time, the number of steps needed to solve a problem, and memory space, the amount of memory needed to solve the problem, but many more complicated resources have been defined.[citation needed]

Unfortunately, this claim carries no citation.

What are these more complicated resources that have been defined?

• Could something like the resources of parallel processing (e.g., the number of processors) count?