Dynamic Programming – Why is my iterative solution slower than a recursive solution?

I'm trying to come up with a solution to this problem: https://www.spoj.com/problems/COINS/.

But oddly, my iterative solution:

#include <iostream>
#include <algorithm>
using namespace std;

int main() {
    int n;
    while (cin >> n) {
        long long int dp[n + 2];
        dp[0] = 0;
        for (long long int i = 1; i <= n; i++)
            dp[i] = max(dp[i / 2] + dp[i / 3] + dp[i / 4], i);
        cout << dp[n] << endl;
    }
    return 0;
}

gets a TLE, whereas the recursive solution for that (not mine) is accepted in no time:

#include <cstdio>
#include <iostream>
#include <map>

using namespace std;

map<int, long long> dp;

long long f(int n) {
    if (n == 0) return 0;

    if (dp[n] != 0) return dp[n];

    long long t = f(n / 2) + f(n / 3) + f(n / 4);

    if (t > n) dp[n] = t;
    else dp[n] = n;

    return dp[n];
}

int main() {
    int n;

    while (scanf("%d", &n) == 1) printf("%lld\n", f(n));

    return 0;
}

Shouldn't it be the other way around? I'm really confused.
Please tell me how I can improve the first code.
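
In case it helps to compare, here is a minimal sketch of one common way to speed the iterative version up, assuming the input bound is around 10^9 as in COINS: fill a bottom-up table only up to a fixed limit and memoize the few larger values in a map. The LIMIT constant and the helper name best are illustrative choices, not from either code above.

    #include <cstdio>
    #include <map>
    #include <algorithm>
    using namespace std;

    const long long LIMIT = 1000000;     // assumed table size; tune to the memory limit
    long long dp[LIMIT];                 // dp[i] = best exchange value for small i
    map<long long, long long> memo;      // cache for the sparse large values reached via n/2, n/3, n/4

    long long best(long long n) {
        if (n < LIMIT) return dp[n];            // small values come from the precomputed table
        auto it = memo.find(n);
        if (it != memo.end()) return it->second;
        long long v = max(n, best(n / 2) + best(n / 3) + best(n / 4));
        memo[n] = v;
        return v;
    }

    int main() {
        dp[0] = 0;
        for (long long i = 1; i < LIMIT; ++i)   // one O(LIMIT) precomputation, instead of O(n) per query
            dp[i] = max(i, dp[i / 2] + dp[i / 3] + dp[i / 4]);
        long long n;
        while (scanf("%lld", &n) == 1)
            printf("%lld\n", best(n));
        return 0;
    }

The point is that a single query only touches values reachable from n by repeated division by 2, 3 and 4, so filling a table all the way up to n is wasted work once n gets close to 10^9.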

c++ – Calling a recursive function with the decrement operator

I wrote some code to solve a factorial-sum problem. When I created the recursive function for the factorial calculation, I applied the decrement operator "--" to the number in the function call, which produced a wrong answer. Can anyone explain why using the operator changes the answer? The code is below.

#include <cstdio>
#include <iostream>
using namespace std;

long long int factorial(long long int num) {
    if (num == 1 || num == 0) return 1;
    return num * factorial(--num);
}

int main() {
    long long int M, N;
    while (scanf("%lli %lli", &M, &N) != EOF)
        cout << factorial(M) + factorial(N) << endl;
    return 0;
}

recursive algorithms – Solving $T(n) = 2T(n/2) + 5$ using the master theorem

I have some questions regarding solving the recurrence relation $T(n) = 2T(n/2) + 5$ with the help of the master theorem.

$$ T (n) = 2T (n / 2) + 5 $$

I managed to get a time complexity of Θ(n log n) as the answer using case 2 of the master theorem, but I am fairly sure that is incorrect. The problem is that I have never encountered a single constant as $f(n)$ before and I do not know how to handle it, so I would like some clarification on this.
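
For concreteness, here is a worked application under the standard statement of the master theorem (my own sketch, not part of the original question):

$$a = 2, \qquad b = 2, \qquad f(n) = 5 = \Theta(1), \qquad n^{\log_b a} = n^{\log_2 2} = n.$$

Since $f(n) = O(n^{\log_b a - \epsilon})$ with, say, $\epsilon = 1$, it is case 1 (not case 2) that applies, giving

$$T(n) = \Theta\!\left(n^{\log_b a}\right) = \Theta(n).$$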

data structures – Output of a C++ recursive function

main called IsDoubleString, then IsDoubleString called IsDoubleString. When IsDoubleString returns, it returns to IsDoubleString, because IsDoubleString called IsDoubleString. In fact, IsDoubleString will return to IsDoubleString as many times as IsDoubleString called IsDoubleString… and then IsDoubleString returns to main.


That's probably confusing, so let's talk about the call stack.

When you make a call, the program has to go back to where the call was made. Now, before you quote me on that and say that's exactly what you are asking about, please keep reading.

If function A calls function B, and then function B calls function C… the runtime has to remember all of this so that C returns to B and B returns to A. This means the runtime cannot just store where to return in a single variable… it needs some kind of data structure that grows.

What kind of data structure? Let's see … the first item to retrieve is the last item added. So, it's LIFO. The runtime needs a stack.


What happens when A calls B, and then B calls B? Well, first, when A calls B, the location in A to return to is pushed onto the stack, and then when B calls B, the location in B to return to is pushed onto the stack. This means the stack now contains a position in A and then a position in B.

When you return, you pop the top first, which is the position in B. So B returns to B. Then it pops the position in A, so B returns to A.

I am glossing over a lot about calling conventions here. I would get roasted if I did not mention that this is an oversimplification of real calling conventions.


Ok, that could still be confusing. Let's try again …

  1. A calls B, the position in A is pushed onto the stack:

    Stack: {..., `A`}

  2. B calls B, the position in B is pushed onto the stack:

    Stack: {..., `A`, `B`}

  3. B returns, the position in B is popped off the stack:

    Stack: {..., `A`}

  4. B returns again, the position in A is popped off the stack:

    Stack: {...}

    

Let's see what happens if B calls itself twice:

  1. A calls B, the position in A is pushed onto the stack:

    Stack: {..., `A`}

  2. B calls B, the position in B is pushed onto the stack:

    Stack: {..., `A`, `B`}

  3. B calls B, the position in B is pushed onto the stack:

    Stack: {..., `A`, `B`, `B`}

  4. B returns, the position in B is popped off the stack:

    Stack: {..., `A`, `B`}

  5. B returns, the position in B is popped off the stack:

    Stack: {..., `A`}

  6. B returns again, the position in A is popped off the stack:

    Stack: {...}

    

We can see that when B called itself twice, a position in B was pushed onto the stack twice, and as a result, B returned to B twice.


In the abstract, for a recursive function (one that calls itself), it makes sense that it returns to itself. After all, when you make a call, the program must return to the place where the call was made. Now you can quote me on that. Once it has returned to itself as many times as it called itself, it can then return to the original caller.

With that in mind,

main called IsDoubleString, then IsDoubleString called IsDoubleString. When IsDoubleString returns, it returns to IsDoubleString, because IsDoubleString called IsDoubleString. In fact, IsDoubleString will return to IsDoubleString as many times as IsDoubleString called IsDoubleString… and then IsDoubleString returns to main.
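
As an illustration only (the real IsDoubleString is not shown in the question), here is a tiny C++ sketch, with made-up names, that prints on entry and exit and so makes the LIFO order of returns visible:

    #include <iostream>

    // Toy stand-in for a recursive function; the name and the depth limit
    // are invented for illustration, not taken from the original question.
    void recurse(int depth) {
        std::cout << "enter depth " << depth << '\n';   // a return position is pushed for this call
        if (depth < 3)
            recurse(depth + 1);                         // the function calls itself
        std::cout << "leave depth " << depth << '\n';   // positions come back in reverse (LIFO) order
    }

    int main() {
        recurse(0);   // main calls the recursive function; main's position is pushed first ...
        return 0;     // ... and is the last one returned to
    }

The "leave" lines print in the opposite order of the "enter" lines, which is exactly the stack behaviour described above.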

numerical integration – How to integrate a recursive function with respect to the same variable?

I want to calculate an integral like this:

$$
L_{n} = \int \mathrm{Tra}(E)\, (E - \mu)^{n}\, f'(E)\, dE
$$

Here $f'(E)$ is just the derivative of the Fermi-Dirac distribution, nothing to worry about.

My problem is this: Tra(E) is obtained from a recursive method earlier in the code, and Mathematica cannot handle it symbolically since it involves a large number of steps. As you can see, Tra(E) depends on the energy E, so I could evaluate it beforehand${}^{*}$, but Tra(E) also sits inside an integral that depends on the energy. How can I integrate that?

The following is the code after the recursive method.

Table[
 {
  K = 8.61*10^-5;                                (* Boltzmann constant in eV/K *)
  T = 300;
  q = 1.60217662*10^-19;
  f0 = 1/(E^((Energy - \[Micro])/(K*T)) + 1);    (* Fermi-Dirac distribution *)

  Deriv = -D[f0, Energy];

  L1 = N[Integrate[
     Tra*(Energy - \[Micro])*Deriv, {Energy, 0, maxvalue}],
    5];

  L0 = N[Integrate[Tra*Deriv, {Energy, 0, maxvalue}], 5];

  S = (1/(q*T))*L1/L0;

  \[Micro], S

 },
 {\[Micro], -0.4, 0.4, 0.1}]

*: If I do that, it just becomes a constant in the integral.

computability – Is a language whose Turing machine does not halt for some positive cases, but does for others, not recursive?

Say the language $L$ is recursively enumerable but not recursive.
Say $a$ and $b$ are symbols of the alphabet and $w$ is a word.
Let's say we have the following language:

$L' = \{aw \mid w \in L\} \cup \{bw \mid w \notin L\}$

That is, $L'$ consists of the words that are in $L$ with an $a$ prepended and the words that are not in $L$ with a $b$ prepended.

Is $L'$ not recursive? If we have a Turing machine $TM$ that decides $L'$, then $TM$ will halt for some positive cases ($w \in L$) but for other positive cases ($w \notin L$) it will not halt. So is it neither recursive nor recursively enumerable?

From what I understood:

  • Recursively enumerable: the Turing machine always halts if $w \in L$; otherwise it may or may not halt.

  • Recursive: it always halts.

  • Recursively enumerable but not recursive: it halts only if $w \in L$; otherwise it loops.

  • Not recursively enumerable: no such Turing machine exists.

So I do not know how to classify a language whose Turing machine halts for some words but not for others.

Closed form for recursive sequence

Can anyone show me how to derive a closed form for this recursive sequence:

$I_n = I_{n-1} + (n-1) I_{n-2}$

$I_1 = 1$, $I_2 = 2$

This is the recurrence for counting involutions.

This sequence is A000085 on the OEIS.
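
For reference, the summation formula usually quoted for A000085 (stated from memory here, so it is worth checking against the OEIS entry) is

$$I_n = \sum_{k=0}^{\lfloor n/2 \rfloor} \frac{n!}{2^{k}\, k!\, (n-2k)!},$$

where $k$ counts the number of 2-cycles in the involution. Whether one accepts this as a "closed form" or insists on something without a sum is a matter of taste.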

unity – Getting a recursive method to run to completion within Update?

OK, so I have a method that calls itself, because it was the only way I could get the behaviour I needed. This is a Tetris clone I am making, and this method checks for full lines along the X axis.

void CheckForFullLine(int start_Y)
{
    for (int y = start_Y; y < frozenCells.GetLength(1); y++)
    {
        for (int x = 0; x < frozenCells.GetLength(0); x++)
        {
            if (!frozenCells[x, y].isFilled)
            {
                CheckForFullLine(start_Y + 1);
                return;
            }

            if (x == CELL_COUNT_X - 1)
            {
                tetrisLinesThisTick++;
                Debug.Log("tetris lines this tick: " + tetrisLinesThisTick + " on tick: " + tick);
                ClearLine(y);
            }
        }
    }
}

But when I test the game, most of the time (more than 9 times out of 10) when you clear multiple lines, the debug log says we only got one line this "tick".

Here is the Update method where, at the top, I reset tetrisLinesThisTick to 0 at the start of each frame.

private void Update()
{
    tick++;
    // reset the tetris-lines counter for this tick:
    tetrisLinesThisTick = 0;

    ClearScreen();
    CheckForFullLine(0);
    HandleMovementAndBlockCreation();
    DrawFallingPiece();
    DrawFrozenBricks();
}

For example, if I clear two lines at once, the debug log prints the "... linesPerTick 1" message twice in a row.

Strangely, I sometimes clear 2 or 3 lines and it reports the correct amount, but that is rare and I do not know how to reproduce it at the moment.

Another very strange thing (the code in question might not even be responsible for this part)… very rarely a frozen block will turn grey after a drop, so I can still "use" it in the game to create more lines, but it does not disappear like the others; then I complete some extra lines using this grey block and it disappears normally :S (if you want to help me with this part, here is more code :P):

void ClearLine(int line)
{
    // TODO: Add a score

    // clear the completed line
    for (int x = 0; x < frozenCells.GetLength(0); x++)
    {
        frozenCells[x, line].isFilled = false;
    }

    // shift all lines down // todo: detect a multi-line tetris to give better scores ...
    for (int y = line; y < frozenCells.GetLength(1); y++)
    {
        for (int x = 0; x < frozenCells.GetLength(0); x++)
        {
            if (frozenCells[x, y].isFilled)
            {
                if (y > 0)
                {
                    frozenCells[x, y].isFilled = false;
                    frozenCells[x, y - 1].isFilled = true;
                }
            }
        }
    }
}

For some reason, I have a feeling it may be the recursive method I wrote, and the fact that the 'linesPerTick' value gets reset before the other lines are cleared (in the same move), but I could be completely wrong about this!

The entire project is just one .cs file and a block texture I made in GIMP. I could paste the whole script here if you want to try it, just let me know. Any help is much appreciated!

algorithms – Analysis of basic recursive functions

I'm trying to understand the contrast between the running time of this function

public static String f(int N) {
    if (N == 0) return "";
    String s = f(N / 2);
    if (N % 2 == 0) return s + s;
    else return s + s + "x";
}

and this function

public static String f(int N) {
    if (N == 0) return "";
    if (N == 1) return "x";
    return f(N / 2) + f(N - N / 2);
}

where string concatenation takes time proportional to the size of the strings.

So far, I think the first function makes log(N) calls for input N and the second makes 2 log(N). Is that correct? Beyond that, I'm not sure how to think about the number of operations performed in each of these calls. I know that for the first function, in the base case there are 0 operations (no concatenation), then 1 operation (concatenating two empty strings with a string of length 1?), then 2 operations. In general, I believe the string produced by a call with N has length N? But I don't know where to start thinking about how it all adds up.

For the second one, too, I am a little lost. I just need a way to approach the analysis. Keep in mind that I am not very good with symbols, so if you answer using symbols, I would appreciate an explanation that helps me follow them as well.
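
A possible way to set up the analysis (my own sketch, assuming that concatenating strings of total length $L$ costs $\Theta(L)$ and that a call with input $N$ produces a string of $N$ characters) is:

$$\text{first: } T(N) = T(N/2) + \Theta(N) = \Theta(N), \qquad \text{since } N + N/2 + N/4 + \dots \le 2N;$$

$$\text{second: } T(N) = T(\lceil N/2 \rceil) + T(\lfloor N/2 \rfloor) + \Theta(N) \approx 2\,T(N/2) + \Theta(N) = \Theta(N \log N).$$

On the call counts: the first function makes about $\log_2 N + 1$ calls, while the second makes about $2N - 1$, because its recursion tree has roughly $N$ leaves.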

algorithms – Number of function calls in recursive code

I am new to recursion. I am working through some practice questions and am wondering what the technique is for going from a piece of recursive code to identifying the number of function calls it makes.

function win(n)
    if n ≤ 3, let memo(n) = "yes"
    if memo(n-2) = "no" or memo(n-3) = "no", let memo(n) = "yes"
    otherwise let memo(n) = "no"
    let win(n) = memo(n)

Basically, what is the number of steps this program uses to compute $\mathit{win}(n)$? Is $\mathrm{steps}(n)$ linear in $n$, polynomial in $n$, or exponential in $n$? Is it possible to use this code to compute $\mathit{win}(1000)$ on a fast supercomputer? Justify your answer.
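
One generic technique, independent of the exact pseudocode above, is to instrument the recursion with a counter and watch how it grows. Here is a minimal C++ sketch with a made-up, non-memoized recursive function (only an illustration of the counting idea, not a faithful transcription of win):

    #include <cstdio>

    long long calls = 0;   // incremented once per invocation

    // Hypothetical recursive function used only to demonstrate counting;
    // it branches on n-2 and n-3 like the pseudocode, but without memoization.
    bool win(int n) {
        ++calls;
        if (n <= 3) return true;
        return !win(n - 2) || !win(n - 3);
    }

    int main() {
        for (int n = 5; n <= 40; n += 5) {
            calls = 0;
            win(n);
            std::printf("n = %2d  calls = %lld\n", n, calls);   // watch how the count grows with n
        }
        return 0;
    }

Printing the counts for increasing n usually makes it obvious whether the growth is linear, polynomial, or exponential; if memoization is added so that each argument is computed at most once, the count becomes linear in n.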