complexity theory – Is my reasoning wrong that $PSPACE$ should not equal $EXPTIME$?

It’s impossible for a problem to require exponential space without also requiring exponential time.

  1. Suppose an $EXPSPACE$-complete problem can be solved in $2^n$ time. It would then fall into the class $EXPTIME$.
  2. Then $EXPSPACE$-complete problems are in $EXPTIME$ if they can be solved in $2^n$ time, which means they reduce to $EXPTIME$-complete problems and vice versa.

To me, it seems this should make it easy to write a proof that $EXPTIME = EXPSPACE$.

My intuition tells me that if $EXPTIME = EXPSPACE$, then $PSPACE \neq EXPTIME$,

because $PSPACE$ is already known not to equal $EXPSPACE$ (by the space hierarchy theorem).

Question

As an amateur, what would make this reasoning right or wrong?

algorithm analysis – Analyzing space complexity of passing data to function by reference

I have some difficulty understanding the space complexity of the following algorithm.
I’ve solved the problem Subsets on LeetCode. I understand why a solution’s space complexity would be O(N * 2^N), where N is the length of the initial vector: in those solutions all the subsets (vectors) are passed by value, so every subset is held on the recursion stack. But I passed everything by reference. This is my code:

class Solution {
public:
    vector<vector<int>> result;

    void rec(vector<int>& nums, int& position, vector<int>& currentSubset) {
        if (position == nums.size()) {
            result.push_back(currentSubset);
            return;
        }

        currentSubset.push_back(nums[position]);
        position++;
        rec(nums, position, currentSubset);
        currentSubset.pop_back();
        rec(nums, position, currentSubset);
        position--;
    }

    vector<vector<int>> subsets(vector<int>& nums) {
        vector<int> currentSubset;
        int position = 0;
        rec(nums, position, currentSubset);
        return result;
    }
};

Would the space complexity be O(N)? As far as I know, passing by reference doesn’t allocate new memory, so every possible subset would be contained in the same vector, which was created before the recursive calls.
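For intuition, here is a Python sketch of the same pass-by-reference idea (my own translation, not your exact code): a single shared buffer is mutated in place, so the recursion uses O(N) stack frames plus one O(N) buffer, while the result still stores all 2^N subsets, i.e. O(N * 2^N) output space.

```python
def subsets(nums):
    result = []
    current = []  # one shared buffer, analogous to passing currentSubset by reference

    def rec(position):
        if position == len(nums):
            result.append(current[:])  # copy only when a finished subset is emitted
            return
        current.append(nums[position])  # include nums[position]
        rec(position + 1)
        current.pop()                   # exclude nums[position]
        rec(position + 1)

    rec(0)
    return result
```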

I would also appreciate it if you told me how to estimate space complexity when working with references in general. Those are the only cases where I hesitate about the correctness of my reasoning.

Thank you.

algorithms – Runtime Complexity of finding a loop in an array

I’m having a hard time understanding the runtime complexity of the following algorithm:

public class Solution {
    public boolean circularArrayLoop(int[] nums) {
        int n = nums.length;
        if (n < 2) {
            return false;
        }
        for (int i = 0; i < n; i++) {
            if (nums[i] == 0) {
                continue;
            }
            int slow = i, fast = advance(nums, i);
            while (nums[slow] * nums[fast] > 0 && nums[advance(nums, fast)] * nums[slow] > 0) {
                if (slow == fast) {
                    // one-element loop does not count
                    if (slow == advance(nums, slow)) {
                        break;
                    }
                    return true;
                }
                slow = advance(nums, slow);
                fast = advance(nums, advance(nums, fast));
            }

            // loop not found, set all the elements along the way to 0
            slow = i;
            int val = nums[i];
            while (nums[slow] * val > 0) {
                int next = advance(nums, slow);
                nums[slow] = 0;
                slow = next;
            }
        }
        return false;
    }

    public int advance(int[] nums, int i) {
        int n = nums.length;
        return i + nums[i] >= 0 ? (i + nums[i]) % n : n + ((i + nums[i]) % n);
    }
}

I’ve been told that the complexity should be $O(n)$ because each element is visited at most four times: by slow, by fast, by zero-marking, and by zero-checking. I cannot quite agree: even after an element is marked zero, the iterations for the remaining elements still have to check whether it is zero or not, so it has to be checked at least $n-1$ times.
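To experiment with the visit-counting argument, here is a rough Python port of the Java code above (my own translation; Python’s `%` already returns a non-negative result, so `advance` simplifies):

```python
def advance(nums, i):
    # Python's % always yields a value in [0, n), even when i + nums[i] < 0
    return (i + nums[i]) % len(nums)

def circular_array_loop(nums):
    n = len(nums)
    if n < 2:
        return False
    for i in range(n):
        if nums[i] == 0:
            continue
        slow, fast = i, advance(nums, i)
        # products > 0 ensure slow and fast move in the same direction
        while nums[slow] * nums[fast] > 0 and nums[advance(nums, fast)] * nums[slow] > 0:
            if slow == fast:
                if slow == advance(nums, slow):  # one-element loop does not count
                    break
                return True
            slow = advance(nums, slow)
            fast = advance(nums, advance(nums, fast))
        # no loop found starting at i: zero out the elements along the way
        slow, val = i, nums[i]
        while nums[slow] * val > 0:
            nxt = advance(nums, slow)
            nums[slow] = 0
            slow = nxt
    return False
```

Adding a counter inside advance and running it on inputs of growing size is one way to check empirically whether the total work grows linearly.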

What is the time complexity of sorting n words by length and then alphabetically? Should we consider the length of the strings in the complexity?

Let’s assume I have a list of some words found in the English dictionary:
(“hat”, “assume”, “prepare”, “cat”, “ball”, “brave”, “help” …. )

I want to sort these words (which are n in number) in a way, such that they are ordered based on their length, but if 2 words have the same length, they are ordered alphabetically.

What is the time complexity of this sorting operation?

Would it be fair to say that the complexity is just O(n log n), without taking the length of the strings into consideration? If the largest length is S, can the complexity also involve a factor of S?
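As a concrete sketch of the two-key ordering (the word list is the one from the question): sorting by the pair (length, word) gives exactly the order described, and since each comparison can touch up to S characters, a comparison sort does O(S * n log n) character work in the worst case, not O(n log n).

```python
words = ["hat", "assume", "prepare", "cat", "ball", "brave", "help"]

# Primary key: length; tie-break: the word itself (lexicographic).
# Each tuple comparison may cost O(S) where S is the longest word length.
ordered = sorted(words, key=lambda w: (len(w), w))
```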

procedural programming – Trouble using For to create multiple graphs of increasing complexity

I was doing some math research, and I was investigating certain functional graphs involving modular arithmetic. I was trying to create multiple graphs at once using the For function (f(x) has been altered for the sake of confidentiality):

For[n = 1, n < 10, n++,
 f[x_] := Mod[x, n];
 Graph[Table[x -> f[x], {x, 0, n - 1}], VertexLabels -> "Name"]]

but I don’t get any output when I run the code. However, when I ran this without For, it worked perfectly.

I’ve tried taking away Graph, but running the code with just Table doesn’t work either. I tried using Print while keeping the top 2 lines, and that worked. I believe the issue is something to do with looping Table, but I could be wrong. Can anyone help?

turing machines – What are the three points of view in Kolmogorov Complexity?

I was reviewing for my finals and found this question, which I have totally no clue about.

Compare the following two statements from three points of view:

  1. There exists a constant $c > 0$ such that for all palindromes $x \in \{0, 1\}^*$ we have $K(x) \leq \lfloor |x| / 2 \rfloor + c$.

  2. There exists a constant $c > 0$ such that for all $x \in \{0, 1\}^*$ we have $K(\overline{x}) \leq K(x) + c$, where $\overline{x}$ is the complement of $x$.

So what are the three points of view I am supposed to use, and where should I start?

Computational Complexity of the Frobenius Norm

How can I calculate the computational complexity of taking the Frobenius norm of each column vector (M x 1) in an M x N matrix and then sorting the norm values in descending order? To clarify: I have N column vectors in a matrix, I want to calculate the magnitude of each column, and finally I want to arrange the magnitudes in descending order. What will be the computational complexity of this task in Big-O notation?
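A minimal Python sketch of the cost accounting (the function name is mine; note the Frobenius norm of a single column is just its Euclidean norm): each column norm costs O(M), so computing all N of them is O(M * N), and sorting the N values adds O(N log N), for O(M * N + N log N) overall.

```python
import math

def column_norms_desc(A):
    """Norm of each column of an M x N matrix (list of rows), sorted descending."""
    M, N = len(A), len(A[0])
    # O(M) per column, O(M * N) total
    norms = [math.sqrt(sum(A[i][j] ** 2 for i in range(M))) for j in range(N)]
    # O(N log N) for the sort
    return sorted(norms, reverse=True)
```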

time complexity – Is finding a solution to a system of 2SAT equations separated by OR (DNF form) in NP?

I want to know whether finding a solution to a specific number of 2SAT equations separated by OR gates (DNF form, as below) is in P or NP.

The formula has n variables in total, and each clause is a 2SAT formula in itself over a subset of the variables 1 to n.
Example:

F = ((x1 || x2 ) && ( !x2 || x3) && (x3 || x4)) || ((x2 || x5) && (x3 || x6) && (!x4 || !x6)) || ….

The formula F is a DNF of 2SAT formulas and has, say, m clauses. Is finding a solution to it in NP? If yes, how?

Also, I specifically want to know whether finding falsifying assignments of F is in P or NP as well.
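For small n, the structure can be explored by brute force; the integer-literal encoding below is my own convention (+i for xi, -i for !xi), not part of the question. The key structural fact it exercises: F is satisfiable exactly when at least one disjunct, which is an ordinary 2SAT instance on its own, is satisfiable.

```python
from itertools import product

def block_satisfied(block, assignment):
    """Check one 2-CNF block (list of 2-literal clauses) under a dict var -> bool."""
    return all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in block)

def f_satisfiable(blocks, n):
    """Brute force over all 2^n assignments: F (an OR of 2-CNF blocks) is
    satisfied iff some assignment satisfies at least one block."""
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: bits[i] for i in range(n)}
        if any(block_satisfied(b, assignment) for b in blocks):
            return assignment
    return None
```

For example, the first two disjuncts of the F above would be encoded as `[(1, 2), (-2, 3), (3, 4)]` and `[(2, 5), (3, 6), (-4, -6)]`.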

Complexity of finding choice of entries of matrices

Suppose I have an $n \times n$ matrix with entries either $x$ or $y$. If I want to select/circle $n$ entries such that each row has exactly one circled entry, each column has exactly one circled entry, and all circled entries are $x$ (if such a circling exists), what complexity class does this problem belong to? Thanks!
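For small $n$, the problem can be stated very directly as a brute-force search over permutations (a sketch to pin down the question, not an efficient algorithm; function name is mine):

```python
from itertools import permutations

def all_x_selection(M):
    """Find a permutation p with M[i][p[i]] == 'x' for every row i, else None.
    Circling entry (i, p(i)) picks one entry per row and per column."""
    n = len(M)
    for p in permutations(range(n)):
        if all(M[i][p[i]] == 'x' for i in range(n)):
            return p
    return None
```

One way to view the structure: rows and columns form the two sides of a bipartite graph with an edge wherever the entry is $x$, and the circling asked for is exactly a perfect matching in that graph.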

complexity theory – Why minimum vertex cover problem is in NP

I am referring to the definition of the minimum vertex cover problem from the book Approximation Algorithms by Vijay V. Vazirani (page 23):

Is the size of the minimum vertex cover in $G$ at most $k$?

and right after this definition, the author states that this problem is in NP.

My question: What would be a yes certificate?

Indeed, our non-deterministic algorithm could guess a subset of vertices, denoted by $V’$, and we can verify in polynomial time that $V’$ is a vertex cover of some cardinality, but how could we possibly show that $V’$ is minimum in polynomial time?
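To make the verification step concrete, here is a sketch in Python (names mine) of what a verifier for the decision question "is there a cover of size at most $k$?" checks: only the size bound and edge coverage, both clearly polynomial-time; minimality of the certificate is never checked.

```python
def verify_cover(edges, cover, k):
    """Accept iff `cover` is a vertex cover of size at most k.
    edges: list of (u, v) pairs; cover: iterable of vertices."""
    cover = set(cover)
    return len(cover) <= k and all(u in cover or v in cover for (u, v) in edges)
```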