algorithm analysis – $\Phi_1=1$ or $\Phi_1=2$ for the dynamic $\text{Table-Insert}$, where $\Phi_i$ is the potential function after the $i$th operation, as per CLRS

The following is a section from Introduction to Algorithms by Cormen et al., in the Dynamic Tables section.

In the following pseudocode, we assume that $T$ is an object representing the table. The field $table(T)$ contains a pointer to the block of storage representing the table. The field $num(T)$ contains the number of items in the table, and the field $size(T)$ is the total number of slots in the table. Initially, the table is empty: $num(T) = size(T) = 0$.

$\text{Table-Insert}(T,x)$

$1\quad \text{if } size(T)=0$

$2\quad\quad \text{then allocate } table(T) \text{ with } 1 \text{ slot}$

$3\quad\quad size(T) \leftarrow 1$

$4\quad \text{if } num(T) = size(T)$

$5\quad\quad \text{then allocate } new\text{-}table \text{ with } 2 \cdot size(T) \text{ slots}$

$6\quad\quad\quad \text{insert all items in } table(T) \text{ into } new\text{-}table$

$7\quad\quad\quad \text{free } table(T)$

$8\quad\quad\quad table(T) \leftarrow new\text{-}table$

$9\quad\quad\quad size(T) \leftarrow 2 \cdot size(T)$

$10\quad \text{insert } x \text{ into } table(T)$

$11\quad num(T) \leftarrow num(T) + 1$

For the amortized analysis of a sequence of $n$ $\text{Table-Insert}$ operations, the potential function they choose is as follows:

$$\Phi(T) = 2 \cdot num(T) - size(T)$$

To analyze the amortized cost of the $i$th $\text{Table-Insert}$ operation, we let $num_i$ denote the number of items stored in the table after the $i$th operation, $size_i$ the total size of the table after the $i$th operation, and $\Phi_i$ the potential after the $i$th operation.

Initially, we have $num_0 = 0$, $size_0 = 0$, and $\Phi_0 = 0$.

If the $i$th $\text{Table-Insert}$ operation does not trigger an expansion, then $size_i = size_{i-1}$ and $num_i=num_{i-1}+1$. Writing $\widehat{c_i}$ for the amortized cost and $c_i$ for the actual cost of the operation,

$$\widehat{c_i}=c_i+\Phi_i- \Phi_{i-1} = 3 \text{ (details not shown)}$$

If the $i$th operation does trigger an expansion, then $size_i = 2 \cdot size_{i-1}$ and $size_{i-1} = num_{i-1} = num_i - 1$, so again,

$$\widehat{c_i}=c_i+\Phi_i- \Phi_{i-1} = 3 \text{ (details not shown)}$$


Now the problem is that they make no calculation for $\widehat{c_1}$, the case of the first insertion of an element into the table (only lines 1, 2, 3, 10, and 11 of the code get executed).

In that situation, the cost is $c_1=1$, $\Phi_0=0$, and $num_1=size_1=1 \implies \Phi_1 = 2 \cdot 1 - 1 = 1$.

We see that $$\Phi_1=1. \tag{1}$$

So, $$\widehat{c_1}=c_1+\Phi_1-\Phi_0=1+1-0=2.$$

But the text says that the amortized cost is $3$. (I feel they should have said the amortized cost is at most $3$, from what I can understand.)

Moreover, the text's plot of $num_i$, $size_i$, and $\Phi_i$ against $i$ graphically represents $\Phi_1=2$, which sort of contradicts $(1)$; but as per the graph, if we assume $\Phi_1=2$, then $\widehat{c_i}=3,\ \forall i$.

I do not quite see where I am going wrong.
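The accounting in question can be replayed mechanically. Below is a quick simulation in Python (my own sketch, not from CLRS), charging 1 per elementary insertion and $num_{i-1}+1$ when an expansion copies the items, with $\Phi(T) = 2 \cdot num(T) - size(T)$:

```python
def table_insert_costs(n):
    """Simulate n Table-Insert operations; return (actual, amortized) per op."""
    num, size, phi_prev = 0, 0, 0
    results = []
    for _ in range(n):
        if size == 0:
            size = 1          # lines 1-3: allocate one slot
            cost = 1          # line 10: insert x
        elif num == size:
            cost = num + 1    # lines 5-9: copy num items, then insert x
            size *= 2
        else:
            cost = 1          # line 10 only
        num += 1
        phi = 2 * num - size  # Phi(T) = 2*num(T) - size(T)
        results.append((cost, cost + phi - phi_prev))
        phi_prev = phi
    return results

# table_insert_costs(10)[0] -> (1, 2): the first operation is amortized 2,
# while every subsequent operation comes out amortized 3.
```

This reproduces the observation above: under the potential as defined, $\widehat{c_1}=2$ and all later operations cost exactly $3$, consistent with "at most $3$".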

encryption – Wrap key operation in Azure Key Vault – symmetric keys

Could anyone explain the bolded part of the wrap key description?

> Wraps a symmetric key using a specified key. The WRAP operation
> supports encryption of a symmetric key using a key encryption key that
> has previously been stored in an Azure Key Vault. **The WRAP operation
> is only strictly necessary for symmetric keys stored in Azure Key
> Vault since protection with an asymmetric key can be performed using
> the public portion of the key.** This operation is supported for
> asymmetric keys as a convenience for callers that have a key-reference
> but do not have access to the public key material. This operation
> requires the keys/wrapKey permission.

AFAIK, all the keys in Azure Key Vault are stored at rest in HSMs. Why is key wrapping necessary for symmetric keys? What does ‘protection’ mean in this case? Using a public key to encrypt data?

If HSMs are securing all the keys in Key Vault (using their built-in symmetric keys), then why would encrypting a symmetric key be necessary, as quoted?
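For the asymmetric half of the quoted statement, the point is that the wrap direction needs only the public portion of the key. A toy sketch of why (textbook RSA with tiny numbers, purely illustrative and not secure; nothing here is Azure-specific):

```python
# Toy illustration: wrapping a symmetric key under an asymmetric key.
# Anyone holding the public portion (n, e) can wrap; only the private
# exponent d (which stays "in the vault") can unwrap.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

symmetric_key = 42                  # pretend this is an AES key as an integer

wrapped = pow(symmetric_key, e, n)  # wrap: possible with public material alone
unwrapped = pow(wrapped, d, n)      # unwrap: requires the private key
```

A symmetric key-encryption key has no public portion, so wrapping under it can only happen where the key itself lives, hence the WRAP operation being "strictly necessary" only in that case.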

symbolic – How to define a function based on an operation with parameters

I would like to define a function as the output of another operation depending on a few parameters. For example

s[t1_,t2_]:= Integrate[t^n,{t,t1,t2}] 
s[0,1]

I only get back an echo s[0,1] rather than a function dependent on $n$. What is the correct format for the definition?

I tried the following.

s[n_,t1_,t2_]:= Integrate[t^n,{t,t1,t2}] 
s[2,0,1]

Edit:
I got a conditional expression after a long wait. How do I specify $n$ to be a real number for this integral?
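If I understand the goal, Integrate accepts an Assumptions option that can declare $n$ real; a sketch of the kind of definition being attempted (the exact constraint, e.g. $n > -1$ for convergence at $0$, is my assumption about the intended integral):

```wolfram
s[n_, t1_, t2_] := Integrate[t^n, {t, t1, t2}, Assumptions -> n \[Element] Reals && n > -1]
s[2, 0, 1]  (* 1/3 *)
```

With the assumptions stated up front, the symbolic case returns a plain expression rather than a ConditionalExpression.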

linear algebra – Matrix operation to get a ‘reflected’ upper-triangular matrix from an $n \times n$ matrix

If I have an $n \times n$ matrix $\mathbf{A}$ with non-zero entries, is there a matrix operation (or a set of matrix operations) that results in the matrix $\mathbf{A'}$ such that $A'_{ij} = A_{ij}$ if $(i+j) \leq (n+1)$ and $A'_{ij}=0$ otherwise? For example, if $\mathbf{A}$ is a $3 \times 3$ matrix given by
$$
\mathbf{A} = \left(\begin{matrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{matrix}\right),
$$

are there matrix operations to get
$$
\mathbf{A'} = \left(\begin{matrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & 0 \\ A_{31} & 0 & 0 \end{matrix}\right)?
$$

Apologies for my use of the phrase “reflected upper-triangular”, as I did not know how else to refer to this kind of a matrix.
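One concrete way to express this is an elementwise (Hadamard) product of $\mathbf{A}$ with the $0$-$1$ mask $J$ where $J_{ij}=1$ iff $i+j \leq n+1$. A small sketch in Python (plain nested lists; function name is mine):

```python
def reflect_upper(A):
    """Zero everything below the anti-diagonal: keep A[i][j] (0-indexed)
    when i + j <= n - 1, i.e. (i+1) + (j+1) <= n + 1 in 1-indexed terms."""
    n = len(A)
    return [[A[i][j] if i + j <= n - 1 else 0 for j in range(n)]
            for i in range(n)]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
# reflect_upper(A) -> [[1, 2, 3], [4, 5, 0], [7, 0, 0]]
```

With NumPy the same mask can be built from the ordinary triangular one: `np.fliplr(np.triu(np.fliplr(A)))` flips left-right, keeps the upper triangle, and flips back.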

nt.number theory – Maximum number by AND operation

Help me to solve this

You are given an array of $N$ numbers, say $(A_1, A_2, \ldots, A_N)$. Define the function
$$F(x) = \sum_{i=1}^{N} (A_i \,\&\, x),$$
where $\&$ is the bitwise AND operator. We need to find the number of different values of $x$ for which $F(x)$ is maximised, under the constraint that $x$ has exactly $L$ set bits in its binary representation. Further, print $-1$ if infinitely many such $x$ exist.

For example, if the array is $(3,5,7,1,4)$ and $L=2$, the answer is $2$, because $5$ and $6$ are the numbers with $2$ set bits in their binary representation that maximise $(A_1\&x)+(A_2\&x)+\cdots+(A_N\&x)$. So the values for $x$ are $5$ and $6$, and the answer is $2$.

Another example: if the array is $(3,5,7,1,4)$ and $L=1$, the answer is $1$, because only $x=4$ satisfies the constraints.
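The key observation is that each bit contributes independently: bit $b$ adds (count of elements with bit $b$ set) $\cdot\ 2^b$ to $F(x)$ whenever $x$ has bit $b$ set. A brute-force sketch in Python (the bit width of 20 is my assumption; the infinite case, where an optimal $x$ could use a zero-contribution bit placed arbitrarily high, is flagged in a comment but not handled):

```python
from itertools import combinations

def count_max_and(arr, L, max_bits=20):
    """Count x with exactly L set bits (within max_bits) maximizing F(x).
    Does NOT detect the -1 / infinite case: that occurs when an optimal x
    can include a bit set in no array element."""
    # contribution of choosing bit b for x
    contrib = [sum((a >> b) & 1 for a in arr) * (1 << b) for b in range(max_bits)]
    best, count = -1, 0
    for bits in combinations(range(max_bits), L):
        val = sum(contrib[b] for b in bits)
        if val > best:
            best, count = val, 1
        elif val == best:
            count += 1
    return count

# count_max_and([3, 5, 7, 1, 4], 2) -> 2   (x = 5 and x = 6)
# count_max_and([3, 5, 7, 1, 4], 1) -> 1   (x = 4)
```

Since contributions are additive over bits, a greedy solution (take the $L$ largest contributions, and multiply the ways to break ties) avoids the exponential enumeration.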

javascript – Minimize sum of array by applying specific operation on x carefully chosen elements

First let me point out a few details.

Use const instead of let unless you are going to modify the value after initialization. You use it for the ofNumber variable, but there are more that deserve it.

But actually there’s often no need to define a variable at all if it is used only once. Similarly, storing a return value in a variable and immediately returning that variable is redundant; just return the value returned by the function directly.

function operation(max) {
    return Math.floor(max / 10);
}

But you could also stay consistent with the ofNumber callback like this:

const operation = (max) => Math.floor(max / 10);

Another thing is that we usually use a for loop in these cases.

for (let ops = x; ops > 0; --ops) {...}

Now, let’s analyse the big-O time complexity of your algorithm, adding a comment above each statement in your code. Let n = nums.length.

function minSum(nums, x) {
    // O(1)
    if (nums.length === 0) {
        return false;
    }
    function operation(max) {
        // O(1)
        let reducedMax = Math.floor(max / 10);
        return reducedMax;
    }
    let ops = x;
    // O(x * inner)
    while (ops > 0) {
        // O(n)
        let max = Math.max(...nums);
        // O(1)
        const ofNumber = (element) => element >= max ;
        // O(n)
        let maxIndex = nums.findIndex(ofNumber)
        // O(1)
        let operated = operation(max);
        // O(1)
        nums[maxIndex] = operated;
        // O(1)
        ops--
    }
    // O(n)
    return nums.reduce((prev,next) => prev + next, 0)
}

That makes for O(x * n) in time. O(1) in space of course, because you are never making any copies of the array.

How can we optimize this?

Well, the first thing I see is that there are 2 O(n) operations in the loop body.
Maybe we can find the maximum element's index in a single O(n) pass; then we can access the maximum element itself in O(1). This optimization will be less effective if the input is sorted, or almost sorted, in descending order, because the second O(n) operation is basically O(1) for such input.
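As a sketch of that first optimization (my own illustration, function name is mine), the maximum and its index can be found together in one pass:

```javascript
// One O(n) pass that yields both the maximum and its index, replacing
// the Math.max(...nums) + findIndex pair (two passes) in the loop body.
function maxIndexOf(nums) {
    let maxIndex = 0;
    for (let i = 1; i < nums.length; i++) {
        if (nums[i] > nums[maxIndex]) {
            maxIndex = i;
        }
    }
    return maxIndex; // nums[maxIndex] is the maximum
}
```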

Another thing is that after the loop there is another O(n) operation. Maybe we can keep track of the sum (updating it in O(1) time) from the first time we need to scan the entire array. Although this optimization is x times less significant, for small x it may help.

Of course, the most significant improvement can only arise from changing the entire algorithm’s big-O complexity from O(x * n) to something with slower rate of change. Even if it costs us increased memory complexity to say O(n).

To do that we have to leave the code for now and let’s think about the problem itself.

You wrote:

As a result, you have to perform the operation on the elements with the highest values first.

Good. But is there more? How many of the highest elements will you actually need?

At most x, right? Either the highest element divided by 10 remains the highest element, in which case you continue with that one, or the next highest element becomes the current highest. So maybe we don’t want to track just 1 highest element, but x of them. This may raise our memory complexity to O(min(x, n)), but that would still be a good tradeoff.

Well, I think I will break off at this point. I don’t wanna write it for you; I hope I gave you enough hints to come up with a faster solution on your own. Just one more thing: don’t be afraid to use your own specialized loops in such optimizations, even if it means your code will grow. It’s always trade-offs. Time, space, readability/code size… you improve one, you lose on the other. Well, sometimes not, if you got it very wrong on the first shot 😀 (not saying this is the case :)).

nist – How to decide which model should be picked for Security Operations Center design and implementation?

To design and implement a Security Operations Center, one should pick the model most suitable for the business, one that is capable of being tailored.

What are the differences between best practices, standards, and frameworks in SOC design?

summation – Donald Knuth and change of variable operation on sum

I am reading the very first Donald Knuth book on algorithms, chapter 1.2.3, “Sums and Products” (p. 30).

Donald Knuth introduces operations on sums, but I can’t fully understand one example that utilizes two of the operations:

  • 2. change of variable (the one I am particularly confused by)
  • 4. manipulating the domain

An example of change of variable whose explanation I fully understand:

$$\sum\limits_{R(i)}a_i = \sum\limits_{R(j)}a_j = \sum\limits_{R(p(j))}a_{p(j)}$$

However, I can’t understand the very first example given in the book: how did we come up with $2j$, and how does this change of variable preserve all the terms of the sum?
The steps are as follows:

$$\sum\limits_{0\le j\le n}a_j=\sum\limits_{\substack{0\le j\le n\\ j\ \mathrm{even}}}a_j + \sum\limits_{\substack{0\le j\le n\\ j\ \mathrm{odd}}}a_j$$
and then, substituting $j \to 2j$ in the even sum and $j \to 2j+1$ in the odd sum,
$$=\sum\limits_{0\le 2j\le n}a_{2j} + \sum\limits_{0\le 2j+1\le n}a_{2j+1}
=\sum\limits_{0\le j\le n/2}a_{2j} + \sum\limits_{0\le j\le (n-1)/2}a_{2j+1}.$$
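The substitution can also be sanity-checked numerically; a small sketch in Python (my own, not Knuth's) verifying that the two re-indexed sums together recover every original term exactly once:

```python
def check_even_odd_split(a):
    """Check the change of variable j -> 2j (even part) and j -> 2j+1 (odd part):
    0 <= 2j <= n gives 0 <= j <= n/2, and 0 <= 2j+1 <= n gives 0 <= j <= (n-1)/2."""
    n = len(a) - 1
    total = sum(a[j] for j in range(0, n + 1))
    even = sum(a[2 * j] for j in range(0, n // 2 + 1))            # 0 <= j <= n/2
    odd = sum(a[2 * j + 1] for j in range(0, (n - 1) // 2 + 1))   # 0 <= j <= (n-1)/2
    return total == even + odd
```

The re-indexing "stores" all terms because it is a bijection: every even index $j$ of the original sum is hit exactly once as $2j'$, and every odd index exactly once as $2j'+1$.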

reference request – Algebraic geometry additionally equipped with field automorphism operation. Ideal Membership?

I am looking for some facts on a theory which is essentially algebraic geometry but with field automorphisms added as ‘basic’ operations. (Precisely, I mean universal algebraic geometry where the (universal) algebra is a field $\mathbb{F}$ and the operations are polynomials and field automorphisms.) I am also interested in computational aspects.

I am mainly interested in the case of a field of multivariate rational functions, i.e. $\mathbb{F} = K(x,y)$.
An algebraic set in such a setting could be $$\left\{(f,g) \in (K(x,y))^2 \;\middle|\; x^2-y^2-f \cdot g\!\left(x=\frac{x+y}{2},\ y=\frac{x-y}{2}\right) = 0\right\},$$ and ideals are additionally closed under field automorphisms.

I would be grateful for a reference to any source that considers a theory like this, including its computational aspects.

What is a system operation and how do they relate to use cases?

This is in a Systems Analysis and Design context. I did search Google, and everything on the first 2 pages returned results for “operating systems”.