recursion – Time complexity of a recursive algorithm with two lists as parameters

The goal is to find the function T that describes the time complexity of an algorithm that merges two lists (the lists are given inversely sorted). The problem is that the recursive calls depend on an external factor other than the input size. How may one construct a piecewise recursive function T for such an algorithm? How can a piecewise function T(n1, n2) (where n1 and n2 are the lengths of the lists) be defined when the recursive calls do not depend on n1 or n2?
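As a point of comparison (the algorithm in question isn't shown, so this shape is assumed), a merge that consumes one element from one of the lists per recursive call admits a piecewise recurrence such as:

$$
T(n_1, 0) = c_0 n_1, \qquad T(0, n_2) = c_0 n_2, \qquad
T(n_1, n_2) \le c + \max\{T(n_1 - 1, n_2),\, T(n_1, n_2 - 1)\},
$$

which unrolls to $T(n_1, n_2) = O(n_1 + n_2)$. When the recursion depends on the data rather than on $n_1, n_2$ alone, the usual remedy is to bound the data-dependent quantity by a function of the sizes and analyze that worst case.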

process scheduling – Average time for jobs in a batch processing system

I saw this question and want to know if anyone can help me solve it.

Four jobs arrive at a batch processing system at the same time, and the execution time for each job is 2 hours. They run on a single processor. The average time for the jobs to finish is:

  • 2.5 hours
  • 1 hour
  • 5 hours
  • 8 hours
  • 4 hours
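Not part of the original question, but a quick way to check the arithmetic: with all four jobs arriving at t = 0 and run back-to-back on one processor (order doesn't matter, since all jobs are identical), they finish at hours 2, 4, 6, and 8. A small sketch, assuming non-preemptive run-to-completion scheduling:

```python
def average_completion_time(burst_times):
    """Average completion time for jobs run back-to-back on one processor,
    all arriving at time 0 (non-preemptive, run to completion)."""
    finish = 0
    completions = []
    for burst in burst_times:
        finish += burst  # each job finishes when the previous one ends
        completions.append(finish)
    return sum(completions) / len(completions)

print(average_completion_time([2, 2, 2, 2]))  # -> 5.0
```

The completions average to (2 + 4 + 6 + 8) / 4 = 5, which matches one of the listed options.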

migration – How to migrate dates from Drupal 7 datetime format to Drupal 8 date-only

In my Drupal 7 site the date is stored in the database as 2020-08-10 00:00:00.
When I do the migration without any formatting (i.e. field_blog_post_date: field_blog_post_date), the date is migrated into the D8 table with the exact same formatting; however, it does not show in the view of the content.
I tried to use the format_date plugin as below, but I get the error DateTime::createFromFormat() expects parameter 2 to be string, array given (warning in DateTimePlus.php:251):

 field_blog_post_date:
   plugin: format_date
   source: field_blog_post_date
   from_format: 'Y-m-d H:i:s'
   to_format: 'Y-m-d'
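A commonly suggested pattern for this particular error (a sketch only; the field and column names are taken from the question, and I haven't verified it against this site) is that the D7 source value arrives as an array of deltas rather than a plain string, so format_date has to run inside a sub_process pipeline against the `value` column:

```yaml
 field_blog_post_date:
   plugin: sub_process
   source: field_blog_post_date
   process:
     value:
       plugin: format_date
       from_format: 'Y-m-d H:i:s'
       to_format: 'Y-m-d'
       source: value
```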

transactions – Sequence valid before time

I’m studying the sequence field, and I set a transaction to be valid after 512 seconds.
I use regtest, starting from a clean blockchain, and then mine 114 blocks.
At this point the miner creates a transaction and tries to send it.

My decoded transaction:

{
  "txid": "59ff4adafb47a5b22c6434af38f6e138c9008356778ed8b308c48029d7d4032f",
  "hash": "1bef48f96d18f1c78021b7e4b0a7d5285f6f7d2cc8c7adc31671820e367c3d70",
  "version": 2,
  "size": 191,
  "vsize": 110,
  "weight": 437,
  "locktime": 0,
  "vin": [
    {
      "txid": "88fb1408675c774c36692f170c6122af49cab7a5bab336272f0e4d8c0ef8c89a",
      "vout": 0,
      "scriptSig": {
        "asm": "",
        "hex": ""
      },
      "txinwitness": [
        "3044022070b753c99e2b6d241fd8a4ebe26f53644ef3eea58fff743e51f7a69e1ee7a99602204cbe9a60fc25be24baa3e4613f27a3e51df4a285174e001b62de18e1da2e42fd01",
        "03494041191fd2b02579fd49877755e55d9d451a37c6d0bbe04aab8ef507a78b19"
      ],
      "sequence": 4194305
    }
  ],
  "vout": [
    {
      "value": 49.99100000,
      "n": 0,
      "scriptPubKey": {
        "asm": "0 18363770025baac1ebdb99a948eab2776d9568ae",
        "hex": "001418363770025baac1ebdb99a948eab2776d9568ae",
        "reqSigs": 1,
        "type": "witness_v0_keyhash",
        "addresses": [
          "bcrt1qrqmrwuqztw4vr67mnx55364jwake269w8hl5fe"
        ]
      }
    }
  ]
}

I get an error when I try to use sendrawtransaction:

error code: -26
error message:
non-BIP68-final (code 64)

And that’s correct, because my UTXO (88fb1408675c774c36692f170c6122af49cab7a5bab336272f0e4d8c0ef8c89a) comes from the block with height 2 (113 confirmations), whose median time is 1590276467 (2020-05-24 01:27:47 CET/CEST). My transaction is valid after 512 seconds (2020-05-24 01:41:23 CET/CEST), and the best block’s median time is 2020-05-24 01:28:06 CET/CEST (so that point hasn’t been reached yet).

Now, if I create another transaction whose input TXID comes from the block with height 1 (114 confirmations), it works; I can send it.
Below are my transaction and its details.

{
  "txid": "f98d3ef70ca2c9d797bf7fff2e96e07f5ba10a280186a2132c6a902eebcab31e",
  "hash": "5776ad750fe759411a1ed5aa10e759b33e52b2935555d90b997fa788c9e25405",
  "version": 2,
  "size": 191,
  "vsize": 110,
  "weight": 437,
  "locktime": 0,
  "vin": [
    {
      "txid": "e1ee4602a78ab4f5f58705a75405d2f223307989950e1cbe03a4518cf23b7914",
      "vout": 0,
      "scriptSig": {
        "asm": "",
        "hex": ""
      },
      "txinwitness": [
        "304402200641ef29e3f0c8ef0c55dbb4a86a752e3655dd0c928c4e2e6659d3beeaf3f3870220026821b9ba043894cb20c7d7a378eaae76f56400755bf412f8ab1524a9a238b301",
        "03494041191fd2b02579fd49877755e55d9d451a37c6d0bbe04aab8ef507a78b19"
      ],
      "sequence": 4194305
    }
  ],
  "vout": [
    {
      "value": 49.99100000,
      "n": 0,
      "scriptPubKey": {
        "asm": "0 18363770025baac1ebdb99a948eab2776d9568ae",
        "hex": "001418363770025baac1ebdb99a948eab2776d9568ae",
        "reqSigs": 1,
        "type": "witness_v0_keyhash",
        "addresses": [
          "bcrt1qrqmrwuqztw4vr67mnx55364jwake269w8hl5fe"
        ]
      }
    }
  ]
}

The median time of block 1 (114 confirmations) is 1590276486 (2020-05-24 01:28:06 CET/CEST), my transaction should be valid after 512 seconds (2020-05-24 01:45:05 CET/CEST), and my best block’s median time is 2020-05-24 01:28:06 CET/CEST.

Below are the details of the best block (height 114):

{
  "hash": "5f769f610f29057577611868a660b353bea06e51d94905f3dbf7fb93e60a3d30",
  "confirmations": 1,
  "strippedsize": 214,
  "size": 250,
  "weight": 892,
  "height": 114,
  "version": 536870912,
  "versionHex": "20000000",
  "merkleroot": "1699f1cebdde9f4da1211f948f2ec194f7820fcfbab80c93bb5ea486e6837be7",
  "tx": [
    "1699f1cebdde9f4da1211f948f2ec194f7820fcfbab80c93bb5ea486e6837be7"
  ],
  "time": 1590276487,
  "mediantime": 1590276486,
  "nonce": 0,
  "bits": "207fffff",
  "difficulty": 4.656542373906925e-10,
  "chainwork": "00000000000000000000000000000000000000000000000000000000000000e6",
  "nTx": 1,
  "previousblockhash": "36f373a49886a31eff05ff8de39722f589cff103bed47715dd697d14350539ce"
}

Now, I know the sequence (00000000010000000000000000000001) is checked against the median time of the UTXO’s block. But block 1 (114 confirmations) and block 2 (113 confirmations) are very similar and very close in time, and I don’t understand why I’m able to send the transaction that spends the output from block height 1 but not the one from block height 2.
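For reference, the sequence value above can be decoded with the BIP68 bit layout: bit 31 disables the relative lock, bit 22 selects time-based (512-second units) vs height-based, and the low 16 bits carry the value. A small sketch:

```python
SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31  # set -> no relative locktime
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22     # set -> time-based lock
SEQUENCE_LOCKTIME_MASK = 0xFFFF           # low 16 bits hold the value
SEQUENCE_LOCKTIME_GRANULARITY = 512       # seconds per time-based unit

def decode_bip68(sequence):
    """Decode a BIP68 relative-locktime sequence value."""
    if sequence & SEQUENCE_LOCKTIME_DISABLE_FLAG:
        return "relative locktime disabled"
    value = sequence & SEQUENCE_LOCKTIME_MASK
    if sequence & SEQUENCE_LOCKTIME_TYPE_FLAG:
        return f"time-based: {value * SEQUENCE_LOCKTIME_GRANULARITY} seconds"
    return f"height-based: {value} blocks"

print(decode_bip68(4194305))  # -> time-based: 512 seconds
```

So 4194305 = 2^22 + 1 encodes exactly the 1 × 512 = 512-second relative lock described above.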

performance tuning – Large difference in the time it takes to compute the transpose of a matrix

This is a simplified version of the program I’m working on. I’m working with some large vectors that I have to transpose at the end in order to get the result I’m looking for. I have two versions of the program, both giving the same result, but one taking way longer than the other:

n = 200;
m = 300000;
p = Table[RandomReal[{-1, 1}], m];

(* First program *)

AbsoluteTiming[Q = Reap[Do[Sow[({p, p}*{i/50, i^2/100}), 1];
   Sow[({p, p}*{i/50, i^2/100}) // Transpose, 2];, {i, n}]][[2]];]
AbsoluteTiming[v11 = Q[[1]]; v21 = Q[[2]] // Transpose;]

{1.79722, Null} (* Time it takes to compute the two vectors, Q[[1]] and Q[[2]] *)
{1.02865, Null} (* Mainly the time it takes to transpose Q[[2]], as setting v11 = Q[[1]] takes about 10^-6 seconds *)

(* Second program *)

v12 = v22 = Table[0, {i, n}];

AbsoluteTiming[Do[v12[[i]] = ({p, p}*{i/50, i^2/100});
  v22[[i]] = ({p, p}*{i/50, i^2/100}) // Transpose;, {i, n}]]
AbsoluteTiming[vec1 = v22 // Transpose;]

{1.78438, Null} (* Time it takes to compute the two vectors, v12 and v22 *)
{14.5686, Null} (* Time it takes to transpose v22 *)

As you can see, the computation time is the same in both programs, but there’s a huge difference when transposing the matrix at the end. When m is larger, the second program even crashes due to memory issues when trying to transpose the matrix, while the first one takes only a few seconds. When checking at the end, both vectors are identical:

v22 == Q[[2]]
vec1 == v21

True
True

How can there be such a huge difference in the time it takes to transpose two identical matrices?

reference request – Continuous time Markov chains and invariance principle

This question may be elementary for experts.

Let $\{\xi_n\}_{n=1}^{\infty}$ be a sequence of i.i.d. random variables on a probability space $(\Omega,\mathcal{F},P)$. We assume that the mean of $\xi_n$ is zero and the variance is $1$. For $n \in \mathbb{N}$ and $t \ge 0$, we set
\begin{align*}
X_n&=\sum_{k=1}^{n} \xi_k,\quad
Y_t=X_{\lfloor t \rfloor}+(t-\lfloor t \rfloor)\xi_{\lfloor t \rfloor+1}.
\end{align*}

Here, $\lfloor \cdot \rfloor$ denotes the floor function. By definition, $Y=\{Y_t\}_{t \ge 0}$ is the linear interpolation of the simple random walk $\{X_n\}_{n=1}^{\infty}$.
We define $Z_t^{(n)}=Y_{nt}/\sqrt{n}$. Then each $\{Z_{t}^{(n)}\}_{t \ge 0}$ induces a probability measure on $C([0,\infty))$, the space of continuous functions on $[0,\infty)$. We denote this probability measure by $P_n$. Donsker’s invariance principle states that $\{P_n\}_{n=1}^{\infty}$ converges to the Wiener measure.

My question

We write $n^{-1}\mathbb{Z}=\{\cdots,-2/n,-1/n,0,1/n,2/n,\cdots\}$. Let $S^{(n)}=\{S_t^{(n)}\}_{t \ge 0}$ be a symmetric (continuous-time) Markov chain on $n^{-1}\mathbb{Z}$. My question is the following:

Are there “Donsker’s invariance principles” for $\{S^{(n)}\}_{n=1}^{\infty}$ (or suitably scaled $\{S^{(n)}\}_{n=1}^{\infty}$)?

Because $\{S^{(n)}\}_{n=1}^{\infty}$ are continuous-time Markov chains, there is no interpolated process like $Y$.
Although the state space of each $S^{(n)}$ is $n^{-1}\mathbb{Z}$, it should be regarded as a jump process on $\mathbb{R}$. Then each $S^{(n)}$ induces a probability measure on $D([0,\infty))$, the space of right-continuous functions on $[0,\infty)$ with finite left limits. The Wiener measure can also be regarded as a probability measure on $D([0,\infty))$.

Please let me know if you have any preceding results.

time complexity – Examples of higher order algorithms ($\mathcal{O}(n^4)$ or larger)

In most computer science curricula, students only get to see algorithms that run in fairly low time complexity classes. For example, these generally are:

  1. Constant time $\mathcal{O}(1)$: e.g. the sum of the first $n$ numbers
  2. Logarithmic time $\mathcal{O}(\log n)$: e.g. binary search on a sorted list
  3. Linear time $\mathcal{O}(n)$: e.g. searching an unsorted list
  4. Log-linear time $\mathcal{O}(n \log n)$: e.g. merge sort
  5. Quadratic time $\mathcal{O}(n^2)$: e.g. bubble/insertion/selection sort
  6. (Rarely) Cubic time $\mathcal{O}(n^3)$: e.g. Gaussian elimination of a matrix
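One caveat on item 1: summing the first $n$ numbers is constant time only via Gauss's closed form $n(n+1)/2$; the obvious loop is linear. A minimal sketch contrasting the two:

```python
def sum_first_n_loop(n):
    # Linear time O(n): add 1..n one term at a time.
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_first_n_closed(n):
    # Constant time O(1): Gauss's closed form n(n+1)/2.
    return n * (n + 1) // 2

assert sum_first_n_loop(100) == sum_first_n_closed(100) == 5050
```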

However, it can be shown that
$$
\mathcal{O}(1)\subset \mathcal{O}(\log n)\subset \ldots \subset \mathcal{O}(n^3)\subset \mathcal{O}(n^4)\subset\mathcal{O}(n^5)\subset\ldots\subset \mathcal{O}(n^k)\subset\ldots
$$

so one would expect there to be more well-known problems in higher-order time complexity classes, such as $\mathcal{O}(n^8)$.

What are some examples of algorithms that fall into these classes $\mathcal{O}(n^k)$, where $k \geq 4$?

partition – Time Machine on slow drive to be ported to new drive

I have an old multi-partitioned external HDD, with one partition used for Time Machine.

I want to move this Time Machine partition to a new SSD.

How do I go about copying a “partition”? Unix dd? Or rsync (I don’t believe this would work, because of the hard links and symlinks that Time Machine uses)?

8 – Changing an existing date time field

I am working on an already existing site that has content.

There is a content type with a date-and-time field. I need to remove the time component without deleting any content along the way; if the time data is lost, that is fine.

Can I simply delete the field and recreate it as date-only? Or will that delete the existing data?

Can I achieve this with config manager?
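For reference, the date-vs-datetime choice lives in the field storage configuration, so a config-manager route would mean editing the storage export. A hypothetical export (the field name here is invented) might look like:

```yaml
# field.storage.node.field_event_date.yml (hypothetical field name)
id: node.field_event_date
field_name: field_event_date
entity_type: node
type: datetime
settings:
  datetime_type: date   # 'datetime' stores date and time; 'date' stores date only
```

Note that Drupal generally refuses storage-setting changes on a field that already has data, which is why the delete-and-recreate question matters here.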

performance – What can cause higher CPU time and duration for a given set of queries in traces run on two separate environments?

I’m troubleshooting a performance issue in a SQL Server DR environment for a customer. They are running queries that consistently take longer in their environment than in our QA environment. We analyzed traces that were captured in both environments with the same parameters/filters, the same version of SQL Server (2016 SP2), and the exact same database. Both environments were picking the same execution plan(s) for the queries in question, and the number of reads/writes was close in both environments; however, the total duration of the process in question and the CPU time logged in the trace were significantly higher in the customer environment. Duration of all processes in our QA environment was around 18 seconds while the customer’s was over 80 seconds, and our CPU time was close to 10 seconds while theirs was also over 80 seconds. Also worth mentioning, both environments are currently configured with MAXDOP 1.

The customer has less memory (~100GB vs 120GB) and slower disks (10k HDD vs SSD) than our QA environment, but more CPUs. Both environments are dedicated to this activity and should have little to no external load that wouldn’t match. I don’t have all the details on the CPU architecture they are using; I’m waiting for that information now. The customer has confirmed they have excluded SQL Server and the data/log files from their virus scanning. Obviously there could be a ton of issues in the hardware configuration.

I’m currently waiting to see a recent snapshot of their wait stats and system DMVs; the data we originally received didn’t appear to show any major CPU, memory, or disk latency pressure. I recently asked them to check whether the Windows power setting was in performance or balanced mode, though I’m not certain whether CPU throttling alone would have the impact we’re seeing.

My question is: what factors can affect CPU time and ultimately total duration? Is CPU time, as shown in a SQL trace, based primarily on the speed of the processors, or are there other factors I should be taking into consideration? The fact that both are generating the same query plans, with all other things being as close to equal as possible, makes me think it’s related to the hardware SQL Server is installed on.