algorithms – Time complexity of predecessor search in a dictionary implemented as a sorted array

I'm reading "The Algorithm Design Manual" by Steven Skiena. On page 73, he discusses the time complexity of the operations $$Predecessor(D, k)$$ and $$Successor(D, k)$$ and states that they take $$O(1)$$ time.

If the data structure looks something like

``````
((k0, x), (k1, x), ...)
``````

where the keys `k0` through `kn` are sorted, then given `k`, I thought `successor(D, k)` should first binary-search the sorted array for `k` ($$O(\log n)$$) and then retrieve the successor ($$O(1)$$, since the array is sorted). The overall time complexity should therefore be $$O(\log n)$$, not $$O(1)$$ as stated in the book. The same should apply to `predecessor(D, k)`.

For an unsorted array, the time complexity of predecessor and successor stays at $$O(n)$$, since searching an unsorted array also takes $$O(n)$$.
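The reasoning above can be sketched directly (my own minimal illustration, not from the book; keys assumed distinct and ascending, and `keys` is rebuilt per call only for brevity):

```python
import bisect

# Dictionary as a sorted array of (key, value) pairs.
D = [(1, 'a'), (3, 'b'), (7, 'c'), (9, 'd')]

def successor(d, k):
    """Pair with the smallest key > k, or None: O(log n) locate + O(1) neighbor lookup."""
    keys = [key for key, _ in d]       # in practice, store the keys separately
    i = bisect.bisect_right(keys, k)   # binary search: O(log n)
    return d[i] if i < len(d) else None

def predecessor(d, k):
    """Pair with the largest key < k, or None."""
    keys = [key for key, _ in d]
    i = bisect.bisect_left(keys, k)    # binary search: O(log n)
    return d[i - 1] if i > 0 else None

print(successor(D, 3), predecessor(D, 3))
```

Note that if the book's operations are taken to receive (a pointer to) an element already located in the array, rather than a bare key, the neighbor lookup alone is indeed $$O(1)$$; including the locate step gives the $$O(\log n)$$ derived above.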

Did I misunderstand something?

time complexity – Description logics with decision problems in NP

Is there a description logic in which the important decision problems (for example, knowledge base consistency or concept satisfiability) are in NP with respect to their time complexity?

The well-known $$\mathcal{ALC}$$ family of description logics will not work, because even for $$\mathcal{ALC}$$ these problems are PSpace-complete. The $$\mathcal{EL}$$ family will not work either, since there the problems are at least coNP-hard.

Some restrictions may work, though I have not found one so far.

bitcoin core – Rules and timing for including transactions in a block

I have a question about block solving and which transactions get included in the next block.

Suppose, theoretically, that a newly solved block has just been propagated.

And now there are:
tx0 and tx1 – not included in the propagated block, with transaction times prior to the solved block.

tx2 – which propagated 10 seconds after the previous block was solved.

tx3 – which propagated 60 seconds after the previous block was solved.

Do miners take only tx0 and tx1, build the block hash, and start mining (once they have the hash), or (I guess this is not the case) do they also add tx2 and tx3, which propagated within this "10-minute" window?

Presumably they would need to start all over again, but that would mean that when a miner receives information about a new block solved by another miner, it stops working, takes all the transactions from the block it was trying to solve plus any mempool transactions (tx0 and tx1 in my example), checks which transactions are already in the solved block, removes those, rebuilds the hash, and starts solving again. Is that correct?

Basically, my question is this: does the next block include only transactions propagated BEFORE the current block was solved, or also those propagated AFTER miners started searching for the current block (within this "10-minute" window)? Or are there other rules? I tried to find the answer in the docs without success (if someone knows a source, I would appreciate it).

Best regards

time complexity – Average-case analysis of linear search

Based on question 2.2 from CLRS:

Consider linear search again. How many elements of the input sequence need to be checked on average, assuming that the element being searched for is equally likely to be any element of the array? How about the worst case? What are the average-case and worst-case running times of linear search in $$\Theta$$-notation? Justify your answers.

I have a question regarding the average case.

"assuming that the element being searched for is equally likely to be any element of the array" – What does that mean exactly? Does it mean that the probability of the element being at each position is $$\frac{1}{n}$$?

Or maybe the element is not in the array at all – is that a case with the same probability as the rest (so that they are all $$\frac{1}{n+1}$$)?

In the average case, what about duplicates? Are we assuming that there are none?
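Under the first interpretation (the target is in the array, equally likely at each of the $$n$$ positions, no duplicates), the expected number of checks works out to $$(n+1)/2$$; a quick sketch (my own illustration, not from the book):

```python
def comparisons(arr, target):
    """Number of elements examined by linear search (len(arr) if target is absent)."""
    for count, x in enumerate(arr, start=1):
        if x == target:
            return count
    return len(arr)

n = 100
arr = list(range(n))                       # distinct elements, no duplicates
avg = sum(comparisons(arr, t) for t in arr) / n
print(avg)                                 # (1 + 2 + ... + n) / n = (n + 1) / 2 = 50.5
```

If the "not present" case is added as an equally likely outcome, each of the $$n+1$$ outcomes gets probability $$\frac{1}{n+1}$$ and the average shifts accordingly, but it stays $$\Theta(n)$$ either way.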

A good explanation of what exactly the average case means here would be helpful.

Thank you so much!!!

postgresql – How to improve the performance of an SQL query on PostgreSQL 9.6

I'm using PostgreSQL 9.6 on a 4-core Ubuntu machine with 16 GB of RAM and an SSD, with the following settings:

``````max_connections = 200
shared_buffers = 6GB
work_mem = 256MB
maintenance_work_mem = 1GB
``````

How can I improve the performance of this SQL query?

``````
WITH ss AS (
    SELECT time_noise, track, slow, distance, last_time
    FROM eco.noise
    WHERE (track, time_noise, slow) IN (
        SELECT DISTINCT ON (track)
            track, time_noise, slow
        FROM eco.noise
        WHERE base_name = 'B001'
          AND slow >= 50 AND slow <= 90
          AND distance <= 10000
          AND time_noise >= '06-01-2019' AND time_noise <= '07-01-2019'
        ORDER BY track, slow DESC
    )
    ORDER BY time_noise
)
SELECT
    COALESCE(to_char(ss.time_noise, 'YYYY-MM-DD HH24:MI:SS'), '') AS time_noise,
    ss.slow, ss.track,
    COALESCE(to_char(ss.last_time, 'YYYY-MM-DD HH24:MI:SS'), '') AS last_time,
    ss.distance,
    eco.tracks.callsign, eco.tracks.altitude, eco.tracks.speed, eco.tracks.angle,
    eco.tracks.latitude, eco.tracks.longitude, eco.tracks.vertical_speed,
    eco.routes.from, eco.routes.to
FROM ss
LEFT JOIN eco.tracks ON ss.track = eco.tracks.track AND eco.tracks.time_track = ss.last_time
LEFT JOIN eco.routes ON eco.tracks.callsign = eco.routes.callsign
ORDER BY ss.time_noise ASC;
``````

I use the following indexes:

``````    postgres | eco | eco.noise | test_unique_noise_3 | f | f | btree | 3 8 26 2 | {base_name, slow, distance, time_noise} | f | F
postgres | eco | eco.tracks | tracks_time_track_track_key | t | f | btree | 2 3 | {time_track, track} | f | F
postgres | eco | eco.routes | pr_routes | t | t | btree | 1 | {callsign} | f | F
postgres | eco | eco.noise | test_unique_noise | f | f | btree | 25 2 8 | {track, time_noise, slow} | f | F
``````

EXPLAIN ANALYZE:

``````
Sort  (cost=3739384.97..3742913.97 rows=1411600 width=131) (actual time=2377.640..2377.805 rows=3792 loops=1)
  Sort Key: ss.time_noise
  Sort Method: quicksort  Memory: 1087kB
  CTE ss
    ->  Sort  (cost=525472.51..529001.51 rows=1411600 width=32) (actual time=2161.522..2162.140 rows=3792 loops=1)
          Sort Key: noise.time_noise
          Sort Method: quicksort  Memory: 393kB
          ->  Nested Loop  (cost=369723.59..381285.34 rows=1411600 width=32) (actual time=2026.531..2160.723 rows=3792 loops=1)
                ->  Unique  (cost=369723.02..370160.00 rows=3978 width=20) (actual time=2026.498..2142.201 rows=3791 loops=1)
                      ->  Sort  (cost=369723.02..369941.51 rows=87396 width=20) (actual time=2026.497..2117.520 rows=568059 loops=1)
                            Sort Key: noise_1.track, noise_1.slow DESC
                            Sort Method: quicksort  Memory: 68956kB
                            ->  Index Scan using test_unique_noise_3 on noise noise_1  (cost=0.56..362549.87 rows=87396 width=20) (actual time=0.039..1751.161 rows=568059 loops=1)
                                  Index Cond: ((base_name = 'B001'::text) AND (slow >= '50'::double precision) AND (slow <= '90'::double precision) AND (distance <= 10000) AND (time_noise >= '2019-06-01 00:00:00'::timestamp without time zone) AND (time_noise <= '2019-07-01 00:00:00'::timestamp without time zone))
                ->  Index Scan using test_unique_noise on noise  (cost=0.56..2.78 rows=1 width=32) (actual time=0.004..0.004 rows=1 loops=3791)
                      Index Cond: ((track = noise_1.track) AND (time_noise = noise_1.time_noise) AND (slow = noise_1.slow))
  ->  Hash Left Join  (cost=17792.76..3066196.28 rows=1411600 width=131) (actual time=2351.691..2376.080 rows=3792 loops=1)
        Hash Cond: (tracks.callsign = routes.callsign)
        ->  Nested Loop Left Join  (cost=0.56..3021978.00 rows=1411600 width=67) (actual time=2161.540..2179.800 rows=3792 loops=1)
              ->  CTE Scan on ss  (cost=0.00..28232.00 rows=1411600 width=32) (actual time=2161.524..2162.558 rows=3792 loops=1)
              ->  Index Scan using tracks_time_track_track_key on tracks  (cost=0.56..2.11 rows=1 width=51) (actual time=0.004..0.004 rows=1 loops=3792)
                    Index Cond: ((time_track = ss.last_time) AND (ss.track = track))
        ->  Hash  (cost=10017.64..10017.64 rows=621964 width=15) (actual time=189.279..189.279 rows=610992 loops=1)
              Buckets: 1048576  Batches: 1  Memory Usage: 37585kB
              ->  Seq Scan on routes  (cost=0.00..10017.64 rows=621964 width=15) (actual time=0.017..65.022 rows=610992 loops=1)
Planning time: 0.646 ms
Execution time: 2380.094 ms
``````

As I understand it, the bottleneck is the CTE (because of the `WITH`), so I tried using a subquery instead, but got a similar result:

``````
SELECT
    sub.time_noise AS time_noise, sub.slow, sub.track, sub.last_time AS last_time,
    sub.temperature, sub.humadity, sub.presure, sub.wind, sub.distance,
    eco.tracks.callsign, eco.tracks.altitude, eco.tracks.speed, eco.tracks.angle,
    eco.tracks.latitude, eco.tracks.longitude, eco.tracks.vertical_speed,
    eco.routes.from, eco.routes.to
FROM (
    SELECT time_noise, track, slow, distance, last_time
    FROM eco.noise
    WHERE (track, time_noise, slow) IN (
        SELECT DISTINCT ON (track)
            track, time_noise, slow
        FROM eco.noise
        WHERE base_name = 'B001'
          AND slow >= 50 AND slow <= 100
          AND distance <= 10000
          AND time_noise >= '01-01-2019' AND time_noise <= '07-01-2019'
        ORDER BY track, slow DESC
    )
    ORDER BY time_noise
) AS sub
LEFT JOIN eco.tracks ON sub.track = eco.tracks.track AND eco.tracks.time_track = sub.last_time
LEFT JOIN eco.routes ON eco.tracks.callsign = eco.routes.callsign
ORDER BY sub.time_noise ASC;
``````

EXPLAIN ANALYZE:

``````
Nested Loop Left Join  (cost=1066367.20..4077589.03 rows=1411652 width=83) (actual time=3943.936..3992.130 rows=9262 loops=1)
  ->  Sort  (cost=1066366.64..1069895.77 rows=1411652 width=48) (actual time=3943.917..3945.328 rows=9262 loops=1)
        Sort Key: noise.time_noise
        Sort Method: quicksort  Memory: 1108kB
        ->  Nested Loop  (cost=907649.38..922173.77 rows=1411652 width=48) (actual time=3582.100..3941.579 rows=9262 loops=1)
              ->  Unique  (cost=907648.81..911048.44 rows=3978 width=20) (actual time=3582.059..3891.212 rows=9253 loops=1)
                    ->  Sort  (cost=907648.81..909348.63 rows=679925 width=20) (actual time=3582.058..3826.172 rows=1450072 loops=1)
                          Sort Key: noise_1.track, noise_1.slow DESC
                          Sort Method: quicksort  Memory: 162439kB
                          ->  Index Scan using test_unique_noise_3 on noise noise_1  (cost=0.56..841781.02 rows=679925 width=20) (actual time=0.043..2864.183 rows=1450072 loops=1)
                                Index Cond: ((base_name = 'B001'::text) AND (slow >= '50'::double precision) AND (slow <= '100'::double precision) AND (distance <= 10000) AND (time_noise >= '2019-01-01 00:00:00'::timestamp without time zone) AND (time_noise <= '2019-07-01 00:00:00'::timestamp without time zone))
              ->  Index Scan using test_unique_noise on noise  (cost=0.56..2.78 rows=1 width=48) (actual time=0.005..0.005 rows=1 loops=9253)
                    Index Cond: ((track = noise_1.track) AND (time_noise = noise_1.time_noise) AND (slow = noise_1.slow))
  ->  Index Scan using tracks_time_track_track_key on tracks  (cost=0.56..2.11 rows=1 width=51) (actual time=0.005..0.005 rows=1 loops=9262)
        Index Cond: ((time_track = noise.last_time) AND (noise.track = track))
Planning time: 0.782 ms
Execution time: 3994.348 ms
``````

Should I use PL/pgSQL, for example?

time complexity – How to find the i-th root of n with the smallest remainder?

Given a number n, what is the fastest algorithm to express it as base^exponent + rem such that rem is as small as possible, where the base is limited to the range from 2 up to the largest number representable by an unsigned int?

I am currently using the GMP bignum library in C, and my algorithm is as follows:

First, it finds the smallest and the largest possible bases allowed by the constraints (binary searches are performed), from which we know the range of i.

Then, for each possible i, we take the i-th root of n and compute the corresponding remainder, keeping track of the smallest remainder seen so far.

The algorithm works, but it is far too slow (even using a library as powerful as GMP). For example, for a number of about 6 megabits (expressed in binary), the algorithm takes about 90 days.

Intuitively, I think there is a better way to solve the problem (that is, a faster algorithm, one with a smaller time complexity). What can be done to speed this up?

Obs: Problem: given n, find i such that the remainder, n – floor(iThRootOf(n))^i, is the smallest possible, with 2 <= floor(iThRootOf(n)) <= 2147483647.
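A direct sketch of the brute-force scan described above (my own Python illustration of the same approach, with a hypothetical `iroot` helper; a real C implementation would use GMP's `mpz_root` instead):

```python
def iroot(n, i):
    """Floor of the i-th root of n, by binary search on the answer."""
    lo, hi = 1, 1 << (n.bit_length() // i + 1)   # hi is guaranteed >= floor(n**(1/i))
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** i <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def best_power(n, max_base=2147483647):
    """Scan every exponent i >= 2 with 2**i <= n; return (base, exponent, remainder)
    minimizing the remainder n - base**i, subject to 2 <= base <= max_base."""
    best = None
    i = 2                          # assuming exponent 1 is excluded as trivial
    while (1 << i) <= n:           # base >= 2 requires 2**i <= n
        b = iroot(n, i)
        if 2 <= b <= max_base:
            rem = n - b ** i
            if best is None or rem < best[2]:
                best = (b, i, rem)
        i += 1
    return best

print(best_power(1000))   # (10, 3, 0) since 10**3 + 0 = 1000
```

The exponent only ranges up to log2(n), so the cost is dominated by the per-exponent root extraction on huge operands, which is where the 90 days go.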

What is the time complexity of the code below?

Let $$V(0) = 12$$ and let $$V(j)$$ be the value of the variable $$i$$ right after the $$j$$-th iteration, for $$j \ge 1$$. Then:

$$\begin{align} V(j) &= \frac{7}{3} \cdot V(j-1)^5 \\ &= \left(\frac{7}{3}\right)^{1+5} \cdot V(j-2)^{25} \\ &= \left(\frac{7}{3}\right)^{1+5+25} \cdot V(j-3)^{125} \\ &= \dots \\ &= \left(\frac{7}{3}\right)^{\sum_{k=0}^{j-1} 5^k} \cdot V(0)^{5^j} \\ &= \left(\frac{7}{3}\right)^{\frac{5^j - 1}{4}} \cdot 12^{5^j} \end{align}$$
(I hand-waved a little where the dots are used. If you want a formal proof, you can use induction on $$j$$.)

For your algorithm to stop, you need $$V(j) \ge n$$. When does this happen?

Note that $$V(j)$$ is monotonically increasing.
Then you need more than $$s = \lfloor \log_{5} (\log_{12}(n) / 2) \rfloor$$ iterations, since $$\left(\frac{7}{3}\right)^{\frac{5^s - 1}{4}} < 12^{5^s}$$ gives:
$$V(s) = \left(\frac{7}{3}\right)^{\frac{5^s - 1}{4}} \cdot 12^{5^s} < 12^{2 \cdot 5^s} \le n,$$
and at most $$S = \lceil \log_{5} \log_{12} n \rceil$$ iterations, since:
$$V(S) = \left(\frac{7}{3}\right)^{\frac{5^S - 1}{4}} \cdot 12^{5^S} > 12^{5^S} \ge n.$$

Each iteration requires constant time (assuming each arithmetic operation can be performed in $$O(1)$$ time, which may or may not be reasonable if you are dealing with huge values). Therefore, the time complexity is $$\Theta(\log \log n)$$.
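The code in question is not shown above, but assuming it is the loop implied by the recurrence ($$i = 12$$, then $$i \leftarrow \frac{7}{3} i^5$$ until $$i \ge n$$), the iteration count can be checked against the bounds empirically (a sketch under that assumption, using exact integer arithmetic):

```python
import math

def iterations(n):
    """Run i = 12; while i < n: i = 7 * i**5 // 3 and count the iterations."""
    i, count = 12, 0
    while i < n:
        i = 7 * i ** 5 // 3    # exact here, since 12 and its iterates are divisible by 3
        count += 1
    return count

n = 10 ** 100
print(iterations(n))
# Bounds from the analysis: more than floor(log5(log12(n)/2)), at most ceil(log5(log12(n)))
print(math.floor(math.log(math.log(n, 12) / 2, 5)),
      math.ceil(math.log(math.log(n, 12), 5)))
```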

pathfinder – How does the oracle's Temporal Celerity revelation work when there is no surprise round?

An oracle with the Time mystery can choose the Temporal Celerity revelation:

Temporal Celerity (Su): Whenever you roll for initiative, you can roll twice and take either result. At 7th level, you can always act in the surprise round, but if you fail to notice the ambush, you act last, regardless of your initiative result (you act in the normal order in following rounds). At 11th level, you can roll for initiative three times and take any one of the results.

What happens if the party enters a fight where there is no surprise round? Does this effect trigger or not?

What is the time complexity of image feature extraction algorithms, including HS, HOG, MSER and SIFT?

Can you help me by giving the time complexity of well-known image feature extraction algorithms? I am particularly interested in Harris-Stephens (HS) corner detection, maximally stable extremal regions (MSER), the histogram of oriented gradients (HOG), and the scale-invariant feature transform (SIFT). I've tried to find them in books and online, but without success so far.

search – Rabin-Karp – what input gives the worst-case time complexity?

I'm trying to determine Rabin-Karp's worst-case input, regardless of the hash function used. However, I see both of these answers on the Internet:

• String "AAAAAAAA" and pattern "AAA"
• String "AAAAAAAB" and pattern "AAB"

Which of these inputs would give Rabin-Karp its worst running time? Thank you!
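The distinction between the two candidates is whether every window forces a full character-by-character verification. A sketch (my own illustration with an ordinary polynomial rolling hash; base and modulus are arbitrary choices) that counts the verification work for both:

```python
def rk_comparisons(text, pattern, base=256, mod=101):
    """Rabin-Karp; returns (matches, char_comparisons).
    Characters are compared only when a window's hash equals the pattern's hash."""
    n, m = len(text), len(pattern)
    h = pow(base, m - 1, mod)                      # weight of the leading character
    ph = th = 0
    for i in range(m):                             # initial hashes of pattern and first window
        ph = (ph * base + ord(pattern[i])) % mod
        th = (th * base + ord(text[i])) % mod
    matches = comparisons = 0
    for i in range(n - m + 1):
        if th == ph:                               # hash hit: verify character by character
            for j in range(m):
                comparisons += 1
                if text[i + j] != pattern[j]:
                    break
            else:
                matches += 1
        if i < n - m:                              # roll the hash to the next window
            th = ((th - ord(text[i]) * h) * base + ord(text[i + m])) % mod
    return matches, comparisons

print(rk_comparisons("AAAAAAAA", "AAA"))   # every window is a true match: (6, 18)
print(rk_comparisons("AAAAAAAB", "AAB"))   # only the last window's hash matches: (1, 3)
```

With text $$A^n$$ and pattern $$A^m$$, every one of the $$n - m + 1$$ windows is a genuine match, so the full $$m$$-character verification runs everywhere and no hash function can avoid the $$\Theta(nm)$$ cost; with `AAAAAAAB`/`AAB`, any reasonable hash rejects the non-matching windows, leaving near-linear work. So the first input is the canonical worst case.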