Confused about calculating the shortest path using Dijkstra's algorithm

[Image: the weighted graph for this exercise]

I have to find the shortest paths using Dijkstra's algorithm. After performing the calculations, are the following values correct?

A = 0
B = 13
C = 8
D = 20
E = 14
F = 25
G = 31
H = 38
I = 46
J = 52
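
Since the graph image is not included here, the values cannot be checked directly, but here is a minimal Python sketch of Dijkstra's algorithm (standard heapq version) that the edges can be fed into to verify the table; the adjacency list below is a made-up placeholder, not the pictured graph:

import heapq

def dijkstra(adj, source):
    # adj: {node: [(neighbor, weight), ...]}
    dist = {v: float("inf") for v in adj}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry; a shorter path was already found
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Hypothetical placeholder -- replace with the edges from the image.
adj = {
    "A": [("B", 13), ("C", 8)],
    "B": [("D", 7)],
    "C": [("E", 6)],
    "D": [],
    "E": [],
}
print(dijkstra(adj, "A"))  # -> {'A': 0, 'B': 13, 'C': 8, 'D': 20, 'E': 14}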

Where can I find the mathematical proof that the Earliest Finish Time greedy algorithm for interval scheduling is optimal, as opposed to the Earliest Start Time or greedy-by-duration strategies?

I tried searching the internet but did not find much.
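
The standard proof is a "greedy stays ahead" exchange argument: the k-th interval chosen by Earliest Finish Time finishes no later than the k-th interval of any optimal schedule, so each greedy choice can be exchanged into an optimal solution. Below is a minimal Python sketch of the EFT greedy, together with made-up inputs on which the other two strategies would pick fewer intervals:

def eft_schedule(intervals):
    # Greedy by earliest finish time: provably optimal for interval scheduling.
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:  # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Earliest Start Time fails here: (0, 10) starts first but blocks the rest.
print(eft_schedule([(0, 10), (1, 2), (3, 4)]))  # -> [(1, 2), (3, 4)]

# Greedy by duration fails here: the short (4, 6) overlaps both long intervals.
print(eft_schedule([(0, 5), (4, 6), (5, 10)]))  # -> [(0, 5), (5, 10)]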

algorithm – Using HLL_COUNT.MERGE outside SQL

I can use the following query to build the HLL sketches for the different categories:

SELECT category, COUNT(DISTINCT city), HLL_COUNT.INIT(city)
FROM `table`
GROUP BY category

And I get something like this:

[Image: per-category sketch results]

Normally I would then use the HLL_COUNT.MERGE(...) function to get the total count, for example:

SELECT 'all - hll', HLL_COUNT.MERGE(x), NULL
FROM (
  SELECT category, COUNT(DISTINCT city), HLL_COUNT.INIT(city) x
  FROM `datadocs-163219.010ff92f6a62438aa47471f98fc9.inv`
  GROUP BY category
) _

[Image: merged sketch result]

For various reasons, I have to do the MERGE outside of SQL/BigQuery. Is there some kind of open-source library with which I could do something like this:

>>> hll_set
{'CHAQMBgCIAuCBz8QFBgPIBQyN8hxlqEBvMMBnLMBgWnD5gTB3AH+ROgD/YMEpM8Jr70C6Q2LwwfZlQ3QMNu8AYDSBKf7AbOSqgE=',
 'CHAQDhgCIAuCBxwQBxgPIBQyFP3PBMBtibMR3sgC77oViasKwfMF',
 'CHAQJxgCIAuCBzIQEBgPIBQyKshxlqEBvMMBzfECh6gJxJABoNwF/rEGwf0PgYYFvOoFmzjJPZwg2y3nbw==',
 'CHAQBBgCIAuCBw4QAhgPIBQyBpSJAfapKA==',
 'CHAQBRgCIAuCBxEQAxgPIBQyCbaJBfqsH57tBw==',
 'CHAQGBgCIAuCBykQDRgPIBQyId6SAtNvwJ0XgO8Ct/EFlvUOskG1E87ZA7/OApwg2y3nbw==',
 'CHAQZhgCIAuCB2MQIxgPIBQyW5SJAcqJAbzDAcvcAoIV2xSMFsTyA42IAYkl+WVJ/AHqdJxRlEGbywG/WNjoAqS9BP3CAuPrBNSFAfdDt++YEoeIBrICmIYF6CL/MaLNAqKdA8k9rxntBrPVrAE=',
 'CHAQEBgCIAuCByQQChgPIBQyHN6SAqjtArAJ/esCj9wSg+8KiVKNygHrpgXIogU=',
 'CHAQpgkYAiALggfZAhChARgPIBQyzwKPBMwRkAzxP++wPogyqC8qJAeBo8BHsSOypAbAJriLMYYR/1jnKqIyzR3wJIkI/QXkecNH7WCzQZgMuDvxFLhxkboA7QB12akDhu5E++4+++3KgBjAZ4nxLBRMw0xRWvIPZYsztv1gnz2a0BZoF4wzQggHqOewsJeAxgguGErUCjGG3KuhKgUyfCtItkjOMZZwCpi3phgHlAwRknEhwiq1Os4slgmhELEWl1f1rgH+B6e4AdCtAdkE4R7fK/gihHSRFqipAbYY9BmqP5oBgqsBvhrvEKGRAcpj7XHEVaAUrY8BylLRDgWn1wGpT6IS6irPHewb/AbKHqgQjQPyAeU82zuSHpgQ04UBzwqkFIADiBD4X6ABjBihFsIy6wmovgHNKssPsQOvGcADrQOQevMQvxKMBtANizqbP7l21+kB0UDxy92rVYCBMcD5HC'}

>>> hll_merge_method(hll_set)
193

Is it possible to do this in some way, using a library outside BigQuery, with the sketches it generates?
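
Conceptually, merging HLL sketches is just a register-wise maximum, so the merge step itself is easy to reproduce; the hard part is decoding BigQuery's serialized HLL++ protos (the base64 strings above). As far as I know, Google's open-source ZetaSketch library (JVM) can parse and merge exactly these sketches, so one option is a small JVM helper. Below is only a minimal Python sketch of the merge step on already-decoded register arrays; the 8-register inputs are made up, not real BigQuery data:

def merge_hll_registers(register_lists):
    # An HLL merge is the element-wise maximum across the register arrays.
    merged = list(register_lists[0])
    for regs in register_lists[1:]:
        merged = [max(a, b) for a, b in zip(merged, regs)]
    return merged

# Toy example with two made-up 8-register sketches:
a = [0, 3, 1, 0, 2, 5, 0, 1]
b = [1, 2, 4, 0, 0, 5, 2, 0]
print(merge_hll_registers([a, b]))  # -> [1, 3, 4, 0, 2, 5, 2, 1]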

graphs – Optimization of De Boor's algorithm

According to the de Boor algorithm, a B-spline basis function can be evaluated using the formula:

$$ B_{i,0}(x) =
\left\{ \begin{array}{ll}
1 & \mbox{if } t_i \le x < t_{i+1} \\
0 & \mbox{otherwise}
\end{array}
\right.
$$

$$
B_{i,p}(x) = \frac{x - t_i}{t_{i+p} - t_i} B_{i,p-1}(x) +
\frac{t_{i+p+1} - x}{t_{i+p+1} - t_{i+1}} B_{i+1,p-1}(x)
$$

where the function $B$ is defined for $n$ control points and a curve of degree $d$. The domain $t$ is divided by $n + d + 1$ points called knots (forming the knot vector). To evaluate this, we can define a recursive function $B(i, p)$.
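
For reference, a direct (unoptimized) Python transcription of that recursion might look like this minimal sketch, where knots is the knot vector and a vanishing denominator is treated as contributing 0, per the usual convention:

def bspline_basis(i, p, x, knots):
    # Base case: a degree-0 basis function is an indicator function.
    if p == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    # Cox-de Boor recursion; a zero denominator means that term is 0.
    left_den = knots[i + p] - knots[i]
    right_den = knots[i + p + 1] - knots[i + 1]
    left = (x - knots[i]) / left_den * bspline_basis(i, p - 1, x, knots) if left_den else 0.0
    right = (knots[i + p + 1] - x) / right_den * bspline_basis(i + 1, p - 1, x, knots) if right_den else 0.0
    return left + right

# Example: the degree-1 setup used below; B_{1,1} peaks at x = 0.5.
print(bspline_basis(1, 1, 0.5, [0, 0.25, 0.5, 0.75, 1]))  # -> 1.0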

The B-spline itself is represented by:
$S(x) = \sum_i c_i B_{i,p}(x)$.

To evaluate this, Wikipedia's algorithm tells us to take the $p + 1$ control points from $c_{k-p}$ to $c_k$ and then repeatedly take the weighted average of each consecutive pair, eventually reducing them to one point.


I find this algorithm perfectly fine for one or two evaluations. However, when we draw a curve, we take hundreds of points along the curve and connect them to make it look smooth. The recursive formula still requires up to $(p-1) + (p-2) + (p-3) + \dots$ calculations (to take the weighted averages), no?

In my investigation, however, we only ever need to evaluate one polynomial, because the B-spline is ultimately composed of $p + d + 1$ polynomial pieces (as I will show).

Suppose we take the knot vector $[0, .25, .5, .75, 1]$ and control points $[0, 1, 0]$ (degree $1$); we can then write the basis polynomials, scaled by their control points, as:

$$ c_0 B_{0,1} = \left\{ \begin{array}{ll} 0 & \mbox{if } 0 \le x < .25 \\ 0 & \mbox{if } .25 \le x < .5 \end{array} \right. $$
$$ c_1 B_{1,1} = \left\{ \begin{array}{ll} 4x - 1 & \mbox{if } .25 \le x < .5 \\ -4x + 3 & \mbox{if } .5 \le x < .75 \end{array} \right. $$
$$ c_2 B_{2,1} = \left\{ \begin{array}{ll} 0 & \mbox{if } .5 \le x < .75 \\ 0 & \mbox{if } .75 \le x < 1 \end{array} \right. $$

Flattening these yields:
$$ S(x) = \sum_i c_i B_{i,1} =
\left\{ \begin{array}{ll}
0 & \mbox{if } 0 \le x < .25 \\
4x - 1 & \mbox{if } .25 \le x < .5 \\
-4x + 3 & \mbox{if } .5 \le x < .75 \\
0 & \mbox{if } .75 \le x < 1
\end{array}
\right. $$

Now, if we want to evaluate $S$ at any given $x$, we can directly deduce which polynomial piece to use and then evaluate it with $d$ multiplications and $d + 1$ additions.

I've implemented this calculation explicitly using Polynomial objects in JavaScript. See https://cs-numerical-analysis.github.io/.

Source: https://github.com/cs-numerical-analysis/cs-numerical-analysis.github.io/blob/master/src/graphs/BSpline.js

I want to know why people do not use the algorithm I have described. If you compute the polynomial representation of the B-spline and flatten it out ahead of time, that is a one-time cost. Shouldn't this one-time cost be more than offset by eliminating the unnecessary recursive averaging?
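
To make the idea concrete, here is a minimal Python sketch of the flattened evaluation for the degree-1 example above, with the pieces hard-coded (in the linked implementation they are computed once from the knots and control points):

# Flattened piecewise representation of S(x) for the example above:
# each entry is (end of interval, coefficients from low to high degree).
PIECES = [
    (0.25, (0.0, 0.0)),    # 0        on [0, .25)
    (0.50, (-1.0, 4.0)),   # 4x - 1   on [.25, .5)
    (0.75, (3.0, -4.0)),   # -4x + 3  on [.5, .75)
    (1.00, (0.0, 0.0)),    # 0        on [.75, 1)
]

def eval_flattened(x):
    for end, coeffs in PIECES:
        if x < end:
            # Horner evaluation: d multiplications, d additions.
            result = 0.0
            for c in reversed(coeffs):
                result = result * x + c
            return result
    return 0.0

print(eval_flattened(0.4))  # -> 0.6 (= 4 * 0.4 - 1)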

Bellman-Ford Algorithm with n-1 iterations

I am supposed to construct a graph for the Bellman-Ford algorithm on which all n-1 iterations are required.

Can I use this graph, where S is my initial node?

[Graph image hosted on imgur]

Thank you.
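
For reference (the imgur image is not visible here), the classic construction that forces all n-1 passes is a simple path from S with the edges relaxed in the worst order, farthest from S first. A minimal Python sketch with a hypothetical 5-vertex path:

def bellman_ford_passes(n, edges, source):
    # Run Bellman-Ford and count how many passes actually change a distance.
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    passes_used = 0
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break
        passes_used += 1
    return passes_used, dist

# Path 0 -> 1 -> 2 -> 3 -> 4, edges listed farthest-from-source first,
# so each pass finalizes only one more vertex: all n-1 passes are needed.
edges = [(3, 4, 1), (2, 3, 1), (1, 2, 1), (0, 1, 1)]
print(bellman_ford_passes(5, edges, 0))  # -> (4, [0, 1, 2, 3, 4])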

sorting – Quicksort algorithm with pivot element as the median

I cannot understand this concept.
Suppose our partition algorithm always chooses the median as the pivot element; what will the time complexity of Quicksort be for the following inputs?

Case a) The input array is sorted or almost sorted (in ascending order).

Case b) The input array is sorted in reverse order.

Case c) All elements of the input array are identical.

Case d) The input is an arbitrary/random unsorted array.

And what about the partitions themselves? Will they always be balanced, roughly n/2 and n/2, in all of these cases?
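
Not an answer sheet, but a minimal Python sketch of a median-pivot quicksort to experiment with (statistics.median_low is used purely for illustration; a real implementation would use O(n) median-of-medians selection). With an exact median pivot, each level splits into halves of at most about n/2 elements, giving the recurrence T(n) = 2T(n/2) + O(n) = O(n log n); how case c behaves depends on whether keys equal to the pivot get their own partition, as they do here:

import statistics

def quicksort_median_pivot(arr):
    # Sketch only: median_low makes the pivot the true median, so the two
    # recursive calls each receive at most about half of the elements.
    if len(arr) <= 1:
        return arr
    pivot = statistics.median_low(arr)
    left = [x for x in arr if x < pivot]
    mid = [x for x in arr if x == pivot]   # three-way split handles duplicates
    right = [x for x in arr if x > pivot]
    return quicksort_median_pivot(left) + mid + quicksort_median_pivot(right)

print(quicksort_median_pivot([5, 1, 4, 1, 5, 9, 2, 6]))  # -> [1, 1, 2, 4, 5, 5, 6, 9]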

Open Source database on the cloud to test the algorithm

I have developed an algorithm that should improve both query execution time and the accuracy of results. I am looking for free or open-source database systems on which to run this algorithm; even a simulator or emulator would work. Please suggest some available resources/websites/platforms. Also, which database would be suitable for testing the algorithm? Thank you.

algorithm – Optimizing a slow pixel-counting for loop

So I have read the following questions:

But they both either use libraries that I am not using (Cython), or are very specific in their approach and optimization requirements.

Here is my current simple loop:

from PIL import Image
import numpy as np

image = Image.open("basic.jpg")
imarr = np.array(image)

# Iterate over y (rows) first because of NumPy's row/column layout
y = 0
for yrow in imarr:
    for x in range(len(yrow)):
        pass  # use (x, y) for manipulation in the array and comparison
    y += 1

Using time (time python c.py), it takes about 6 seconds to go over a 250,000-byte image. That's fine, but I think it could be a lot faster, and since this is a continuously-streaming image manipulation algorithm, lower latency would be better.

Can you recommend resources for learning the relevant concepts? Also, how can I optimize this?

So far, my knowledge of NumPy arrays and Pillow has taken me this far, but not efficiently.
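
A hedged sketch of the usual remedy: let NumPy do the per-pixel work in C rather than in a Python double loop. The threshold comparison below is a made-up stand-in for whatever per-pixel test the real algorithm performs:

from PIL import Image
import numpy as np

imarr = np.array(Image.open("basic.jpg"))

# Vectorized: test every pixel at once instead of looping in Python.
# (The "> 128" test is a hypothetical stand-in for the real comparison.)
mask = imarr > 128                 # boolean array, same shape as the image
count = np.count_nonzero(mask)     # how many subpixel values match

# If the (x, y) coordinates of matching pixels are needed:
pixel_mask = mask.any(axis=-1) if mask.ndim == 3 else mask
ys, xs = np.nonzero(pixel_mask)
print(count, len(xs))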

algorithm – What is the best way to turn a 2D vector into a closest 8-direction compass direction?

The easiest way is probably to get the angle of the vector using atan2(), as Tetrad suggests in the comments, then scale and round it, e.g. (pseudocode):

// listed counterclockwise, starting from east = 0:
enum compassDir {
    E = 0, NE = 1,
    N = 2, NW = 3,
    W = 4, SW = 5,
    S = 6, SE = 7
};

// for string conversion, if you can't just do, say, dir.toString():
const string[8] headings = {"E", "NE", "N", "NW", "W", "SW", "S", "SE"};

// actual conversion code:
float angle = atan2(vector.y, vector.x);
int octant = round(8 * angle / (2 * PI) + 8) % 8;

compassDir dir = (compassDir) octant;  // cast the int to enum: 0 -> E, etc.
string dirStr = headings[octant];

The octant = round(8 * angle / (2 * PI) + 8) % 8 line might need some explanation. In just about every language I know of that has it, the atan2() function returns the angle in radians. Dividing it by 2π converts it from radians to fractions of a full circle, and multiplying it by 8 converts it to eighths of a circle, which we then round to the nearest integer. Finally, we reduce it modulo 8 to account for wrap-around, so that both 0 and 8 map correctly to east.

The reason for the + 8, which I glossed over above, is that in some languages atan2() can return negative results (i.e. from −π to +π rather than from 0 to 2π) and the modulo operator (%) may return negative values for negative arguments (or its behavior for negative arguments may be undefined). Adding 8 (i.e. one full turn) to the input before the reduction ensures that it is always positive, without affecting the result in any other way.

If your language does not provide a convenient round-to-nearest function, you can use a truncating integer conversion and simply add 0.5 to the argument, like this:

int octant = int(8 * angle / (2 * PI) + 8.5) % 8;  // int() truncates

Note that, in some languages, the default float-to-integer conversion rounds negative inputs up toward zero rather than down, which is another reason to make sure the input is always positive.

Of course, you can replace all occurrences of 8 on that line with another number (e.g. 4 or 16, or even 6 or 12 if you're on a hex grid) to divide the circle into that many directions. Just adjust the enum/array accordingly.
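
For concreteness, here is a direct Python translation of the pseudocode above (a minimal sketch; the vector is just an (x, y) pair):

import math

HEADINGS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def compass_direction(x, y):
    # Angle in radians, scaled to eighths of a circle; the + 8 keeps the
    # value positive before the modulo, exactly as described above.
    angle = math.atan2(y, x)
    octant = round(8 * angle / (2 * math.pi) + 8) % 8
    return HEADINGS[octant]

print(compass_direction(1, 1))   # -> "NE"
print(compass_direction(-1, 0))  # -> "W"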

Algorithm appropriate for a graph theory problem

So, I recently encountered a graph theory problem, and I am unable to find a matching algorithm for it or to reformulate it so that it matches an existing algorithm.

The problem is quite simple: given a weighted directed graph, choose edges so as to maximize the sum of the weights of the selected edges, where each vertex may be the tail of at most one selected edge and no vertex may be the head of more than one selected edge.

So far, this might seem like a problem that can be solved with a matching algorithm, but there is an additional constraint: a vertex cannot be the head of a selected edge unless it is not the tail of any edge in the given graph, or it is the tail of one of the selected edges. In addition, the graph of selected edges must be acyclic.

A good analogy is to imagine each vertex as a cell. Mark every vertex that is initially the tail of some edge as a cell containing an object. Choosing an edge then means moving the object from one cell to another. This analogy fits because:

  • A vertex can be the tail of at most one selected edge (i.e. an object can only be moved to one other cell)
  • A vertex can be the head of at most one selected edge (i.e. only one object can be moved into a cell)
  • A vertex cannot be the head of a selected edge unless it is not the tail of any edge, or it is the tail of one of the selected edges (i.e. the cell was either initially empty, or its object has been moved to another cell, emptying it)

As good as this analogy is, I have not found any algorithm that helps. Is brute-forcing over edge combinations as good as it gets, or can I choose the edges in a more optimized way?
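
For small instances, a brute-force baseline under my reading of the constraints could look like the sketch below; the vertex names and toy edge list are made up, and the validity test encodes the three bullet points plus the acyclicity requirement:

from itertools import combinations

def valid(selected, all_tails):
    tails = [u for u, v, w in selected]
    heads = [v for u, v, w in selected]
    # Each vertex is the tail of at most one selected edge,
    # and the head of at most one selected edge.
    if len(set(tails)) != len(tails) or len(set(heads)) != len(heads):
        return False
    # A head must either be initially "empty" (not a tail anywhere in the
    # graph) or have its object moved away (be a tail of a selected edge).
    for u, v, w in selected:
        if v in all_tails and v not in tails:
            return False
    # The selected edges must be acyclic; out-degree <= 1 means we can
    # simply follow each chain and look for a repeated vertex.
    nxt = {u: v for u, v, w in selected}
    for start in nxt:
        seen, cur = set(), start
        while cur in nxt:
            if cur in seen:
                return False
            seen.add(cur)
            cur = nxt[cur]
    return True

def best_selection(edges):
    all_tails = {u for u, v, w in edges}
    best, best_weight = (), 0
    for r in range(1, len(edges) + 1):
        for subset in combinations(edges, r):
            weight = sum(w for u, v, w in subset)
            if weight > best_weight and valid(subset, all_tails):
                best, best_weight = subset, weight
    return best, best_weight

# Hypothetical toy instance: (tail, head, weight) triples.
edges = [("a", "b", 3), ("b", "d", 2), ("c", "a", 4)]
print(best_selection(edges))  # -> all three edges, total weight 9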