## computational geometry – Efficient Data Structure for Closest Euclidean Distance

The question is inspired by the following UVa problem: https://onlinejudge.org/index.php?option=onlinejudge&Itemid=99999999&category=18&page=show_problem&problem=1628.

A network of autonomous, battery-powered data-acquisition stations has been installed to monitor the climate in the Amazon region. An order-dispatch station can initiate transmission of instructions to the control stations so that they change their current parameters. To avoid overloading the battery, each station (including the order-dispatch station) can only transmit to two other stations. The recipients of a station are the two closest stations. In case of a tie, the first criterion is to choose the westernmost station (leftmost on the map), and the second criterion is to choose the southernmost (lowest on the map).
You are commissioned by the Amazon State Government to write a program that decides whether, given the location of each station, messages can reach all stations.

The naive algorithm would of course build a graph with stations as vertices and calculate the edges of a given vertex by searching through all other vertices for the closest two. Then we could simply run DFS/BFS. Of course, this takes $$O(V^2)$$ time to construct the graph (which does pass the test cases). My question, though, is whether we can build the graph any faster with an appropriate data structure. Specifically, given an arbitrary query point $$p$$ and a set of points $$S$$, can we organize the points of $$S$$ in such a way that we can quickly find the two closest points in $$S$$ to $$p$$ (say, in $$O(\log V)$$ time)?
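A 2-d tree (KD-tree) seems to support exactly this query: build it once in $$O(V \log V)$$, then ask for the two nearest neighbors of each station; for reasonably distributed points each query is $$O(\log V)$$ on average (the worst case is weaker). A stdlib-only Python sketch, ignoring the problem's westernmost/southernmost tie-breaking rules:

```python
import heapq

def build_kdtree(points, depth=0):
    """Recursively build a 2-d tree: split on x at even depths, y at odd."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid], axis,
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def two_nearest(node, target, heap=None):
    """Return the two points closest to `target` (excluding `target` itself),
    as a max-heap of (-squared_distance, point) pairs."""
    if heap is None:
        heap = []
    if node is None:
        return heap
    point, axis, left, right = node
    if point != target:
        d2 = (point[0] - target[0]) ** 2 + (point[1] - target[1]) ** 2
        heapq.heappush(heap, (-d2, point))
        if len(heap) > 2:
            heapq.heappop(heap)        # drop the farthest of the three
    diff = target[axis] - point[axis]
    near, far = (left, right) if diff <= 0 else (right, left)
    two_nearest(near, target, heap)
    # only descend the far side if it could still hold a closer point
    if len(heap) < 2 or diff * diff < -heap[0][0]:
        two_nearest(far, target, heap)
    return heap

stations = [(0, 0), (1, 0), (3, 0), (0, 2), (5, 5)]
tree = build_kdtree(stations)
pair = sorted(p for _, p in two_nearest(tree, (0, 0)))  # -> [(0, 2), (1, 0)]
```

Building all $$V$$ adjacency lists this way is $$O(V \log V)$$ on average instead of $$O(V^2)$$; the tie-breaking criteria from the statement would still have to be layered on top of the distance comparison.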

## abstract algebra – What is the structure or metric of this space? Patchwork

Imagine a “patchwork”. Pieces of the patchwork have colors (red, orange, yellow, green… and shades) and some other properties.

Green piece A and blue piece B may have similar properties, but still be far away from each other on the patchwork.

But if that is the case, then there must be another green-shaded piece close to the blue piece B, OR (vice versa) another blue-shaded piece close to the green piece A, OR maybe another two pieces with green and blue shades close together.


So there are two distance metrics (distance on the patchwork and distance in color space), and they are somehow entwined.

Do you know any spaces with similar structure/metric?
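The simplest formal model I can think of for two entwined distances is a product metric on (position, color) pairs; the patchwork constraint ("a similar shade appears nearby") is then an extra condition on the point set rather than on the metric itself. A Python sketch (the coordinates and the RGB-like color space are my own assumptions, just to make the two metrics concrete):

```python
import math

def patch_dist(p, q):
    return math.dist(p, q)        # distance on the patchwork

def color_dist(c, k):
    return math.dist(c, k)        # distance in an RGB-like color space

def product_metric(a, b):
    """Product metric on (position, color) pairs: small only when the
    pieces are close in BOTH senses."""
    return math.hypot(patch_dist(a[0], b[0]), color_dist(a[1], b[1]))

A = ((0.0, 0.0), (0.0, 1.0, 0.0))   # green piece at the origin
B = ((3.0, 4.0), (0.0, 0.0, 1.0))   # blue piece far away on the patchwork
```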

Can you describe that as a category? For example:

If obj. A has similar properties to obj. B — there’s a morphism from A to B (morphism f)

If obj. C is close to obj. B — there’s a morphism from B to C (morphism g)

If obj. A has a color similar to obj. C’s color — there’s a morphism from A to C (but that’s also the composition of two previous morphisms! voila!)

Might it have something to do with kernel methods?

## algorithms – How to calculate time complexity of KD-tree data structure

I have made a KD-tree data structure for a project I’ve been working on. But I can’t seem to figure out the query complexity for it.

What I know: a KD-tree uses a BST structure, so a search for a single element would be O(log(n)). But in a KD-tree, the deeper the search goes down the tree, the more recursive calls into subtrees are made, because of the intersects method, which is how it searches a specified interval. I see a lot of different articles about the query complexity, but all of them assume balanced KD-trees; what about unbalanced trees?

My thoughts:
I know the best case for a query in an unbalanced tree is O(log(n)), since finding 1 element is O(log(n)) in a BST, and the same holds for a KD-tree when the query interval is just large enough for 1 element. But this is almost never the case.
I know the worst case should be O(N) for an unbalanced BST, since the structure would degenerate into a linked list, and this could theoretically happen for an unbalanced KD-tree too.

Question:
How would I go about finding the average case for a query, since both the best case and the worst case are unlikely to happen?
And how do people achieve O(k + sqrt(N)) query complexity for a balanced KD-tree?
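For the balanced case, the usual counting argument (found in standard computational-geometry textbooks) bounds the number of nodes whose region intersects one boundary line of the query rectangle: two levels down, a vertical line can cross at most 2 of the 4 grandchild regions, giving the recurrence

```latex
Q(n) = 2\,Q(n/4) + O(1) \quad\Longrightarrow\quad Q(n) = O(\sqrt{n}).
```

Summing over the boundary of the query range and adding the $$k$$ reported points yields $$O(\sqrt{N} + k)$$. The $$n/4$$ step relies on median splits, which is exactly what an unbalanced tree lacks; average-case statements therefore usually assume the tree is built on random or median-partitioned data.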

## performance tuning – Efficiently populate sparse matrix with band structure

I’m trying to efficiently populate elements of a very large (2^20 x 2^20) symmetric matrix with 1s – luckily the matrix is very sparse, <0.1% filling. Further, the matrix has a very well defined periodic banded structure, as shown here:


In reality, this matrix is the result of a series of KroneckerProducts of 2×2 matrices, which is what gives it that characteristic banded structure – I’m hoping to find a way to speed up the generation without using kronecker products, because even with sparse matrices, the computation can take several seconds or minutes depending on the final dimensionality.

My first question relates to creating this sparse matrix efficiently. I’ve toyed with lots of different ways of generating even simple bands for the sparse array. For simply populating on the diagonal, the quickest method clearly seems to be to use the {i_,i_} notation, as shown here:

dim = 15;

aa = SparseArray[{i_, i_} -> 1, {2^dim, 2^dim}] // RepeatedTiming;
bb = SparseArray[Band[{1, 1}] -> 1, {2^dim, 2^dim}] // RepeatedTiming;
cc = SparseArray[Table[{ii, ii} -> 1, {ii, 2^dim}], {2^dim, 2^dim}] // RepeatedTiming;
dd = SparseArray[Normal[AssociationThread[Table[{ii, ii}, {ii, 2^dim}] -> Table[1, {ii, 2^dim}]]], {2^dim, 2^dim}] // RepeatedTiming;

Column[{aa[[1]], bb[[1]], cc[[1]], dd[[1]]}]

aa[[2]] == bb[[2]] == cc[[2]] == dd[[2]]
0.000309
0.031
0.081
0.054

True

However, when we try to do off-diagonal entries, this gets much worse, presumably because the pattern condition has to be checked repeatedly:

dim = 15;

aa = SparseArray[{i_, j_} /; j - i == 1 -> 1., {2^dim, 2^dim}] // RepeatedTiming;
bb = SparseArray[Band[{1, 2}] -> 1, {2^dim, 2^dim}] // RepeatedTiming;
cc = SparseArray[Table[{ii, ii + 1} -> 1, {ii, 2^dim - 1}], {2^dim, 2^dim}] // RepeatedTiming;
dd = SparseArray[Normal[AssociationThread[Table[{ii, ii + 1}, {ii, 2^dim - 1}] -> Table[1, {ii, 2^dim - 1}]]], {2^dim, 2^dim}] // RepeatedTiming;

Column[{aa[[1]], bb[[1]], cc[[1]], dd[[1]]}]

aa[[2]] == bb[[2]] == cc[[2]] == dd[[2]]
0.185
0.031
0.095
0.052

True

From those two examples it seems like Band is our best choice, but Band is still painfully slow, especially compared to {i_, i_} for the diagonal. This is all the more frustrating because in MATLAB the same problem can be solved an order of magnitude faster (the equivalent MATLAB code took ~1.4 ms).

But the fact that the original {i_,i_} case for the diagonal was so fast suggests that there is a more efficient way to do this.

So then my first question is: given all of that, is there a more efficient way to populate the bands of our sparse matrix, so that the speed can at least rival the equivalent in MATLAB?

And my second question, a bit predicated on the first: with whatever method is most efficient, what is the best way to generate the periodic banding structure present in the final matrix (see above)? You can accomplish it with Band by manually inserting runs of 0s, but that can't be the most efficient way.

Finally, because of the period-2 banded structure of the final matrix, where each quadrant is a recursive block of ever-smaller diagonal matrices with side lengths shrinking by factors of 2, maybe one could generate all the smaller diagonal blocks and then place them in the final matrix; I'm not sure how this would be accomplished, however. Of course, remember that the matrix is symmetric, so only one triangle has to be generated and then mirrored, which should help with efficient generation.
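On the recursive-block idea: because the matrix is a chain of Kronecker products of 2×2 factors, the positions of its nonzeros can be generated purely by index arithmetic, never forming any intermediate product. The code above is Mathematica; this stdlib-only Python sketch just illustrates the index structure, and the resulting coordinate list (shifted to 1-based indices) could then be handed to a single SparseArray[coords -> values, ...] call:

```python
def kron_nonzeros(factors):
    """Nonzero (row, col, value) triples of the Kronecker product of a list
    of 2x2 matrices, from index arithmetic alone (no dense product formed).
    Indices are 0-based."""
    coords = [(0, 0, 1)]
    for f in factors:
        nz = [(r, c, f[r][c]) for r in range(2) for c in range(2) if f[r][c]]
        coords = [(2 * R + r, 2 * C + c, V * v)
                  for (R, C, V) in coords for (r, c, v) in nz]
    return coords

def kron(a, b):
    """Naive dense Kronecker product, only for checking the sparse version."""
    na, nb = len(a), len(b)
    return [[a[i // nb][j // nb] * b[i % nb][j % nb]
             for j in range(na * nb)] for i in range(na * nb)]

I2 = [[1, 0], [0, 1]]
X  = [[0, 1], [1, 0]]

dense = kron(I2, X)
sparse = kron_nonzeros([I2, X])
```

For $$n$$ factors this produces the coordinate list in time proportional to the number of nonzeros, bypassing both Band and the pattern-matching rules entirely.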

## git – First published Python module, how is my project/code structure?

So I’ve just finished my first Python module (and published it on GitHub). With this little project I’d like to learn how to distribute my code so that other users can use it as a plug-in for their own projects.

Specifically I’m looking for feedback in the following direction:

• Is the interface to the module designed correctly?
• At the beginning of the code I check the input for completeness; is this the best way to handle errors? (It looks chunky.)
• Is the repository set up correctly so that it is plug-and-play?
• In general, is this the best way to design a module, or should I work with classes instead of functions?

Any other feedback is also welcome 🙂

__main__.py:

from cutlist import getCutLists
import sys
import argparse

if __name__ == '__main__':
    #Argument parser
    text = "This program calculates the most optimal cutlist for beams and planks."
    parser = argparse.ArgumentParser(description=text)
    parser.add_argument("-i", "--input", help="custom location of input json file (e.g. 'localhost:8080/foo/bar.json')", default="")
    parser.add_argument("-o", "--output", help="custom location of output folder (e.g. 'localhost:8080/foo' -> 'localhost:8080/foo/cutlist_result.json')", default="")
    args = parser.parse_args()

    #Kick-off
    result = getCutLists(args.input, args.output)

    #Exit function with VS Code workaround
    try:
        sys.exit(result)
    except:
        print(result)

cutlist.py:

import json
from operator import itemgetter
import copy
from pathlib import Path
import os

def getSolution(reqs, combs):
    needs = copy.deepcopy(reqs)
    res = []
    res.append([])
    for comb in combs:
        #As long as all items from comb fulfill a need
        combNeed = True
        while combNeed:
            #Check if comb provides more than need (fail fast)
            for need in needs:
                if comb[need['Length']] > need['Qty']:
                    combNeed = False
            if not combNeed:
                break

            for need in needs:
                need['Qty'] -= comb[need['Length']]

            #Append result
            res[0].append(comb.copy())

    #Calculate total price
    for sol in res:
        price = round(sum(x['Price'] for x in sol), 2)

    res.append(price)

    #Return result
    return res

def getCutLists(inputstr = "", outputstr = ""):
    if inputstr:
        jsonlocation = inputstr
    else:
        jsonlocation = './input/input.json' #default input location
    print(jsonlocation)
    errstr = ""

    #Get input
    try:
        with open(jsonlocation) as f:
            data = json.load(f)
    except:
        errstr += f"could not read input file at '{jsonlocation}'. "
        return(f"Err: {errstr}")

    #Get variables from JSON object
    try:
        reqs = data['Required Lengths']
    except:
        errstr += "missing key 'Required Lengths'. "

    try:
        avail = data['Available base material']
    except:
        errstr += "missing key 'Available base material'. "

    try:
        cutwidth = data['Cut loss']
    except:
        errstr += "missing key 'Cut loss'. "

    if errstr:
        return(f"Err: {errstr}")

    #Test for required keys in array
    try:
        test = [x['Length'] for x in reqs]
        if min(test) <= 0:
            errstr += f"Err: Required length ({min(test)}) must be bigger than 0."
    except:
        errstr += "every required length needs a 'Length' key. "

    try:
        test = [x['Qty'] for x in reqs]
        if min(test) <= 0:
            errstr += f"Err: Required quantity ({min(test)}) must be bigger than 0."
    except:
        errstr += "every required length needs a 'Qty' key. "

    try:
        test = [x['Length'] for x in avail]
        if min(test) <= 0:
            errstr += f"Err: Available length ({min(test)}) must be bigger than 0."
    except:
        errstr += "every available base material needs a 'Length' key. "

    try:
        test = [x['Price'] for x in avail]
        if min(test) < 0:
            errstr += f"Err: Available price ({min(test)}) can't be negative."
    except:
        errstr += "every available base material needs a 'Price' key. "

    if errstr:
        return(f"Err: {errstr}")

    #Init other vars
    listreq = [x['Length'] for x in reqs]
    listavail = [x['Length'] for x in avail]
    minreq = min(listreq)
    res = []

    #Error handling on passed inputs
    if max(listreq) > max(listavail):
        return(f"Err: Unable to process, required length of {max(listreq)} is bigger than longest available base material with length of {max(listavail)}.")

    if cutwidth < 0:
        return(f"Err: Cut width can't be negative")

    #Make list of all available cut combinations
    combs = []
    for plank in avail:
        myplank = plank.copy()
        for cut in reqs:
            myplank[cut['Length']] = 0

        #Increase first required plank length
        myplank[reqs[0]['Length']] += 1

        #Set other variables
        myplank['Unitprice'] = myplank['Price'] / myplank['Length']

        filling = True
        while filling:
            #Calculate rest length
            myplank['Rest'] = myplank['Length']
            for i in reqs:
                length = i['Length']
                myplank['Rest'] -= ((myplank[length] * length) + (myplank[length] * cutwidth))
            myplank['Rest'] += cutwidth

            #Set rest of variables
            myplank['Baseprice'] = myplank['Price'] / (myplank['Length'] - myplank['Rest'])
            myplank['Optimal'] = (myplank['Rest'] <= minreq)

            #Check if rest length is positive
            if myplank['Rest'] >= 0:
                combs.append(myplank.copy())
                myplank[reqs[0]['Length']] += 1
            else:
                for i in range(len(reqs)):
                    if myplank[reqs[i]['Length']] > 0:
                        myplank[reqs[i]['Length']] = 0
                        if i < len(reqs) - 1:
                            myplank[reqs[i + 1]['Length']] += 1
                            break
                else:
                    filling = False

    #Sort combinations descending by remaining length, get solution
    combs = sorted(combs, key=lambda k: k['Rest'])
    res.append(getSolution(reqs, combs))

    #Sort combinations by getting biggest lengths first (largest to smallest), optimal pieces first, get solution
    listreq = sorted(listreq, reverse=True)
    listreq.insert(0, 'Optimal')
    for x in reversed(listreq):
        combs.sort(key=itemgetter(x), reverse=True)
    res.append(getSolution(reqs, combs))

    #Sort combinations by least effective price per unit, get solution
    combs = sorted(combs, key=lambda k: k['Baseprice'])
    res.append(getSolution(reqs, combs))

    #Get cheapest option & make readable format
    cheapest = min(x[1] for x in res)
    for x in res:
        if x[1] == cheapest:
            sol = {}
            sol['Required base material'] = {}
            sol['Cut list'] = []
            i = 1
            for plank in x[0]:
                if plank['Length'] not in sol['Required base material']:
                    sol['Required base material'][plank['Length']] = 0
                sol['Required base material'][plank['Length']] += 1
                str = f"Plank {i}: Length {plank['Length']}, "
                for req in reqs:
                    if plank[req['Length']] > 0: str += f"{plank[req['Length']]}x {req['Length']}, "
                str += f"rest: {plank['Rest']}"
                sol['Cut list'].append(str)
                i += 1

            sol['Total price'] = cheapest
            break

    #Get output location
    if outputstr:
        outputfile = outputstr
        if outputfile[len(outputfile) - 1] != "/":
            outputfile += "/"
        outputfile += "cutlist_result.json"
    else:
        outputfile = "./output/cutlist_result.json"

    #Make directories
    Path(os.path.dirname(outputfile)).mkdir(parents=True, exist_ok=True)

    #Output to file
    f = open(outputfile, "w")
    json.dump(sol, f, indent=4)
    f.close()

    return("Success")
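For reviewers who want to run it: an input.json shaped the way getCutLists expects (the key names come from the code above; the numbers are made up):

```json
{
    "Required Lengths": [
        { "Length": 90, "Qty": 4 },
        { "Length": 60, "Qty": 2 }
    ],
    "Available base material": [
        { "Length": 240, "Price": 5.0 },
        { "Length": 300, "Price": 6.5 }
    ],
    "Cut loss": 3
}
```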

## python – I want one Goal to have different Dates and Checks, how should I structure the relationships?

This is the models.py file

from django.db import models

These are the choices I want to have:

DISPLAY = (
    ('Done', 'Done'),
    ("Didn't do", "Didn't do"),
)

This is the Check model:

class Check(models.Model):
    Check = models.CharField(max_length=50, choices=DISPLAY, default='some')

    def __str__(self):
        return str(self.Check)

This is the Date model:

class Date(models.Model):
    Date = models.DateField(auto_now=False)

    def __str__(self):
        return str(self.Date)

This is the Goals model:

class Goals(models.Model):
    Goal = models.CharField(max_length=50)

    def __str__(self):
        return str(self.Goal)

I’m a newbie; how should I structure the relationships?

## Linear algebra – Computational complexity of calculating the trace of a matrix product under a certain structure

I have two problems involving the calculation of a trace, together with some (possibly suboptimal) approaches. My question is whether there is a more efficient algorithm for each. (I'm more interested in an answer to question 1.)

1. Let $$U, V$$ and $$F$$ be three real matrices, each of size $$d \times r$$ with $$r \ll d$$ (that is, $$U, V$$ and $$F$$ are "tall"). I want to calculate $$\mathrm{trace}(U V^\top F F^\top)$$. Setting $$A = F^\top U$$ and $$B = V^\top F$$, the trace of $$AB$$ has complexity $$\mathcal{O}(r^2 d)$$. Is there a faster algorithm (taking into account $$r \ll d$$)? Can we reach $$\mathcal{O}(r d)$$?

2. Let $$U, V$$ and $$M$$ be three real matrices. $$U$$ and $$V$$ have size $$d \times r$$ (with $$r \ll d$$), and $$M$$ is lower triangular (with positive diagonal elements) of size $$d \times d$$. I want to calculate $$\mathrm{trace}(U V^\top M M^\top)$$. The straightforward algorithm, $$A = M^\top U$$, $$B = V^\top M$$, then the trace of $$AB$$, has complexity $$\mathcal{O}(r d^2)$$. Is there a faster algorithm (taking into account $$r \ll d$$)?
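To make question 1 concrete: the $$\mathcal{O}(r^2 d)$$ rearrangement uses the cyclic property $$\mathrm{trace}(U V^\top F F^\top) = \mathrm{trace}((F^\top U)(V^\top F))$$. A stdlib-only Python check on small random matrices (the pure-Python matmul is just for illustration, not performance):

```python
import random

def matmul(A, B):
    """Plain O(n^3) matrix product on lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

random.seed(1)
d, r = 6, 2
U = [[random.random() for _ in range(r)] for _ in range(d)]
V = [[random.random() for _ in range(r)] for _ in range(d)]
F = [[random.random() for _ in range(r)] for _ in range(d)]

# direct way: form the d x d matrix U V^T F F^T, then take its trace
big = trace(matmul(matmul(U, transpose(V)), matmul(F, transpose(F))))

# O(r^2 d) way: A = F^T U and B = V^T F are r x r, so trace(AB) is cheap
A = matmul(transpose(F), U)
B = matmul(transpose(V), F)
small = trace(matmul(A, B))

assert abs(big - small) < 1e-9
```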

If this question doesn't belong here, let me know! (If so, where should I post it?)

Thank you!

## What data structure / algorithm does LinkedIn use to determine the degree of a potential connection?

I read something about bidirectional BFS in a Quora answer, but can anyone describe more precisely how it works and how long it takes (complexity)?
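A sketch of how I understand bidirectional BFS (variable names are mine; a production version needs a more careful termination condition): run BFS from both endpoints simultaneously, always expanding the smaller frontier level by level, and stop when the frontiers meet. If the branching factor is $$b$$ and the distance $$d$$, each side explores roughly $$O(b^{d/2})$$ nodes instead of $$O(b^d)$$ for one-sided BFS.

```python
def degree(graph, src, dst):
    """Degrees of separation via bidirectional BFS (sketch)."""
    if src == dst:
        return 0
    seen_s, seen_d = {src: 0}, {dst: 0}   # node -> distance from its end
    front_s, front_d = {src}, {dst}
    while front_s and front_d:
        # always expand the smaller side: total work ~ O(b^(d/2)) per side
        if len(front_s) > len(front_d):
            front_s, front_d = front_d, front_s
            seen_s, seen_d = seen_d, seen_s
        best, nxt = None, set()
        for u in front_s:
            for v in graph.get(u, ()):
                if v in seen_d:            # frontiers met
                    cand = seen_s[u] + 1 + seen_d[v]
                    best = cand if best is None else min(best, cand)
                elif v not in seen_s:
                    seen_s[v] = seen_s[u] + 1
                    nxt.add(v)
        if best is not None:
            return best
        front_s = nxt
    return -1  # not connected

# toy network: a - b - c - d, plus an isolated member e
network = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c'], 'e': []}
```

On a path graph like the toy network, `degree(network, 'a', 'd')` is 3 (a 3rd-degree connection), and unreachable members come back as -1.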

## co.combinatorics – Do you recognize this structure configuration on a poset?

The configuration: we have a finite poset $$P$$, with a multiplicative rank function $$r_{xy}: P \times P \rightarrow \mathbb{N}$$, and a symmetric pairing $$\langle\,,\,\rangle: P \times P \rightarrow \mathbb{N}$$. Our poset has a unique minimal element $$\hat{0}$$ and a distinguished maximal element $$1$$, but $$1$$ does not necessarily cover all elements of $$P$$. From this we get an associated automorphism $$L: \mathbb{Q}[P] \rightarrow \mathbb{Q}[P]$$ given by $$L(y) = \sum_{x \leq y} \frac{r_{\hat{0}x}}{r_{\hat{0}1}} \mu_P(x, y)\, x.$$

Extending our pairing to $$\mathbb{Q}[P]$$ by linearity, we are interested in the functions $$f: P \rightarrow \mathbb{Z}$$ that satisfy the functional equation $$f(x) = \sum_{y \in P} f(y) \langle x, L(y) \rangle.$$

My question is: have you seen this kind of structure on other posets? I was told it could look like Kazhdan-Lusztig-like recurrences, but I couldn't see how to relate it to the general Kazhdan-Lusztig-Stanley polynomial of a poset. If you've seen this kind of thing in a different context, it would also be very useful to hear about it; I don't know much about these things, and any references exhibiting this configuration would be helpful to me.

## magento2 – How to create a multi-selection tree structure for personalized data

I have just created a new type of document (MSDS = Material Safety Data Sheet) for my chemical company. The MSDS has a standard chapter structure, but not all sub-chapters are required in every document. Therefore, I want to add a multi-selection tree to select the required sub-chapters as in the category tree.

In my user interface form, I have already added a standard multiselection field, but I would prefer the tree view.

Christian