javascript – How to increase efficiency of function that decodes all values of a given object from Base64

I have a function decodeVals in Node.js that decodes all of the values of a given object from Base64 to ASCII. This is achieved by traversing the values of the object and converting each element one at a time via a call to the “homemade” atob function.

Though the results are fast on objects I’ve tested – which are relatively small – I imagine this could be inefficient for much larger objects.


Does anyone know of a more efficient/less expensive way of achieving this?

const atob = (str) => Buffer.from(str, 'base64').toString();


// Decoder Function
function decodeVals(obj) {
  for (let [key, value] of Object.entries(obj)) {
    if (!Array.isArray(value)) {
      obj[key] = atob(value);
    } else {
      for (let arrayElement of value) {
        obj[key][obj[key].indexOf(arrayElement)] = atob(arrayElement);
      }
    }
  }
  return obj;
}


const testObj = {  
  correct_answer: 'VHJlbnQgUmV6bm9y',      
  incorrect_answers: ['TWFyaWx5biBNYW5zb24=', 'Um9iaW4gRmluY2s=', 'Sm9zaCBIb21tZQ=='],
};

const res = decodeVals(testObj);

console.log(res);
/*
 {
  correct_answer: 'Trent Reznor',
  incorrect_answers: [ 'Marilyn Manson', 'Robin Finck', 'Josh Homme' ]
 }
*/
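
For what it's worth, the main avoidable cost in decodeVals is the indexOf lookup inside the inner loop, which rescans the array for every element and so makes decoding an array quadratic in its length (and misbehaves if the array contains duplicate entries). One possible rewrite, sketched here reusing the same atob helper and returning a new object instead of mutating the input, converts array values in a single map pass:

// Decodes every value of the object; arrays are decoded in one pass with map,
// so each element is converted exactly once and no indexOf scans are needed.
function decodeValsMapped(obj) {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    out[key] = Array.isArray(value) ? value.map(atob) : atob(value);
  }
  return out;
}

console.log(decodeValsMapped(testObj));

For small trivia-style objects either version is effectively instant, so the difference only matters once the arrays become very large.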

Database Structure for The Best Efficiency

When storing messages from a group chat, would it make the MySQL database faster to add a new entry (row) for every message, or to hold all the messages in one row in a BLOB (or long splittable string) format?

proof of work – Appending txns to merkle tree on PoW efficiency, true hash rate

The current network hashrate is around 150 EH/s, which means around 90,000,000,000,000,000,000,000 hashes are computed for every block (150×10^18 hashes per second times roughly 600 seconds per block). This is an incredible amount of computation, with energy costs comparable to those of smaller countries.

Creating a merkle tree from mempool transactions and validating them, on the other hand, can be done easily even on a low-end consumer laptop. Now, there are multiple miners and mining pools doing that, but it doesn’t change the overall picture. Proof of work easily dominates all other computation done in the bitcoin network.

gpu mining – Power Efficiency of rate limited NVidia cards

NVidia is rate limiting some cards to 50% of possible hashrate. What effect does this have on power efficiency?

I can imagine it reducing power efficiency by cutting the hashrate alone (same power, but a lower hashrate). On the other hand, I could imagine it increasing power efficiency (by cutting voltage when it detects hashing, halving performance but doubling power efficiency). I haven't bought a card yet, so I cannot test.

plotting – How can I plot (and compare) the efficiency of executing 2 functions?

I have two functions as follows,

Multifactorial[n_, k_] := Abs[Apply[Times, Range[-n, -1, k]]]
For[i = 1, i < 11, i++, Print[N[Sum[1/Multifactorial[n, i], {n, 0, 150}], 20]]]

and

ClosedFormRMFC[n_] := 1 + 1/n Exp[1/n] Sum[n^(k/n) Gamma[k/n, 0, 1/n], {k, n}]
For[i = 1, i < 11, i++, Print[N[ClosedFormRMFC[i], 20]]]

Both functions produce identical output: ten numerical constants to 20 decimal places of accuracy.

I am not sure how to compare the efficiency of executing these two functions.

The first function is an infinite series and as such needs arbitrarily large partial sums to reach higher accuracy, while the second gives a closed-form formula that just needs to be evaluated numerically.

For higher values of accuracy I assume the second function must be computationally more efficient.

My question is how can I test this in Mathematica?
How do I determine which is more efficient asymptotically?
Is there some way to plot a graph showing the time taken in executing both functions for various levels of accuracy (in output)?

efficiency – Efficient method for finding 1’s in binary representations

Say I have a binary number of N bits, and I need to find every combination that has M 1's. For example, if N = 3 and M = 1, then 100, 010, 001 are the allowed combinations. I have read that a more efficient way of finding these combinations for large N is to divide the bits into two halves, and then combine combinations from each half such that the total number of 1's is M. For example, if N = 3 and M = 2, and we divide the bits into lengths of two and one, the half with one bit has only the combinations 0 and 1, while the half with two bits has the combinations 00, 01, 10, 11. 0 can only be combined with 11, and 1 can only be combined with 01 and 10. Thus the only total combinations are 011, 101, and 110.

Why is this method more efficient?
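
One way to see it, sketched below in JavaScript (the names are mine, and the per-half enumeration is deliberately naive; the point is only the pairing step): each half is enumerated once, and full combinations are produced by pairing every high-half pattern containing m ones with every low-half pattern containing M - m ones.

// Returns every `bits`-bit value containing exactly `ones` set bits.
function combosWithOnes(bits, ones) {
  const result = [];
  for (let v = 0; v < (1 << bits); v++) {
    let count = 0;
    for (let t = v; t; t >>= 1) count += t & 1;
    if (count === ones) result.push(v);
  }
  return result;
}

// Meet-in-the-middle: enumerate each half, then glue compatible halves together.
function meetInTheMiddle(N, M) {
  const lowBits = N >> 1;          // size of the low half
  const highBits = N - lowBits;    // size of the high half
  const combos = [];
  for (let m = 0; m <= M; m++) {
    if (m > highBits || M - m > lowBits) continue;
    for (const hi of combosWithOnes(highBits, m)) {
      for (const lo of combosWithOnes(lowBits, M - m)) {
        combos.push((hi << lowBits) | lo);   // hi ones + lo ones = M ones
      }
    }
  }
  return combos;
}

console.log(meetInTheMiddle(3, 2).map(v => v.toString(2).padStart(3, '0')));
// [ '011', '101', '110' ]

The saving comes from the halves: each half has only 2^(N/2) patterns to enumerate and cache instead of scanning all 2^N values of the full width, and the pairing step only ever combines halves whose counts of 1's sum to M.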

python – AStar Implementation and Efficiency

This is my implementation of an AStar-like algorithm for maze solving.

A quick summary of the problem I am trying to solve with my algorithm: a simple binary maze is given to you to solve, and you may only move in the standard cardinal directions: north, east, west and south. There is a twist, however: you may also break one wall, and only one. Create a function solution(maze) that determines the minimal number of steps to reach the end, defined as the point (width-1, height-1), from the start point (0, 0).

class AStar:
    def __init__(self, maze):
        self.queue = []
        self.visited = set()
        self.maze = maze
        self.end = len(self.maze)-1, len(self.maze[0])-1

    def forward(self):
        self.queue.append((0, 0, False, 0, 0, None))

        while self.queue:
            self.queue = sorted(self.queue, key=lambda x: x[-3]+x[-4])  #Might be suboptimal to sort every time...
            node = self.queue.pop(0)
            
            if node[0:2] == self.end:
                return node
            self.visited.add(node[0:2])

            new_nodes = self.rulebook(node)
            self.queue.extend(new_nodes)
    
    def rulebook(self, node):
        x, y, broken, cost, heuristic, _ = node
        new_nodes = []

        #------RULES & ACTIONS-----#
        for direction in ((0, 1), (1, 0), (-1, 0), (0, -1)):
            x_dir, y_dir = direction
            x_pdir = x + x_dir
            y_pdir = y + y_dir
            distance = self.distance((x_pdir, y_pdir), self.end)
            if (x_pdir, y_pdir) not in self.visited:
                if (x_pdir,y_pdir) not in self.visited:
                    if (x_pdir < len(self.maze) and y_pdir < len(self.maze[0])):
                        if (x_pdir >= 0 and y_pdir >= 0):
                            if self.maze[x_pdir][y_pdir] == 1:
                                if not broken:
                                    new_nodes.append((x_pdir, y_pdir, True, cost+1, 
                                        distance, node))
                            elif self.maze[x_pdir][y_pdir] == 0:
                                new_nodes.append((x_pdir, y_pdir, False, cost+1, 
                                    distance, node))
        return new_nodes
    def distance(self, node, end):
        #Chose the Taxicab Metric as it is more of a fit for the problem, 
        #as you cannot go diagonally in the problem statement.
        x = node[0]
        y = node[1]
        end_x = end[0]
        end_y = end[1]
        return abs(x - end_x) + abs(y - end_y)
    def backward(self, node):
        steps = 0
        while node != None:
            steps += 1
            node = node[-1]
        return steps

def solution(maze):
    astar = AStar(maze)
    end_node = astar.forward()
    steps = astar.backward(end_node)
    return steps

optimization – In a language interpreted line by line – is optimizing similar lines of code within a module into functions better in terms of efficiency?

Python does not interpret line by line. The default Python implementation (CPython) compiles the entire module to bytecode and then runs it. However, the CPython implementation does not place an emphasis on optimizations. The interpreter will do exactly what you tell it to do, which means that small changes to your code can have a big performance effect. In particular, function or method calls are relatively “slow” in CPython.

But in absolute terms, it doesn't matter for performance. You're writing code for GUI automation. The interaction with the GUI will be far slower than calling a function or parsing some lines of code. This is a bit like being on a journey between two cities. You are proposing to take a shortcut that will save you a minute on this journey, when you'll actually spend an hour waiting in an airport to board a flight.

So performance doesn't really matter here. What does matter? Maintainability. Instead of copying and pasting the same code four times, it is usually better to clearly separate the things that stay the same from the things that differ between instances of this code. A function is the perfect tool to express that. Thus, while your alternative solution with a function might run 200 nanoseconds slower, it is the objectively better approach here.
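
As a deliberately generic sketch of that idea (the ui object, its checkExists/click methods, and the element names below are hypothetical stand-ins, not any particular automation library's API): the part that stays the same becomes the function body, and whatever differed between the pasted copies becomes a parameter.

// Hypothetical stand-in for a GUI-automation handle; a real one would talk to the screen.
const ui = {
  checkExists: (elementId) => true,                            // stub: pretend the element is there
  click: (elementId) => console.log(`clicked ${elementId}`),   // stub: just log the click
};

// The repeated block lives in one place...
function clickIfPresent(handle, elementId) {
  if (handle.checkExists(elementId)) {
    handle.click(elementId);
  }
}

// ...and the four pasted copies collapse into data.
for (const id of ['ok_button', 'next_button', 'save_button', 'close_button']) {
  clickIfPresent(ui, id);
}

The extra function call costs nanoseconds; the single definition is the thing you will actually be maintaining and, if needed, optimising.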

In reality, writing maintainable code with good abstractions is good for performance. When your code is easy to understand, it is easier to understand how performance can be improved, and to implement those improvements. For example, if you find that there’s a faster alternative for the checkExists() method, you’ll now only have to update it in one place. Of course, most code is not performance-critical, so such optimizations are unlikely to have a noticeable effect.

real analysis – Statistical efficiency in context of neural networks?

According to https://en.wikipedia.org/wiki/Efficiency_(statistics), an estimator with statistical efficiency is one that “… needs fewer observations than a less efficient one to achieve a given performance.”

But I am reading Deep Learning by Aaron C. Courville, Ian J. Goodfellow, and Yoshua Bengio, in which they also use the term statistical efficiency. However, I cannot tell whether they intend it to mean the same thing. The example that I would like you to look at is the following (it appears in the context of explaining the potential advantages of using convolutions in neural networks; $m$ is the input size, $n$ is the number of outputs, and $k$ is the kernel size):

In a convolutional neural net, each member of the kernel is used at
every position of the input (except perhaps some of the boundary
pixels, depending on the design decisions regarding the boundary). The
parameter sharing used by the convolution operation means that rather
than learning a separate set of parameters for every location, we
learn only one set. This does not affect the runtime of forward
propagation—it is still O(k × n)—but it does further reduce the
storage requirements of the model to k parameters. Recall that k is
usually several orders of magnitude less than m. Since m and n are
usually roughly the same size, k is practically insignificant compared
to m × n. Convolution is thus dramatically more efficient than dense
matrix multiplication in terms of the memory requirements and
statistical efficiency.

I cannot see why having fewer parameters to store, and thus fewer operations to carry out, in itself has anything to do with statistical efficiency as per the definition in the wiki link. Sadly, statistical efficiency is not defined in the book, so it must refer to some universal definition of sorts.

Can you share your thoughts on this topic, please?

c – efficiency difference of & vs == in if

I was reading some code and stumbled over a weird if condition:

#define compVal 1
uint someVal;
...
if (someVal & compVal)
...

The code should only ever allow someVal to be 0 or 1, so I guess we could consider it to be boolean. The guy who wrote the code is fairly famous for writing incredibly efficient code; however, I wonder whether that is any better than

if (someVal == compVal)
...

or what would probably be the most efficient, but in that case less readable:

if(someVal)
...

Any thoughts about that regarding resource efficiency, readability, etc.?

Thanks!