performance – Game of Life state calculation in JavaScript

I’ve created a Life implementation in JavaScript with the goal of being as fast as possible. With the rendering I’m satisfied (see picture below); however, the next-state calculation is really slow and I’m out of ideas on how to speed it up further.

Rendering

Screenshot of the game
I can get 700+ FPS when rendering a total population of 6,986,628.

I achieved this by using regl for rendering and by moving the calculation of the visible cells to a separate thread (a dedicated web worker). I don’t think this part needs any more optimization, except maybe the way I calculate the visible cells.

The way I calculate visible cells

onmessage = function (e) {
    var visibleCells = [];
    for (const x of e.data.grid.keys()) {
        if (x < -(e.data.offsets.X+1)) continue; //Skip until we reach the visible part
        if (x > -e.data.offsets.X+e.data.width) break; //Stop after leaving the visible part
        for (const y of e.data.grid.get(x).keys()) {
            if (y < e.data.offsets.Y-1) continue;
            if (y > e.data.offsets.Y+e.data.height) break;
            visibleCells.push([x, y]);
        }
    }
    this.postMessage({result: visibleCells});
}
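For what it’s worth, the worker body above can be restated as a pure function, which makes the viewport filter easy to test outside the worker. A sketch (note that the early `break` relies on the Map keys having been inserted in ascending order, an assumption the original code also makes):

```javascript
// Pure version of the worker's filter: collect the [x, y] pairs that fall
// inside the viewport described by offsets, width and height.
function visibleCells(grid, offsets, width, height) {
    const out = [];
    for (const x of grid.keys()) {
        if (x < -(offsets.X + 1)) continue; // skip until the visible band starts
        if (x > -offsets.X + width) break;  // assumes keys were inserted in ascending order
        for (const y of grid.get(x)) {
            if (y < offsets.Y - 1) continue;
            if (y > offsets.Y + height) break;
            out.push([x, y]);
        }
    }
    return out;
}
```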

Representing the “universe”

I had some ideas on how to represent the Life universe, but I stuck with the last option as it turned out to be the best performing. (Note that this implementation does not restrict the space, so it is an infinite grid.)

1.1 Using a 2D array as cellState = grid[x][y]

Since we are dealing with an infinite grid, this can’t be used.

1.2 Using a 2D array as grid = [[x,y],[x1,y1],…]

Storing only the coordinates of the living cells. This has the problem of possible duplicates. I also ran some tests on jsbench.me, and it turned out that this is slower than the 2nd way (the next one).

2. Using an object

Setting an object’s properties to create the illusion of a 2D array. This somewhat worked, but had the overhead of converting int to string and vice versa, because object indexing uses strings as keys.

//Defining grid
var grid = {};

//Creating a cell at (x;y)
if (grid[x] == undefined) grid[x] = {};
grid[x][y] = null;

//Killing a cell at (x;y)
delete grid[x][y];
if (Object.keys(grid[x]).length == 0) delete grid[x];

3. Using Maps and Sets (current)

This way I can use integers as keys and don’t have to deal with the possibility of a duplicate cell.

//Defining grid
var grid = new Map();

//Creating a cell at (x;y)
if (!grid.has(x)) grid.set(x, new Set());
grid.get(x).add(y);
    
//Killing a cell at (x;y)
grid.get(x).delete(y);
if (grid.get(x).size == 0) grid.delete(x);
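For readability, the create/kill snippets above can be wrapped in small helpers (the names are mine):

```javascript
// The grid is a Map from x to a Set of y values, as in the post.
function setCell(grid, x, y) {
    if (!grid.has(x)) grid.set(x, new Set());
    grid.get(x).add(y);
}

function killCell(grid, x, y) {
    const col = grid.get(x);
    if (!col) return;                    // nothing to kill
    col.delete(y);
    if (col.size === 0) grid.delete(x);  // drop empty columns
}

function isAlive(grid, x, y) {
    const col = grid.get(x);
    return col !== undefined && col.has(y);
}
```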

This is why I’m writing this question. I don’t know how to further improve performance here.
The code for calculating the next state

onmessage = function (e) {
    var newGrid = new Map();
    var sketch = new Map();
    var start = performance.now();
    for (var x of e.data.grid.keys()) {
        var col1 = x - 1, col3 = x + 1;
        if (!sketch.has(col1)) sketch.set(col1, new Set());
        if (!sketch.has(x)) sketch.set(x, new Set());
        if (!sketch.has(col3)) sketch.set(col3, new Set());
        for (var y of e.data.grid.get(x).keys()) {
            var row1 = y - 1, row3 = y + 1;
            sketch.get(col1).add(row1);
            sketch.get(col1).add(y);
            sketch.get(col1).add(row3);
            sketch.get(x).add(row1);
            sketch.get(x).add(row3);
            sketch.get(col3).add(row1);
            sketch.get(col3).add(y);
            sketch.get(col3).add(row3);
        }
    }

    for (var x of sketch.keys()) {
        for (var y of sketch.get(x).keys()) {
            //Count neighbours
            var c = 0;
            var col1 = x - 1, col3 = x + 1;
            var row1 = y - 1, row3 = y + 1;
            if (e.data.grid.has(col1)) {
                //1st col
                var col = e.data.grid.get(col1);
                c += col.has(row1)
                c += col.has(y)
                c += col.has(row3)
            }
            if (e.data.grid.has(x)) {
                //2nd col
                var col = e.data.grid.get(x);
                c += col.has(row1)
                c += col.has(row3)
            }
            if (e.data.grid.has(col3)) {
                //3rd col
                var col = e.data.grid.get(col3);
                c += col.has(row1)
                c += col.has(y)
                c += col.has(row3)
            }


            if (c == 3) { //If a cell has 3 neighbours it will live
                if (!newGrid.has(x)) newGrid.set(x, new Set());
                newGrid.get(x).add(y);
                continue;
            }
            //but if it has 2 neigbours it can only survive not born, so check if cell was alive
            if (c == 2 && (e.data.grid.has(x) && e.data.grid.get(x).has(y))) {
                if (!newGrid.has(x)) newGrid.set(x, new Set());
                newGrid.get(x).add(y);
            }
        }
    }

    postMessage({ result: newGrid, timeDelta: performance.now() - start });
}

When the worker receives the initial grid it creates two new maps: sketch, which will contain the potential new cells (as of writing this I just noticed that I never add (x;y) itself to this grid, only its neighbours, and it still works; this is because any live cell with enough neighbours to survive is added to sketch by those neighbours, while an isolated live cell dies anyway), and newGrid, which will contain the final result. This way I only loop through the cells that may change state.

Current performance

+------------------------+-----------+--------+------+
| Population             | 6,986,628 | 64,691 | 3    |
+------------------------+-----------+--------+------+
| Iteration time (ms/i)  | 23925     | 212    | 0.16 |
+------------------------+-----------+--------+------+
| FPS (all cell visible) | 900+      | 70     | 60   |
+------------------------+-----------+--------+------+

Before you ask: I don’t know why the FPS is higher when more cells are rendered, but if you know, please write it in a comment.

Attempts to optimize

Splitting the work across (CPU cores - 2) workers

This was unusable: one iteration took minutes to compute on a ~700K population. I think this is because the grid is structured-cloned (copied) to each worker, so the copying overhead was much larger than the gain over using only one worker.
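If you want to retry the multi-worker approach, one way to cut the copying cost (a sketch, untested against your code) is to flatten the Map-of-Sets into a typed array and transfer its buffer, so postMessage moves the memory to the worker instead of structured-cloning the whole Map:

```javascript
// Flatten a Map<number, Set<number>> grid into an Int32Array of (x, y) pairs.
function flattenGrid(grid) {
    let count = 0;
    for (const col of grid.values()) count += col.size;
    const flat = new Int32Array(count * 2);
    let i = 0;
    for (const [x, col] of grid) {
        for (const y of col) {
            flat[i++] = x;
            flat[i++] = y;
        }
    }
    return flat;
}

// Main-thread usage (sketch): the buffer is moved to the worker, not copied.
// worker.postMessage({ cells: flat.buffer }, [flat.buffer]);
```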

sharepoint online – Leave Balance Calculation For Leave Application

I have a leave request application in which a user selects a leave type and creates a leave request; a flow then runs and sends it to his/her manager for approval.

But now I have to add Total Leaves, Sick Leaves and Casual Leaves, their balances, and a restriction on submitting the request if the requested days are greater than the balance.

Like this:

The first time the user creates a request (whether for Annual Leave, Sick Leave or any other leave type):

The Total Annual Leave field will be 14 (by default) and the current balance is also 14.
Say an Annual Leave request is for 2 days:
the CurrAnnualLeaveBalance column becomes 14 - 2 = 12 days (current balance - days requested).
The next time there is an Annual Leave request:

Total Annual Leave should again be 14 (no change) and the current balance will be 12 days.
But if the next request is for 13 days, the form itself should calculate (current balance - requested days) > 0 and only proceed if that holds; otherwise show a message with the current balance and disable the save button.
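The balance rule above is simple enough to state as one small function. This JavaScript sketch only illustrates the logic; in SharePoint you would express it as column validation or a Power Apps formula, and the field names are assumptions:

```javascript
// Check a leave request against the current balance (field names hypothetical).
function validateLeaveRequest(currentBalance, daysRequested) {
    const remaining = currentBalance - daysRequested;
    if (remaining < 0) {
        // not enough balance: block saving and show the remaining days
        return { allowed: false, message: "Only " + currentBalance + " day(s) left." };
    }
    return { allowed: true, newBalance: remaining };
}
```

Note the sketch allows a request that uses up the full balance; tighten the comparison if a request must leave a positive balance.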

How can I achieve the above requirement? I also need to create a list of all employees with their leave and balance columns and an ID which matches up with my LeaveRequest list (where all the leave requests of all employees are saved).

I need to create a relation between the two lists to perform the calculation for that particular user.

Any help?

java – LibGDX How to adjust mouse aim angle calculation when screen is resized

I’m starting with a screen resolution of 1280 x 960, and that is the default resolution at which mouse aiming is calculated like this:

angleRad = (float) (Math.atan2(screenX - (screenWidth / 2), screenY - (screenHeight / 2)));
angle = (float) Math.toDegrees(angleRad - Math.PI / 2);
angle = Math.round(angle) <= 0 ? angle += 360 : angle;
if (Math.round(angle) == 360)
    angle = 0;

This all works fine. But when the game is resized to a resolution with a vastly different ratio from the default, for example 2560 x 1440 at full screen in my case, the aiming is off. The resize method looks like this:

@Override
public void resize(int width, int height) {
    game.getViewport().update(width, height);
    control.screenWidth = width;
    control.screenHeight = height;
}

Where control.screenWidth and control.screenHeight are used in the above angle calculation and as the set resolution in other areas. The angle calculation when resized is roughly correct but tends to get further and further off course the further the aim is away from the player.

If the player aims straight up, down, left or right (N, S, W or E basically) then the aim is dead on, but veers off as aiming goes away from the player towards a screen edge – no doubt due to the resolution ratio difference.

We have these ratio differences which no doubt affect the angle calculation:

1280x960 1.33 ratio
2560x1440 1.77 ratio

But I’m not sure how to take these ratios into account to adjust the angle calculation in my code.
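A common LibGDX fix is to convert the mouse position to world coordinates with viewport.unproject and do the atan2 in world space, so the scaling the viewport applies on resize cancels out. To illustrate why raw screen pixels go wrong, here is a JavaScript sketch of the “fit” (letterbox) mapping; worldW x worldH stands for the 1280 x 960 design resolution, and all names are mine:

```javascript
// Map a screen pixel into world units under a letterboxed "fit" scaling.
function screenToWorld(sx, sy, screenW, screenH, worldW, worldH) {
    const scale = Math.min(screenW / worldW, screenH / worldH);
    const viewW = worldW * scale, viewH = worldH * scale;              // drawn area in pixels
    const offX = (screenW - viewW) / 2, offY = (screenH - viewH) / 2; // black bars
    return { x: (sx - offX) / scale, y: (sy - offY) / scale };
}

// Aim angle in degrees, measured from the world centre (player assumed there).
function aimAngleDeg(sx, sy, screenW, screenH, worldW, worldH) {
    const w = screenToWorld(sx, sy, screenW, screenH, worldW, worldH);
    const deg = Math.atan2(w.y - worldH / 2, w.x - worldW / 2) * 180 / Math.PI;
    return ((deg % 360) + 360) % 360; // normalise to [0, 360)
}
```

Because x and y are divided by the same scale, the resulting angle no longer depends on the window’s aspect ratio.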

javascript – Understand how transaction size and fee calculation works in Caravan

Generally, the vsize and weight of transactions is determined by counting the byte length of the serialized transaction, and by weighting the different parts of the transactions according to the rules of segwit. As most transactions use standard scripts, the input and output sizes are known in advance and the transaction size can be easily estimated from the count of inputs and outputs (and their script types). Since ECDSA signatures can have a slightly variable size, the transaction weight is usually estimated by using the largest possible weight (safest), or the expected weight. Signature size can be homogenized by using signature grinding.
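The segwit accounting described above can be sketched in a few lines of JavaScript (names are mine; strippedSize and totalSize are the serialized byte lengths without and with witness data, per BIP 141):

```javascript
// weight = 3 * non-witness size + total size (BIP 141)
function txWeight(strippedSize, totalSize) {
    return 3 * strippedSize + totalSize;
}

// vsize is the weight divided by 4, rounded up
function txVsize(strippedSize, totalSize) {
    return Math.ceil(txWeight(strippedSize, totalSize) / 4);
}

// fee estimate: vsize times a fee rate in sat/vB, rounded up to whole satoshis
function txFee(vsizeVbytes, feeRateSatPerVb) {
    return Math.ceil(vsizeVbytes * feeRateSatPerVb);
}
```

For a legacy (non-segwit) transaction strippedSize equals totalSize, so vsize reduces to the raw byte length.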

According to its package.json, Caravan uses bitcoinjs-lib.
bitcoinjs-lib is an established JavaScript library for address and transaction related tooling.

The code for calculating the weight of a transaction in bitcoinjs-lib is found in the Transaction class, specifically in the Transaction.weight() function. After the weight has been determined, the fee is calculated from the weight and the fee rate. As far as I am aware, bitcoinjs-lib does not provide its own fee rate estimates, so Caravan would likely rely on one of the many publicly available APIs to get a fee rate estimate.

BitGo provides an open-source library called unspents that calculates the virtual size of transactions in dimensions.ts. It’s inspired by bitcoinjs-lib but has a special focus on multisig inputs. (Disclaimer: I have contributed code to unspents.)

There are similar libraries for other programming languages to provide tools for creating Bitcoin transactions and addresses.

strategy – Discount calculation pattern

I am implementing a discount calculation model, with one item per order.
I have a Product class:

public class Product
{
    public string Name { get; set; }
    public Size Size { get; set; }
    public decimal Price { get; set; }
}

I have an Order class as well:

public class Order
{
    public DateTime Date { get; set; }
    public string ProductName { get; set; }
    public Size Size { get; set; }
}

I have a few discounts:

  1. Competitive discount – chooses the lowest price among all products of the same size. For example, if a size S Latte is $2 and a size S Espresso is $3, then the size S Espresso will cost you $2, i.e. a $1 discount is applied.
  2. Quantity discount – for every x items bought in the time period, one is free. For example, I buy 2x Latte and the 3rd is free, and this discount can only be applied daily.

During my implementation I wanted to use the Strategy pattern, but there are a few nuances:

  1. A competitive discount does not depend on the time period, but a quantity discount does.
  2. A quantity discount depends on the selected quantity, a competitive one does not (however, I could pass the value 1 in the implementation so it would be applied to every item).

Solution:

IDiscount.cs

public interface IDiscount
{
    public bool IsApplicable(Product product, int itemsCountForDiscount);
    public decimal GetDiscountAmount(Product product);
}

CompetitiveDiscount.cs

public class CompetitiveDiscount : IDiscount
{
    Size Size { get; set; }
    decimal LowestProductPrice { get; set; }

    public CompetitiveDiscount(Size size)
    {
        Size = size;
        LowestProductPrice = GetLowestProductPriceBySize(size);
    }

    public decimal GetDiscountAmount(Product product)
    {
        return product.Price - LowestProductPrice;
    }

    public bool IsApplicable(Product product, int itemsCountForDiscount)
    {
        //returns whether the discount applies
    }

    private decimal GetLowestProductPriceBySize(Size size)
    {
        //returns the lowest price among products of this size
    }
}

QuantityDiscount.cs

public class QuantityDiscount: IDiscount
{
    string ProductName { get; set; }
    int ItemsCountForDiscount { get; set; }
    Period Period { get; set; } //enum for period

    public QuantityDiscount(string productName, int itemsCountForDiscount, Period period)
    {
        ProductName = productName;
        ItemsCountForDiscount = itemsCountForDiscount;
        Period = period;
    }

    public decimal GetDiscountAmount(Product product)
    {
        return product.Price;
    }

    public bool IsApplicable(Product product, int itemsCountForDiscount)
    {
        //returns if applicable
    }
}

My question would be: how can I make this model work if I need the Date in one of the discounts but not in the other? Is there any design pattern I could use, or any tips in general?
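One way to reconcile the mismatched parameters is to give every strategy the same context object (product, order date, relevant purchase history) and let each discount read only the fields it needs. A sketch in JavaScript rather than C# for brevity; all names are mine and the applicability checks are simplified:

```javascript
// Every discount receives the same context; each reads only what it needs.
class CompetitiveDiscount {
    constructor(lowestPriceBySize) { this.lowestPriceBySize = lowestPriceBySize; }
    isApplicable(ctx) { return ctx.product.price > this.lowestPriceBySize[ctx.product.size]; }
    amount(ctx) { return ctx.product.price - this.lowestPriceBySize[ctx.product.size]; }
}

class QuantityDiscount {
    constructor(productName, countForFree) {
        this.productName = productName;   // which product the deal applies to
        this.countForFree = countForFree; // buy this many, the next one is free
    }
    isApplicable(ctx) {
        // ctx.history holds the same-period (e.g. same-day) earlier orders:
        // only this strategy looks at the date-scoped data.
        const bought = ctx.history.filter(o => o.productName === this.productName).length;
        return ctx.product.name === this.productName
            && (bought + 1) % (this.countForFree + 1) === 0;
    }
    amount(ctx) { return ctx.product.price; }
}
```

The caller builds the context once per order (filtering the order history by the discount’s period), so no strategy needs its own signature.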

How are the CPU cores distributed to each kernel in a parallelized calculation?

Just want to make sure that I understand correctly before I ask my questions. I saw some people saying that some functions in Mathematica automatically use multiple cores (I am not referring to the ones we explicitly parallelize, but to ones like NIntegrate), so I think if I have 2 cores it will be faster than a single core. So my question is, if I have code like the following:
ParallelTable[NIntegrate[x, {x, 1, 3}], {loop, 1, 3}]

I think three kernels will be launched. If I have 4 cores, how are these four cores distributed among the kernels? (Since I think each kernel can itself use multiple cores, given the properties of NIntegrate.)

linear algebra – Matrix calculation and calculus in general form

Can Mathematica do the following? (Note that the dimensions of the matrices are given by general parameters like $m,n$.)

  1. Can it do matrix calculation in general form? For example, solving the equation $AX=B$ where $A,B$ are matrices and $X$ is a vector; the solution is $X=A^{-1}B$. The goal is basically to derive some equations in general form. If it cannot do that, any form of the Collect function would be useful; at least being able to collect the following expression with respect to $X$: $2X+AX$ (something like Collect$[2X+AX,X]$). The answer should be $(2+A)X$.
  2. Can Mathematica do matrix calculus as well? For example, taking the derivative with respect to $X$ of $X'AX$, for which the answer is $(A+A')X$.

Note that I prefer a general form. One might, for example, define the matrix entrywise as $A=(a_{11},a_{12},…)$, but then the inverse would be a messy expression.

If these are not possible, is there any other software that can do something like this?

dnd 5e – How to interpret “round up or down” in monster CR calculation

In the “Creating Quick Monster Stats” section of the DMG (p.274) we are given the procedure for determining the CR of a new DM-designed monster. Step 4 of that procedure (pp.274-275) tells us to calculate a defensive challenge rating, an offensive challenge rating, and then the

Average Challenge Rating. The monster’s final challenge rating is the average of its defensive and
offensive challenge ratings. Round the average up or down to the nearest challenge rating to determine your monster’s final challenge rating. For example, if the creature’s defensive challenge rating is 2 and its offensive rating is 3, its final rating is 3.

How are we to take the instruction to “round up or down”?

Does the DMG mean to say that it is DM’s choice (rather than defined procedure) whether to round up or down? And that once that decision is made you move to the nearer CR in that direction?

Or, is this passage saying that the rounding should take you to the nearest CR, regardless of whether this means rounding up or down? For example, if the DCR was 1/2 and the OCR was 2, the average CR would be 1.25, which we would round down to 1, because 1.25 is nearer to 1 than 2. But if the DCR was 3 and the OCR was 1/4, the average CR would be 1.625, and we would round up to 2 because 1.625 is nearer 2 than it is 1.

The example then given shows rounding up, but uses the confusing case of 2.5, which is equidistant from both 2 and 3 and so doesn’t let us parse which of the two possible meanings is intended.

There are a number of CR-calculation questions on this site, but I haven’t found this specific question. I understand that the final CR is by DM fiat, involves many other considerations, and is not a direct result of this specific procedure – I am just trying to understand what the actual procedure described in this passage is.

mathematical finance – CAIA PREPARATION: Bootstrapping: Calculation of the spot rate based on annual compounding, semi-annual compounding and continuous compounding

A six-month zero-coupon bond has a price of USD 97, while a 12-month 7.00 Percent annual coupon bond (paid semiannually) has a price of USD 100.50. Both bonds have a face value of USD 100. Find the 12-month spot rate based on annual compounding, semiannual compounding, and continuous compounding:

Step 1: 6-month cash flows are worth 97 percent of face value –> first coupon = 0.97 x USD 3.5 = USD 3.395
Step 2: the 12-month coupon bond minus the first coupon is worth USD 100.5 – USD 3.395 = USD 97.105
Step 3: face value including the final semiannual coupon = USD 100 + USD 3.5 = USD 103.5
Step 4: 12-month cash flows are worth USD 97.105 / USD 103.5 = 0.93821 of face value
Step 5: How do I calculate the spot rates based on annual compounding, semiannual compounding, and continuous compounding given the discount factor of 0.93821?

Solution: The one-year spot rate can be expressed as 3.98 percent annually compounded, 3.94 percent compounded semiannually, or 3.90 percent compounded continuously. How do I get these values?
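For the conversion itself: a one-year discount factor $d$ corresponds to each compounding convention as follows (general formulas only, a sketch of the standard relations; substitute $d = 0.93821$):

```latex
% annual compounding
d = \frac{1}{1 + r_{\text{annual}}} \;\Longrightarrow\; r_{\text{annual}} = \frac{1}{d} - 1
% semiannual compounding
d = \left(1 + \frac{r_{\text{semi}}}{2}\right)^{-2} \;\Longrightarrow\; r_{\text{semi}} = 2\left(d^{-1/2} - 1\right)
% continuous compounding
d = e^{-r_{\text{cont}}} \;\Longrightarrow\; r_{\text{cont}} = -\ln d
```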

Calculation of limit of integral

Calculate the limit

$$\lim_{n \to \infty} \int_{\frac{1}{n}}^1 \frac{n+x}{n(x^2+2)}\sin\left(\frac{n\pi-2x}{2n}\right)\,dx.$$