database – Job and Employee Performance Tracking with Product Safety

We are looking to connect a WordPress site to our database. An employee scans an order barcode (generated from our database) with an RFID scanner, and the WordPress site queries the order, pulling the product names on the order, along with each product's safety information, from specific SQL tables/columns into a form or table. When the employee is done with the order, they select "Done" in WordPress. The system should record a start time (when the order barcode was scanned), a finish time (when the order is marked complete), and an employee barcode scan so the employee's name is logged.

Looking for an easy way to do this without extensive programming knowledge.
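For reference, the workflow above maps onto a small data model. Below is a minimal sketch in Python/SQLite; every table, column, and barcode value is a hypothetical stand-in, and in WordPress the same queries would run through $wpdb inside a small plugin or a form-builder plugin:

```python
import sqlite3
from datetime import datetime

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Hypothetical minimal schema for the workflow described above.
cur.executescript("""
CREATE TABLE products (sku TEXT PRIMARY KEY, name TEXT, safety_info TEXT);
CREATE TABLE orders   (order_barcode TEXT, sku TEXT);
CREATE TABLE work_log (order_barcode TEXT, employee_barcode TEXT,
                       started_at TEXT, finished_at TEXT);
""")
cur.execute("INSERT INTO products VALUES ('SKU1', 'Solvent X', 'Wear gloves')")
cur.execute("INSERT INTO orders VALUES ('ORD-001', 'SKU1')")

def scan_order(order_barcode):
    """First scan: record the start time, return product names + safety info."""
    cur.execute("INSERT INTO work_log (order_barcode, started_at) VALUES (?, ?)",
                (order_barcode, datetime.now().isoformat()))
    return cur.execute(
        """SELECT p.name, p.safety_info
           FROM orders o JOIN products p ON p.sku = o.sku
           WHERE o.order_barcode = ?""", (order_barcode,)).fetchall()

def mark_done(order_barcode, employee_barcode):
    """'Done' click: record the finish time and the scanned employee badge."""
    cur.execute(
        """UPDATE work_log SET finished_at = ?, employee_barcode = ?
           WHERE order_barcode = ? AND finished_at IS NULL""",
        (datetime.now().isoformat(), employee_barcode, order_barcode))

print(scan_order("ORD-001"))   # product names + safety info for the form
mark_done("ORD-001", "EMP-42")
```

One practical note: most RFID/barcode scanners act as keyboards (they type the code and press Enter), so a plain form field can receive the scans; the two functions above then become the form's submit handlers.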

query performance – MySQL RIGHT JOIN on tables with hundreds of thousands of rows is very slow

The following join is taking far too long in MySQL, almost half a day and it’s still not finished:

INSERT INTO aaaaa SELECT * FROM chosenLang RIGHT JOIN chosenLOCC_P ON SUBSTRING_INDEX(SUBSTRING_INDEX(SUBSTRING_INDEX(, '.', 1), ' ', 1), '*', 1) LIKE CONCAT(chosenLOCC_P.code, chosenLOCC_P.reference)

chosenLang has 413,826 rows and 42 columns (38 longtext, 3 varchar, 1 int(11)); the call column is a varchar.
chosenLOCC_P has 117,077 rows and 23 columns (21 longtext, 2 int(10)).

What can I do to maximise the speed of this query?
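One common approach (a sketch of the idea, not a drop-in fix): MySQL cannot use an index when the join key is computed per row, so materialize the derived key into its own column once, index it, and join on plain equality (a LIKE with no wildcards is effectively equality). The illustration below uses SQLite from Python because it is easy to run; the table/column names are stand-ins for chosenLang/chosenLOCC_P, and the key extraction is simplified to "text before the first dot":

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE lang (call TEXT, payload TEXT)")
cur.execute("CREATE TABLE locc (code TEXT, reference TEXT)")
cur.executemany("INSERT INTO lang VALUES (?, ?)",
                [("AB123.some text", f"row {i}") for i in range(1000)])
cur.execute("INSERT INTO locc VALUES ('AB', '123')")

# Materialize the derived join key once, then index it, so the join
# becomes an indexed equality lookup instead of evaluating string
# functions against every row combination.
cur.execute("ALTER TABLE lang ADD COLUMN join_key TEXT")
cur.execute("UPDATE lang SET join_key = substr(call, 1, instr(call || '.', '.') - 1)")
cur.execute("CREATE INDEX ix_lang_join_key ON lang(join_key)")

plan = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT count(*) FROM locc JOIN lang "
    "ON lang.join_key = locc.code || locc.reference").fetchall()
print(plan)  # the plan searches lang via ix_lang_join_key
```

Two other things likely hurt here: SELECT * drags all 38 longtext columns through the join buffer (select only the columns you need), and in MySQL 5.7+ the derived key can be kept fresh automatically as an indexed generated column.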

metrics – how to measure software performance

I have to study software metrics for a competition. I've found a lot of material, but I'm really confused: could you suggest which metrics are used to measure performance in software systems, and when you should prefer one over another?

query performance – Indexing a date column in MySQL doesn’t work if a date function is used

I have a table with the following schema


id int
task_id int
completed_date datetime (INDEX IX_TBL_NAME_COMPLETED_DATE)

I run EXPLAIN on this query

    SELECT ...
    FROM table_name TBL
    WHERE TBL.completed_date BETWEEN date1 AND date2

This query runs and fetches records using the index on completed_date

While running the same query with a date function

    SELECT ...
    FROM table_name TBL
    WHERE CONVERT_TZ(TBL.completed_date, timezone1, timezone2) BETWEEN date1 AND date2

the index isn’t used, resulting in a slow query.

Can someone explain the reason behind this behavior, and how to optimize queries like this?
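The short version: the index stores completed_date values in sorted order, so the optimizer can seek a range on the bare column; once the column is wrapped in a function, the stored keys no longer correspond to the predicate and the server must evaluate the function for every row. The usual fix is to make the predicate sargable by applying the inverse conversion to the constants instead. The same effect is easy to reproduce in a few lines of SQLite (used here only because it runs anywhere; MySQL's EXPLAIN reports the analogous full scan):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE tasks (id INTEGER, task_id INTEGER, completed_date TEXT)")
cur.execute("CREATE INDEX ix_tasks_completed_date ON tasks(completed_date)")

def plan(where):
    rows = cur.execute("EXPLAIN QUERY PLAN SELECT id FROM tasks WHERE " + where)
    return " ".join(str(r) for r in rows.fetchall())

# 1) Bare column: a range SEARCH using the index.
p1 = plan("completed_date BETWEEN '2024-01-01' AND '2024-02-01'")

# 2) Function wrapped around the column: the index is unusable, full SCAN.
p2 = plan("datetime(completed_date, '+5 hours') "
          "BETWEEN '2024-01-01' AND '2024-02-01'")

# 3) Sargable rewrite: shift the constants instead of the column.
p3 = plan("completed_date BETWEEN datetime('2024-01-01', '-5 hours') "
          "AND datetime('2024-02-01', '-5 hours')")

print(p1)  # ...USING INDEX ix_tasks_completed_date...
print(p2)  # ...SCAN tasks...
print(p3)  # ...USING INDEX ix_tasks_completed_date...
```

For the MySQL query above, the equivalent rewrite is WHERE TBL.completed_date BETWEEN CONVERT_TZ(date1, timezone2, timezone1) AND CONVERT_TZ(date2, timezone2, timezone1) (inverting the conversion on the constants; beware DST edge cases). If the conversion can't be inverted, MySQL 5.7+ can index a generated column holding the converted value.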

performance tuning – Writing compiled functions as fast as Python’s Numba

I want to write code to simulate a damped oscillator that is just as fast as code written using Numba's @njit decorator. I've written the Mathematica code below, and it is 20-40x slower than the Python code written by YouTuber Jack of Some.

Here is the code from Jack of Some’s video on speeding up Python code with Numba; I’ve changed it a bit to run in just one jupyter cell:

import numpy as np
from numba import jit, njit, types, vectorize

@njit
def friction_fn(v, vt):
    if v > vt:
        return - v * 3
    else:
        return - vt * 3 * np.sign(v)

@njit
def simulate_spring_mass_funky_damper(x0, T=10, dt=0.0001, vt=1.0):
    times = np.arange(0, T, dt)
    positions = np.zeros_like(times)

    v = 0
    a = 0
    x = x0
    positions[0] = x0/x0

    for ii in range(len(times)):
        if ii == 0:
            continue
        t = times[ii]
        a = friction_fn(v, vt) - 100*x
        v = v + a*dt
        x = x + v*dt
        positions[ii] = x/x0

    return times, positions

_ = simulate_spring_mass_funky_damper(0.1)

%time _ = simulate_spring_mass_funky_damper(0.1)

The output is

CPU times: user 1.38 ms, sys: 337 µs, total: 1.72 ms
Wall time: 1.72 ms

vs my Mathematica code

ClearAll[friction, simulateSpring, jacksSpring];

friction = Compile[{{v, _Real}, {vt, _Real}},
   If[v > vt,
    -v*3.0,
    -vt*3.0*Sign[v]]];

simulateSpring =
  Compile[{{x0, _Real}, {t, _Real}, {dt, _Real}, {vt, _Real}},
   Module[{\[Tau], times, positions, v = 0.0, a = 0.0, x = x0},
    \[Tau] = t;
    times = Range[0.0, t, dt];
    positions = ConstantArray[0.0, Length@times];
    positions[[1]] = x0/x0;
    Do[
     \[Tau] = times[[i]];
     a = friction[v, vt] - 100*x;
     v = v + a*dt;
     x = x + v*dt;
     positions[[i]] = x/x0,
     {i, 2, Length@times}];
    {times, positions}]];

jacksSpring[x_] := simulateSpring[x, 10.0, 0.0001, 1.0];

Print["CPU time: ", Timing[jacksSpring[0.1]][[1]]*1000, " ms"]

from which we have

CPU time: 27.703 ms

performance tuning – Is it possible to interrupt a computation and give partial results?

Note: the solution here works for your second case: “Or alternatively the program starts a new computation (in my case very long lists) and the program ends after obtaining the first 1000 (or another number) of results”. For your first case (interrupting a computation) you can follow these instructions (depending on whether you use Windows/Mac and whether you’re in a notebook).

Suppose you generate a list of integers that grows by one element whenever a random real is above 0.2. You cannot know in advance how long it will take for your list to reach 1000 elements, but you can check its length every time it is updated and stop there.

For instance:

(*How many elements in your computation?*)
myDesiredLength = 1000;

(*What's the initial state/length of your computation?*)
myList = {1};

(*Update your computation according to whatever process you use, with While,
so it stops when you reach your desired length*)

i = 1;
While[
 Length[myList] <= myDesiredLength,
 If[RandomReal[] > 0.2,
  AppendTo[myList, myList[[-1]] + 1]];
 i++] // AbsoluteTiming

I don’t think you can, in general, ask a computer beforehand how much time a computation will take, or whether it will ever finish (this is the famous Halting problem). You could make estimates, but that depends on your particular computation (some calculations take linear time, others exponential time, etc.) and on what else your computer is doing. So I’d recommend giving your program a condition to stop beforehand, whether via Table, Do, While, For, etc. (with conditionals if needed).

ipsec – Performance comparison between AES256 GCM vs AES 256 SHA 256

I understand that with AES-GCM, ESP uses a single combined algorithm for both encryption and authentication, whereas AES-256 with SHA-256 uses AES for ESP encryption and SHA-256 as a separate authentication algorithm.

Could someone help clarify why AES-256-GCM gives better performance than AES-256 with SHA-256?

performance – Is it a bad idea to store all textures in linear color-space?

I am wondering about the performance behind color-space conversions for textures during reading.

If a project decides to go with a linear workflow, meaning all color manipulation (such as lighting) operates in linear color-space, is there any good reason not to store all texture assets in linear color-space?

My understanding is that when loading textures which are stored in sRGB, the GPU can do the conversion to linear for you very quickly. Is that cost completely negligible? Why not just store all the textures in that form?

The only reason I can think of against this is that you may want to store the textures in a color-space matching the one you intend to display in (if you’re going to display in sRGB, store the textures in sRGB). In that case, why not master the textures in sRGB but store them on disk already converted to linear color-space?

performance – Time to FCP changes based on the order of two tags

I was troubleshooting why one particular page took a full second longer to reach FCP than similar pages on mobile, according to Google’s PageSpeed Insights. The difference turned out to be the order of two tags at the beginning of the page.

3.4 seconds to FCP

<p>some paragraph</p>

versus 2.5 seconds to FCP

<p>opening paragraph</p>

In the first scenario, Google PageSpeed Insights reported a logo from the header as the FCP content. But in the second scenario, Google would report either text from the <p> or <h2> tag as the FCP content.

Why would the order of these two simple tags change the FCP content from text to an image header?

performance – Write/saved logs in WordPress

I'm writing a plugin that runs a query to import data from an external API into custom post types/taxonomies/media, etc.
The query can be really heavy.
I had to add logging to know exactly what happens.
For the moment I'm saving logs with update_option() to avoid losing them when I hit these kinds of errors:

  • 503 Service Unavailable errors
  • server timeout errors
  • PHP errors

But I got a new error:

mysqli_real_connect(): (HY000/2002): Connection refused

Maybe because there are too many actions hitting the database.
I changed define('DB_HOST', 'localhost'); to define('DB_HOST', ''); without any success.

If I use a log file instead, I will have to open/write/close the file all the time, and the algorithm will take longer. That could cause a server timeout.

What would be the best approach in my case?
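On the last concern: appending to a log file is much cheaper than it sounds, because each write is buffered by the OS, and it avoids the database entirely (which is exactly what you need when MySQL is refusing connections). A small Python sketch of the pattern; the PHP equivalent would be error_log($msg, 3, $path) or fopen() in 'a' mode, and the file name here is illustrative:

```python
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "import.log")

def log_line(msg):
    # Open-append-close per message: each line is already handed to the
    # OS when we return, so the log survives a fatal error mid-import.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(msg + "\n")

for i in range(1000):
    log_line(f"imported item {i}")

with open(log_path, encoding="utf-8") as f:
    print(sum(1 for _ in f))  # 1000 lines logged
```

A thousand appends like this complete in milliseconds, so the open/write/close overhead is very unlikely to be what pushes a heavy import past a server timeout.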